This is a quick summary of the first day of the EISTA 2010 conference. For me, the main focus of the day was the set of presentations on transforming curriculum. The first thing I noticed was that although the speakers (from Drexel University and the University of Texas in the USA, along with AEFIS) talked about "Assessment", they were discussing what we would call curriculum review; their award programme structures were very similar to ours, and they are experiencing the same problems with how they manage their curriculum. The presentations addressed this not from an institutional perspective but from a faculty one. They were also presenting the benefits of using AEFIS, who also presented and mentioned that they were looking for academic partners from across the world.
The first presentation talked about creating and sustaining change, linking reviews to student outcomes. The speakers noted the same barriers to change that have been encountered by Enable: culture, hierarchy, communication, and a lack of understanding of the reason for change. They discussed a faculty-led, top-down approach to change and how they managed communication to set up a new review process (the old one did not work; people never completed it as they felt it was too much effort, and a one-size-fits-all approach does not work). They then talked about their implementation of AEFIS to track the development and review of course curricula to improve the student experience. They now use the system as a central point for delivering resources to faculties.
The second presentation focused on how curriculum mapping, at first in Excel and then in AEFIS, helped them recognise that course units were often not being assessed in the way that best promotes the student experience, and that they had too many units at introductory level (14 programming units at the "introduce" level and none at the "emphasise", "reinforce" or "apply" levels, so the students could say "hello" but not hold a conversation in any of them). The mapping exercise involved taking 14 learning outcomes and breaking them down into 70 performance criteria (this breakdown took a couple of months of research, as they did not start from scratch; nor would they recommend that anyone try it from scratch). They said that the entire mapping process took them 16 months in Excel, but with AEFIS much of the work is automated and could be done in a day! We then did a nice paper exercise mapping curriculum outcomes against level of learning and means of assessment, manually identifying issues with the curriculum; the presenter pointed out that the software helps but can't do everything.
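To make the mapping exercise above a bit more concrete, here is a minimal sketch of the kind of check it supports: a curriculum map treated as a grid of units against learning outcomes, each cell recording the level at which the outcome is covered. The unit names, outcomes and code are my own invention for illustration, not anything from the presentation or from AEFIS; the real exercise used 14 learning outcomes broken into 70 performance criteria.

```python
# Toy curriculum map: each unit records the level ("introduce",
# "emphasise", "reinforce" or "apply") at which it covers an outcome.
# All names here are hypothetical examples.
curriculum = {
    "Programming 1": {"write code": "introduce", "test code": "introduce"},
    "Programming 2": {"write code": "introduce"},
    "Software Eng":  {"test code": "apply"},
}

def coverage(curriculum):
    """For each outcome, collect the set of levels at which it is covered."""
    levels_by_outcome = {}
    for unit, outcomes in curriculum.items():
        for outcome, level in outcomes.items():
            levels_by_outcome.setdefault(outcome, set()).add(level)
    return levels_by_outcome

def introduction_only(curriculum):
    """Outcomes that never progress beyond the introductory level --
    the 'can say hello but not hold a conversation' problem."""
    return sorted(outcome
                  for outcome, levels in coverage(curriculum).items()
                  if levels == {"introduce"})

print(introduction_only(curriculum))  # flags "write code"
```

Doing this by hand across 70 performance criteria is presumably what made the Excel version take 16 months.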
The next presentations I attended focused on managing the curriculum using analytics to improve and inform learning. The first stressed the importance of working from good-quality data, and of selecting the most important data from the avalanche that faculties and institutions collect. We were shown a model for developing learning analytics, focused on the individual student's learning experience, to support optimising the student's path through the course; this seemed to link to APEL and to allowing students to use what they already know to skip units.
The second discussed the development of an Instructional Decision Support System. This system is designed to link student characteristics (learning styles and so on), student performance, instructor characteristics, learning outcomes and instructional methods to inform faculty decisions (yes, that was straight off the handout!). After collecting data on students in different courses they were able to show, for example, that on one course the students were sequential learners while the teacher was using a global teaching style, and as a result the feedback from the course was not positive; by adjusting her teaching to fit sequential learners over two years, the feedback from the students became much more positive.
The final presentation for this area was around creating "EduApps": support documentation and applications for faculty staff so that they can focus on innovative teaching. These "apps" are not necessarily technological solutions, but can be guidance notes on using different technologies. They are developed based on faculty issues, and the faculties that raise the issues become owners of the solutions. The main criteria for the apps are that they should be focused on a particular problem, pose low risk for faculty adoption, and require neither a learning curve nor a significant investment of time or resources to use.
Despite the great promotional work done by the universities and by AEFIS themselves, I did have a concern that they spoke about being the one system for universities, fitting into different areas, rather than looking at how to support universities with different systems (along the lines of the work already happening in the UK around FSD and SOA). Although this could just be me, and unfortunately I could not follow up on it during lunch, as they did not provide any and I really need to eat at the moment!