Although there is a plethora of new assessment tools affecting eLearning, I was unable to leverage them because of the objectives of my course and the level of my learners.
The new assessment tools and methods are exciting! The learning activities that teachers can leverage for collaborative learning with technologies like Google Docs and wikis are amazing. However, these are often graded by humans using a rubric. Even more amazing are some of the ways technology itself can be leveraged in assessment. For example, the automated scoring of quizzes has made mastery testing almost commonplace in eLearning.
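The mastery-testing idea mentioned above can be sketched in a few lines. This is only an illustration, not the scoring logic of any particular platform; the question IDs, answer key, and the 80% mastery cutoff are all assumptions I have made for the example.

```python
# Hypothetical sketch of automated quiz scoring with a mastery cutoff.
# The threshold and the quiz data below are illustrative assumptions,
# not taken from any specific eLearning system.

MASTERY_THRESHOLD = 0.8  # assumed pass mark for "mastery"

def score_quiz(answer_key, responses):
    """Return (score, mastered) for one learner's set of responses."""
    correct = sum(
        1 for qid, answer in answer_key.items()
        if responses.get(qid) == answer
    )
    score = correct / len(answer_key)
    return score, score >= MASTERY_THRESHOLD

answer_key = {"q1": "b", "q2": "d", "q3": "a", "q4": "c", "q5": "b"}
responses  = {"q1": "b", "q2": "d", "q3": "a", "q4": "a", "q5": "b"}

score, mastered = score_quiz(answer_key, responses)
print(score, mastered)  # 0.8 True
```

Because the computer can score instantly, a learner can retake such a quiz until the threshold is reached, which is what makes mastery testing so easy to run at scale.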
With newer voice-recognition technologies there will be new assessment tools, especially in second-language learning. One of these is the Chinese Computerized Adaptive Listening Comprehension Test (CCALT). This test uses algorithms to adapt the difficulty level of the items to the individual student. Adaptability to the individual student is not new; it has been a feature of the GRE for years. But now it can be applied to language learning, where before a human ear was needed to evaluate, adjust, and adapt the learning.
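To make the adaptive idea concrete, here is a minimal sketch of the simplest possible scheme: an up/down "staircase" that raises item difficulty after a correct answer and lowers it after a miss. Real adaptive tests like the GRE or the CCALT use far more sophisticated item-response-theory models; the function name, the 1-10 difficulty scale, and the step size here are purely illustrative assumptions.

```python
# Hypothetical staircase sketch of adaptive item selection.
# Actual computer-adaptive tests use item-response-theory models;
# this only illustrates the "harder if right, easier if wrong" idea.

def next_difficulty(current, was_correct, lo=1, hi=10):
    """Pick the next item's difficulty level from the last response."""
    step = 1 if was_correct else -1
    return max(lo, min(hi, current + step))

level = 5  # start at a middling difficulty
for was_correct in [True, True, False, True]:
    level = next_difficulty(level, was_correct)
print(level)  # 7
```

Over many items, the difficulty level hovers around the point where the learner answers about half the items correctly, which is the level the test reports.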
Unfortunately, the course I am developing is for pre-learners, so leveraging these tools would be inappropriate and might scare off my learning audience. Having taught second-language learners for 10 years, I have to be sensitive to second-language anxiety. This is a situation-specific anxiety that can affect people who are not usually nervous in other situations. Using computer-adaptive assessment would be appropriate if I were aiming my course at a general population and needed to weed out false beginners and advanced students. However, my audience is rural Kentucky university students, so introducing them to computer-adaptive assessment may increase their anxiety about an already foreign topic.
It is more important for a beginner to feel safe, especially an adult learner. The human ego is fragile, and new learning is change. If I were conducting an ILT, I could adapt my tone of teaching and level of assessment more accurately to the audience, since I have tested second-language learners for years and can usually tell within the first minute a learner's second-language competency and comfort.
Another aspect of ILT testing is human kindness, something machine learning cannot yet imitate in a believable fashion. An instructor can adjust not only to the specific audience, but can do so in a manner that respects the human dignity of learners who may seem to react poorly to the testing situation because of anxiety. Many times I have put on the kind-instructor mask while a testee struggled and stuttered through a few words, then reverted to their native language. The testee already had enough shame and grief without my attaching more anxiety. The supportive and encouraging smile of a human face may invite further learning, something a pop-up answer on a screen cannot do.
Anyhow, I have to fall back on the rather boring, standard identification and matching questions for my assessment, and trust that the computerized feedback still evaluates learners thoroughly without scaring them off.
Oh well, I am looking forward to developing higher-level courses where more complex assessment methods are appropriate!