Evaluating concept worker performance is an interesting challenge:
- No right answer - Most often there is no single right answer. For questions like which authoring tool or LMS to use in a particular situation, you can't possibly get it exactly right. You are always trying to arrive at a reasonably correct answer given all the other constraints (the amount of time you can spend finding an answer, etc.).
- Evaluator knowledge limit - In most cases, the person doing the performance evaluation knows less about the subject than the performer. They can't directly judge the answer, though they may be able to sense when an answer is possibly not correct.
So instead of judging the answer itself, the evaluation tends to rest on:
- Process - Did they go through a reasonable process to arrive at their conclusions?
- Reasonable - Are their conclusions reasonable in your opinion (if you can formulate one)?
- Compare - If you compared what they did to what you would expect from other similar performers, would they have arrived at a similar result?
What I've been saying in recent presentations is that relying on a Google search as your primary mechanism for finding answers leaves you open to criticism. Instead, having a conversation with a peer gives you feedback on:
- Was my process appropriate?
- Is my answer reasonable?
- How does my answer compare?
There's a beauty in this!
But it does require a better ability to reach into your networks for help.