Does anyone have any information or experience with employees performing proactive self-audits (evaluations) in office environments? We are investigating the validity and effectiveness of these types of evaluations.
The employees would be provided with a checklist to fill out and possibly a PowerPoint training file to assist them in the evaluation.
We have an online evaluation for computer users. It consists of about 20 questions, and the results are customized for each user according to the responses they give. The best part is that the results are very educational – explaining what is good about what they do, what needs improving, and how to improve it.
The site we use is product neutral – which is good in that it does not say things like “you MUST get this device” – so management likes that!
I have found it very helpful; often it fixes the issue without any intervention from me – and I am one person for 11,000 employees!
There is also a module they can take that expands on the basics and goes into detail on aspects of computer ergonomics, should the employee want to learn more.
The company is Syntrio, out of California.
TLutz, the research isn’t great, but I think it also depends a great deal on the larger context of how the self-audit data is collected and used.
A couple of studies on estimating postures…
Lowe, B. D. (2004) Accuracy and validity of observational estimates of wrist and forearm posture. Ergonomics. 47, 5, 527-554. Abstract: Numerous observational methods for analysis of working posture of the wrist/forearm have been reported in the literature yet few of these methods have been validated for the accuracy of their posture classification. The present study evaluated the accuracy of estimates of working posture made by 28 experienced ergonomists using methods of scaling upper limb posture typical of those reported in the literature. Observational estimates of wrist/forearm posture of four jobs presented on video-recording were compared with posture levels measured directly with an electrogoniometer system. Ergonomists using a visual analogue scale tended to underestimate peak and average wrist extension with mean errors of -29.4% and -10.5% of the joint ROM, respectively (p<0.05). While estimates of wrist flexion, pronation and supination resulted in less bias, variability in observer error was large for all wrist postures. The probability of an analyst misclassifying the most frequently occurring posture using a three- and a six-category scale was 54 and 70%, respectively. The probability of misclassifying peak posture was 22 and 61% using a three- and a six-category scale respectively. This suggests a trade-off between the degree of precision afforded by the categorical scale and the likelihood of posture misclassification. Estimates of the temporal distribution of posture among the categories appeared to be biased towards more neutral postures than were measured for the jobs. This indicated the possibility of a trend towards underestimation of posture duration severity by the ergonomists.
Lowe, B. D. (2004) Accuracy and validity of observational estimates of shoulder and elbow posture. Appl Ergon. 35, 2, 159-171. Abstract: This study investigated the accuracy of video-based observational posture analysis for the elbow and shoulder. Posture analyses were conducted by 28 ergonomists for four jobs presented on a traditional VHS format video recording. Estimates of posture from the observational-based methods were compared with values measured directly with an optical motion capture system. Ergonomists used categorical posture scales and a continuous visual analog scale to estimate the peak and most frequently occurring or average posture for each job. Use of a three-category scale resulted in misclassifications of peak and most frequently occurring elbow and shoulder posture with a probability averaging 30.1%. With the six-category posture scale this average probability of misclassification increased to 64.9%. Using a continuous visual analog scale peak shoulder elevation was the only posture for which the average error among ergonomists’ estimates was significantly different from zero (p<0.05). Correlations between the estimated postures and measured postures were higher and statistically significant (p<0.05) for elbow flexion and shoulder elevation (r2 between 0.45 and 0.66) but were considerably lower and not significant (r2 between 0.03 and 0.18) for the peak and average horizontal shoulder abduction. Ergonomists' estimates of the temporal distribution of shoulder posture, indicating the duration severity of the posture, appeared to be biased such that the percentage of the cycle time in each posture category was estimated as more uniformly distributed than the measured values indicated.
Sidebar – I don’t think you can get anywhere with a straight list of questions. It’s important to have questions that branch out into new questions reflecting that individual’s prior responses, so you don’t waste other users’ time when a topic isn’t relevant. If, for example, they say they have glare on the screen – a leading predictor of pain and discomfort at any of the body parts – it opens into questions about whether they have a window (and then where it is relative to them, whether they have blinds, and whether the blinds are open), whether they have a task light (do they use it, and how is it positioned), about their vision (a ton of questions open up there), and so on. You end up with a large number of variables, accounted for in a unique way, that you can analyze as a cluster of factors for each individual.

This is a big deal to write. The one I developed in the early 90s (not available on the market) had many hundreds of pages of code, but it accounted for a large range of potential risk factors that interact with each other and would not make sense by themselves. I don’t have confidence in users responding to straight questions that aren’t analyzed as part of a pattern. Again, I am not promoting a product here – I don’t have one to offer – I’m just giving my own take on what is needed to get meaningful information from novice users.
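For what the branching idea looks like in practice, here is a minimal sketch of a question tree where follow-ups open only when a prior answer makes them relevant. All question IDs, wording, and follow-up links are hypothetical illustrations of the glare example above, not taken from any actual product:

```python
# Hypothetical branching self-audit questionnaire: each question lists
# the follow-up questions that a given answer unlocks.
QUESTIONS = {
    "glare":      {"text": "Do you have glare on your screen?",
                   "followups": {"yes": ["window", "task_light"]}},
    "window":     {"text": "Is there a window near your workstation?",
                   "followups": {"yes": ["blinds"]}},
    "blinds":     {"text": "Are the blinds open while you work?",
                   "followups": {}},
    "task_light": {"text": "Do you use a task light?",
                   "followups": {}},
}

def run_audit(answers, start="glare"):
    """Walk the question tree breadth-first, asking follow-ups only
    when a prior answer makes them relevant. Returns the questions
    actually asked and the responses, as a cluster of factors that
    can be analyzed together for this individual."""
    asked, responses = [], {}
    queue = [start]
    while queue:
        qid = queue.pop(0)
        asked.append(qid)
        # Canned answers stand in for interactive user input here.
        answer = answers.get(qid, "no")
        responses[qid] = answer
        queue.extend(QUESTIONS[qid]["followups"].get(answer, []))
    return asked, responses
```

A user who reports glare and a window gets the full chain of follow-ups; a user without glare answers one question and moves on, which is exactly the time-saving behavior described above.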
Rani Lueder, CPE
At the University of California San Francisco/Berkeley Ergonomics Program, we developed a validated office ergonomics checklist that focused on outcomes instead of workstation features. In fact, this and other training materials we provided were the basis for the Syntrio materials that Natalie Campaneria mentioned in her reply to your post. We also developed a guidebook keyed to the checklist, to lead the user in reducing risk factors and improving workstation ergonomics. The checklist and guidebook were later tested with a large employer and used as a self-evaluation instrument by each participating employee, or by a co-worker, or by an Ergonomics Coordinator with training and experience. Workstations were evaluated by an independent ergonomist before and after interventions were made. The results indicated that the checklist and guidebook were effective in making significant improvements in postures and workstation conditions only when administered by an Ergonomics Coordinator, not when used as a self-assessment or by an untrained co-worker.
I presented these findings at the HFES conference in 2002: Janowitz, et al., Validation and field testing of an ergonomic computer use checklist and guidebook, Proceedings of the 46th Annual Conference of the Human Factors and Ergonomics Society, Baltimore, MD, 2002. I think all of us know that most people are unaware of their posture while they work. The bad news, according to our findings, is that although people can tell us important information about the problems they’re having at work, they are unlikely to be good self-evaluators, and neither they nor their ‘buddies’ at work are effective at selecting interventions or making adjustments that would help, even when provided with a pretty good guidebook, if I do say so myself.