CatchBob analysis documentation

A rough list of how I am analysing CatchBob data:

Excel files:
- results.xls: path, time, refresh... (client parsing)
- Results_map_annotations.xls: map annotations on the TabletPC
- results_drawings.xls: drawn paths
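A possible way to load those files in R (a sketch; the readxl package and the exact sheet layouts are assumptions on my part):

    library(readxl)  # assumed package; read_excel() reads .xls files

    results     <- read_excel("results.xls")                  # path, time, refresh...
    annotations <- read_excel("Results_map_annotations.xls")  # map annotations
    drawings    <- read_excel("results_drawings.xls")         # drawn paths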

1) Map annotations

Absolute number of messages in both conditions: total, position, direction, signal, strategy, off-task, acknowledgement and correction. Variance analysis to check the differences (plus normality and homoscedasticity checks).

Percentage of messages (number/total) in both conditions: total, position, direction, signal, strategy, off-task, acknowledgement and correction. Variance analysis to check the differences (plus normality and homoscedasticity checks).

Frequency of messages over time (messages/time) in both conditions: total, position, direction, signal, strategy, off-task, acknowledgement and correction. Variance analysis to check the differences (plus normality and homoscedasticity checks).
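A minimal R sketch of one of these variance analyses, assuming a data frame msgs with one row per group, a condition factor (AT/NoAT) and one count column per message category (all column names are hypothetical):

    # normality and homoscedasticity checks before the ANOVA
    shapiro.test(msgs$position)                        # normality of the counts
    bartlett.test(position ~ condition, data = msgs)   # equal variances across conditions

    # one-way variance analysis on the position messages
    fit <- aov(position ~ condition, data = msgs)
    summary(fit)

The same lines would be repeated for the absolute counts, the percentages and the frequencies.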

***** Correlation between number of position messages and number of direction messages *****
***** Split the groups post hoc in two (50/50, or 40/20/40 depending on the distribution) and check whether the groups who annotate a lot make fewer errors. If yes: annotation is good; if not: the awareness tool puts people to sleep! *****
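A hedged sketch of that post-hoc split, here as a 50/50 median split on total annotations (column names hypothetical):

    # high vs low annotators, then compare their error counts
    msgs$annot_level <- ifelse(msgs$total >= median(msgs$total), "high", "low")
    t.test(errors ~ annot_level, data = msgs)

A Wilcoxon test (wilcox.test) would be the fallback if the error counts are not normal.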

2) Time analysis

Time = time spent finding Bob (until the "YOU WON" message).

Histogram(time), normality. Time spent in both conditions. Variance analysis: NoAT groups seemed to have more time to write messages (and thus more position messages).
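In R this could look like the following (the data frame d and its column names are assumptions):

    hist(d$time)                               # distribution of solving times
    shapiro.test(d$time)                       # normality
    summary(aov(time ~ condition, data = d))   # AT vs NoAT comparison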

3) Errors analysis

Errors = sum of the number of errors made by A when drawing B's and C's paths. Histogram(errors), normality. Errors made in both conditions. Variance analysis: NoAT groups make fewer errors: anova(awareness~errors). Covariance analysis: try to include time in the model: anova(awareness~errors * time). The comparison of those two models (with and without time taken into account) is not significant.
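A sketch of that model comparison, assuming columns errors, condition and time in a data frame d:

    m1 <- lm(errors ~ condition, data = d)          # awareness condition alone
    m2 <- lm(errors ~ condition * time, data = d)   # adding time as a covariate
    anova(m1, m2)                                   # does time improve the fit?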

***** Try a model adding workload, disconnection, path length or Bob's position *****

4) Path length

Our real dependent variable. Path length = sum of the individual path lengths within a group. Histogram(length), normality. Length in both conditions. Variance analysis: anova(awareness~length). !!!!!!! Multilevel modelling !!!!!!! Analysis at the group level: data are not independent. ***** Try to create a new model with covariates: time, workload, Bob's position, disconnection *****
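Since individual path lengths are nested within groups, a multilevel sketch could be (the lme4 package and column names are assumptions):

    library(lme4)

    # random intercept per group to account for non-independence
    m <- lmer(length ~ condition + (1 | group), data = d)
    summary(m)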

5) Workload

Workload = NASA TLX evaluation. Histogram(workload), normality. Workload in both conditions. Variance analysis: anova(awareness~workload). !!!!!!! Multilevel modelling !!!!!!! Analysis at the group level: data are not independent. ***** Try to create a new model with covariates: time, length, Bob's position, disconnection *****
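For the covariate idea, an ANCOVA sketch (the choice of covariates and the column names are hypothetical):

    m <- lm(workload ~ condition + time + length, data = d)
    anova(m)   # does condition still matter once time and length are in the model?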

6) Verbalization after the game ...

7) Various correlations

Pearson's, Spearman's or Kendall's: it will often be Spearman/Kendall since the data are not linear.

- correlation between number of position messages and number of direction messages
- number of messages (total, position, direction...) and path length (group/individual?)
- errors and number of messages (total, position...) for the two conditions and for each
- errors and path length (more errors when the path is longer?)
- number of refreshes (AT) and number of errors
- intragroup correlation of number of messages
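These would all follow the same pattern in R; a sketch with hypothetical column names:

    # rank correlation between position and direction messages
    cor.test(d$position_msgs, d$direction_msgs, method = "spearman")

    # errors against path length (method = "kendall" for Kendall's tau)
    cor.test(d$errors, d$path_length, method = "spearman")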

8) Division of labor

- indexes: task division, backtracking, overlap
- Do the teams with synchronous awareness tools develop different problem-solving strategies than those with asynchronous awareness tools? A different division of labor?

9) Other questions

- How does the frequency of coordination acts (explicit or implicit) vary over time? Are these requests more frequent at the beginning of a task, or do they increase at specific phases of the problem-solving strategy? (see the sketch after this list)
- intragroup correlation of number of refreshes (AT)
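For the over-time question, a sketch binning message timestamps (the vector name and bin width are assumptions):

    # one-minute bins over the course of a trial
    bins <- cut(msg_time, breaks = seq(0, max(msg_time) + 60, by = 60))
    barplot(table(bins))   # frequency of coordination acts per minute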

10) Other techniques to explore??

- Sequential analysis: I first need to find some literature to build my models
- Multilevel modeling
- Cluster analysis (see the sketch below)
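For the cluster analysis, a first exploratory pass could be (group-level measures; column names hypothetical):

    # standardize the measures, then hierarchical clustering of the groups
    x  <- scale(d[, c("time", "errors", "path_length", "workload")])
    hc <- hclust(dist(x))
    plot(hc)   # dendrogram: do AT and NoAT groups separate?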