
Approaching Adaptation Smartness Systematically

Since EC-TEL it has been pretty quiet here. But never mind, it wasn't that quiet under the surface. I have been pretty busy polishing and breaking my smart indicator prototype and sharpening the evaluation design for the follow-up studies of my research. In this post I discuss some of the preliminary results that popped up while I was working on the evaluation design.

One of the problems of researching adaptive ICT systems is the evaluation of these systems. There is a widely used evaluation approach that checks for differences in user behavior or user perception when users are confronted with an adaptive system. This evaluation often combines questionnaires or interviews with an analysis of system access. The approach is suitable for analyzing the effects of adaptation strategies, particularly if the evaluation context covers a relatively short timespan.

The challenge of my research is not analyzing the effects of one adaptation strategy, but rather understanding the principles of providing meaningful information under different circumstances and changing conditions. In other words, I want to understand why (and when) to adapt a system's behavior. The conventional "look for effects" approach will not offer sufficient answers to this question. Therefore I chose to follow the principles of system dynamics for the evaluation design.

In my previous articles for EC-TEL and IJCEELL I already proposed a meta system-model that helps to understand the relation between user behavior and the supportive information provided by a system. One drawback of this model is that it is basically a component model that describes the underlying architecture and the data flow of my prototype system. While working on my evaluation design, I realized that it would not help me to explain the effects that I expect to observe in the experiments. Therefore, I reshaped the model for the evaluation. The result shows the information that affects the adaptation strategy more clearly, while the technical components became almost invisible.

Modeling adaptation

In my previous articles I referred to Butler & Winne and their analysis of the research on self-regulated learning (SRL). For me the concepts of SRL are suitable for modeling the user side of human-computer interaction, because SRL research investigates the principles of how people control their behavior and alter their habits without external intervention. These concepts are very useful in situations where instruction and structured guidance are not possible. The core insight from SRL is that users draw on external information to validate their previous behavior in order to verify or change their (learning) tactics and strategies.

For the system side, the results from SRL basically mean that it is necessary to model the information that is meaningful to people for judging their own behavior.
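To illustrate what such a model could look like on the system side, here is a minimal sketch in TypeScript. It is not the data model of my prototype; the names and fields are purely illustrative assumptions.

    // Illustrative sketch only: one possible way to describe the information an
    // indicator offers for self-monitoring. Names and fields are assumptions,
    // not the prototype's actual data model.

    type DataType = "own-actions" | "peer-actions" | "interests";

    interface Indicator {
      id: string;            // e.g. "action-counter"
      dataTypes: DataType[]; // what kind of data the indicator visualizes
      values: number[];      // the data points recently shown to the user
      visible: boolean;      // the current adaptation decision
    }

    // Example: the action counter described in these terms.
    const actionCounter: Indicator = {
      id: "action-counter",
      dataTypes: ["own-actions"],
      values: [12, 12, 13, 13],
      visible: true,
    };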

It is important that the model can be used to argue under which conditions information is shown to or hidden from a user. Regarding context adaptation of indicators, I realized that this is more complex than I expected: cases where information is initially hidden and later shown to the user can easily be modeled by applying the concept of information overload, while the same concept does not fully explain why information that is initially shown to a user is hidden in later phases of a process, or, for that matter, in different contexts.

Two types of adaptation strategies

While modeling I realized that my current prototype implements two different adaptation strategies.

The first strategy basically keeps using the same information, but enriches it and alters the visualization over time. This strategy is used for the action counter and the performance bar-chart. While in the first phase the action counter simply shows the actions of a user in my prototype system, the performance bar-chart adds social information to the user's action information. In this case the social information is nothing but the action information of the peer users. Showing both indicators to a user does not make much sense, because the underlying data of both indicators is of the same type. The action counter will therefore not provide any additional information that is not already available in the performance indicator. Hiding the action counter reduces the information noise in the user interface and thus the information load for the user.
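A minimal sketch of this first strategy could look as follows. The TypeScript below is purely illustrative: the rule, the field names, and the example data are my assumptions rather than the actual prototype implementation; it only captures the idea that an indicator whose data types are fully covered by a richer visible indicator can be hidden.

    // Illustrative sketch of the overload rule, not the prototype's code.
    // An indicator is hidden when another visible indicator covers the same
    // data types and adds more on top of them.

    interface IndicatorInfo {
      id: string;
      dataTypes: string[]; // types of the underlying data the indicator shows
      visible: boolean;
    }

    function reduceOverload(indicators: IndicatorInfo[]): IndicatorInfo[] {
      return indicators.map((ind) => {
        const subsumed = indicators.some(
          (other) =>
            other.id !== ind.id &&
            other.visible &&
            other.dataTypes.length > ind.dataTypes.length &&
            ind.dataTypes.every((t) => other.dataTypes.includes(t))
        );
        return subsumed ? { ...ind, visible: false } : ind;
      });
    }

    // Example: the performance bar-chart shows the user's actions plus the
    // peers' actions, so the plain action counter gets hidden.
    reduceOverload([
      { id: "action-counter", dataTypes: ["own-actions"], visible: true },
      { id: "performance-chart", dataTypes: ["own-actions", "peer-actions"], visible: true },
    ]);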

The second strategy is related to the variance of the information that is displayed to the user. It can be used to explain why two or more indicators are provided to a user when each indicator shows different data. It also explains why the action counter should be hidden even if no performance indicator is available: if an indicator shows little variance in its information, it cannot be used for validating the user's interactions, and the user will rely on other indicators for that purpose. In the prototype, a highly active user will hardly see changes in the action counter, whereas the tag cloud/interest indicator offers more opportunities to assess the user's actions. In these cases the action counter can be hidden from the user because of the lack of variance in the visualized data. I like to refer to this as information underflow.
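The underflow idea can also be sketched as a simple rule: compute the variance of the values an indicator has recently displayed and hide it if the variance stays below some threshold. Again, this is an illustrative TypeScript sketch with assumed names and an arbitrary threshold, not the prototype's actual logic.

    // Illustrative sketch of the underflow rule, with an assumed threshold.
    // An indicator whose recently displayed values barely change offers the
    // user little to validate their actions against and can be hidden.

    function variance(values: number[]): number {
      if (values.length < 2) return 0;
      const mean = values.reduce((a, b) => a + b, 0) / values.length;
      return values.reduce((s, v) => s + (v - mean) ** 2, 0) / (values.length - 1);
    }

    function hasEnoughVariance(recentValues: number[], threshold = 1.0): boolean {
      return variance(recentValues) >= threshold;
    }

    // For a highly active user the action counter barely moves, while the
    // interest/tag-cloud data keeps changing.
    hasEnoughVariance([120, 121, 121, 122]); // false -> candidate for hiding
    hasEnoughVariance([3, 9, 1, 14]);        // true  -> keep visible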

Last words

It is important to recall that these ideas rely on the system model that underlies my evaluation design, not on empirical data. The evaluation will focus on testing these concepts.