Botha, J., van der Westhuizen, D., & De Swardt, E. (2005, July 30). Towards appropriate methodologies to research interactive learning: Using a design experiment to assess a learning programme for complex thinking development. International Journal of Education and Development using ICT [Online], 1(2). Available: http://ijedict.dec.uwi.edu/viewarticle.php?id=43.



Towards Appropriate Methodologies to Research Interactive Learning: Using a Design Experiment to Assess a Learning Programme for Complex Thinking

 
Jean Botha, Duan van der Westhuizen and Estelle De Swardt
University of Johannesburg, South Africa
 
ABSTRACT

In this paper, we advance that there are several issues pertaining to the design of research in instructional technology. It is our view that much of the current research taking place may suffer from poor quality, inappropriate design and a lack of social responsibility. We contend that the most appropriate way to research the effectiveness of online learning is the use of design experiments. We present an exemplar of a recent design experiment that was completed at a university in Johannesburg, South Africa. During this study, the researchers explored the extent to which complex thinking skills can be facilitated in online learning environments. A design experiment was engineered in which a learning programme was designed and developed for Masters students. Specific instructional methodologies were employed in the learning programme, and activities were designed to facilitate the use of complex thinking skills. The extent to which these skills were evident in student online activities was detected by using the comprehensive checklists and rubrics that were generated. A rigorous framework for analysis was developed. The findings were integrated with theoretical perspectives on instructional strategies for complex thinking development, yielding new, unique criteria for online learning design. We are of the view that the findings of our study are 'true', as the appropriate methodology was used to conduct it.

Keywords: Design experiments, complex thinking, instructional methodologies, online learning, unique criteria.

INTRODUCTION: QUESTIONING INSTRUCTIONAL TECHNOLOGY RESEARCH

The decision whether to use some form of instructional technology in education should be based on the question: Is the use of instructional technology likely to improve education? (Mitchell 1997; Reeves 1995). The way in which scholars, lecturers or teachers attempt to establish whether such interventions are indeed beneficial is through a process of scientific research in the form of case studies, course evaluations or experimental studies. In fact, the literature abounds with reports in which the benefits of instructional technology interventions are espoused. Lockard and Abrams (2001) list many research studies in which the use of instructional technology was found to yield gains in subject-matter achievement, learning retention and speed, attitudes towards learning and problem solving, as well as benefits for students who are at risk. We assert in this paper that the research results pertaining to instructional technology research may be flawed due to poor quality research and inappropriate research designs. We further assert that an academic system that rewards research that is not socially responsible will not produce relevant and high quality research. We will argue that design experiments (development research) that are executed rigorously will address the concerns that we have about instructional technology research.

There is significant evidence that research results pertaining to the benefits of using interactive technologies to support teaching and learning are questionable, often because of a lack of rigour during the execution of the research. According to Reeves (2000), the "quality of published research in Instructional Technology is generally poor". Reeves (1995) launches a scathing attack on research done in instructional technology, and claims that most published research articles are "pseudoscience" (also see Mitchell 1997). After an analysis of five articles published in refereed journals, he claims that these articles have specification errors, few links to robust theory, inadequate literature reviews and treatment implementation, measurement flaws, inconsequential outcome measures, inadequate sample sizes, inappropriate statistical analyses and meaningless discussions of results. Dillon and Gabbard (1998), who reviewed 500 papers for an article they prepared for the journal Review of Educational Research, found that only 30 of these met the minimal criteria for good scientific studies for inclusion in their review. Reeves, Mitchell and Stokes are not the only dissenting voices in the research community to have expressed concern about the state of instructional technology research. In fact, Reeves (1995) refers to authors like Mielke (1968), Lumsdaine (1963), Schramm (1977), Clark (1983) and Salomon (1991), who were the forerunners in questioning research practice in instructional technology. In his seminal work, Clark (1983, 1994) asserts that media (and therefore instructional technology) have no influence on learning, and he criticises the research in this field. He explains that meta-analytic reviews report an approximate 20% increase in evaluation scores following the use of instructional technology in comparison with conventional forms of teaching. However, he contends that it is the instructional methodology underpinning these interventions that accounts for the learning gains in those research reports. The research studies that examined the use and effectiveness of the media therefore failed to isolate the real reasons for the learning gains that were demonstrated. The publication of Clark's initial work sparked the well-reported Clark-Kozma debate, wherein the two opposing sides drew the proverbial line in the sand about the value of media (instructional technology) for learning. A primary thrust in this debate was the selection of appropriate methodologies for researching instructional technology.

The root of the problem may be found in the 'quantitative-qualitative' paradigm debate. Hoepfl (1997) explains that the debate over the relative value of qualitative and quantitative inquiry has been raging for a long time. Quantitative research is based on an experimental design in which a hypothesis is tested and from which generalisations can be drawn. Reeves and Hedberg (2003) describe this type of research as "analytic-empirical-positivist-quantitative". Many researchers claim that positivist, experimental designs are the only appropriate ones for doing valid and reliable research. In fact, Reeves (2000) found that most published research in leading journals for education was situated within the quantitative, positivist paradigm. Qualitative research, on the other hand, does not rely on numerical or statistical data and attempts to understand phenomena in "natural settings" (Hoepfl 1997). Strauss and Corbin (1990, p.17) define qualitative research as producing results that are not "arrived at by means of statistical procedures or other means of quantification". Many scholars are of the opinion that research in education should be based on qualitative data. In addition to these two paradigms, Soltis (1992) explains that research can be situated within a 'critical theory' paradigm. Critical research aims to critique the social order in order to bring about change, and examines restrictive and alienating conditions. It questions the maintenance of the status quo and seeks to bring about cultural, political and social change.

The question is which of these paradigms (or combination of paradigms) is suitable for researching instructional technology. Roblyer and Knezek (2003) claim that research findings that confirm the benefits of modern technologies for learning may "simply not hold true", as much of this research was done using behaviourist-cognitivist approaches to assessing learning benefits. Alternatively, in these research projects, comparisons were made between technology-mediated learning environments and traditional face-to-face course deliveries using experimental or quasi-experimental methodologies. Some researchers, like Tellez (1993), Hoepfl (1997) and Reeves (1995, 2000), claim that it is not possible to conduct true experimental designs in social science inquiries. Because researchers are often faced with intact groups (specific classes or groups) that cannot be divided up for random assignment into experimental and control groups, true experimental designs are often simply not viable. In this regard, the question further needs to be asked what the aim of a research project is. Reeves and Hedberg (2003) point out that the reliance on experimental methodologies stems from the need to "prove" the effectiveness of a particular educational intervention; in other words, the research has a summative evaluation dimension. Many of these research projects are case studies. Case studies appear to exemplify the "Tylerian Objectives-Based Evaluation Model", which would judge a programme to be good if the set objectives were achieved (Reeves & Hedberg 2003). Case studies appear to be underpinned by 'after-the-fact' methodologies, and may seem wasteful if some contribution to theory is not made. Suitably engineered educational online interventions that are meticulously designed and situated within specific educational theory may therefore be of more value to learners. Additionally, when the impact or effectiveness of such interventions is scrutinised and researched, appropriate methodologies need to be utilised that go beyond the mere exploration of cases. Cunningham (in Willis 1994) claims that it is impossible to produce 'findings' that are generalisable across all possible circumstances, and specifically so within social science contexts. Constructed knowledge is not 'truth' that remains stable and dependable forever; rather, it exists within specific contexts and perspectives. Knowledge that may profess to be truth for one context may very well not be 'truth' for other contexts. Therefore, we advance that empiricist designs that depend on pre-testing and post-testing using quantitative data may not be the most appropriate way of researching online learning. Subsequently, we hold the view that research designs in social science can at best be quasi-experimental.

The third dimension that impacts on the quality of instructional technology research is the way in which scholars are rewarded for their research outputs. In a paper delivered at the prestigious American Educational Research Association (AERA) conference, Reeves (2000) describes his experiences when appointed at a university as a junior professor. He explains how he was told to collect "lots of data" in order to publish and therefore advance in the university system. He points out that the state in which his appointment was made had a documented poor educational system, but he was not told to find solutions, through research, for those problems. This exemplifies the 'publish or perish' notion, which is a significant challenge facing higher education. It is our experience in the higher education system that academics are under pressure to publish (do research). Publishing is incentivised by the higher education institutions, which receive financial rewards in the form of subsidy, and which in turn reward academics with promotion. While we do not question the reward system for research, we would plead for a system wherein 'socially responsible' research is advanced. In this regard, Reeves (1995) refers to 'socially responsible' research as research that aims to make education better, therefore finding practical solutions to real problems. This highlights the fact that much educational research may have little value for solving the practical problems that plague education in general (also see Reeves 1995, 2000). Similarly, Stokes (1997), in his work Pasteur's Quadrant: Basic Science and Technological Innovation, explains that much of the research done in the educational field contributes little to the understanding of the theories that underpin education (and, in our view, also instructional technology), and that these studies do not advance fundamental knowledge in the relevant knowledge domains. He uses as exemplar the work done by Louis Pasteur, who found practical solutions for real-world problems and at the same time advanced fundamental (theoretical) knowledge, in this case about the preservation of fresh food. We acknowledge that our view may be contentious. Reeves (1995) points out that others in the research community will argue that the search for knowledge for its own sake is paramount, and that researchers should not be prescribed what to research. Although we concur that a purist agenda is important for the maintenance of independent scholarship, we would like to see - in the context of the problems that were highlighted with regard to instructional technology research, and in the further context of the educational problems that beset South Africa in general - a research agenda developed that advances both theory and practical application. In this paper we argue that design experiments will address these dual needs.

The South African situation is unclear. The most typical application of qualitative research in instructional technology seems to be the case study. Van der Westhuizen (2002) conducted a meta-analysis of research topics and methodologies in South Africa related to instructional technology research. He found that the vast majority of published research consists of case studies. Although the value of case studies in a developing field of knowledge is not to be underestimated, we doubt that this approach will lead to a fundamental understanding of the theories associated with online learning. Although case studies may highlight practical problems, and even suggest solutions to those problems, the findings need to be incorporated into existing theory. Whether case studies yield sufficiently in-depth data to advance fundamental knowledge remains to be seen. No other meta-studies that have examined the research designs of instructional technology inquiries could be found in the South African literature.

 
DESIGN EXPERIMENTS

In this paper, we contend that the most appropriate way to research the effectiveness of online learning is the use of design experiments. We assert that design experiments address the concerns raised in the previous section. In the first place, design experiments require rigorous designs that yield rich, in-depth data over a prolonged period of time, and therefore by virtue of their design address issues of quality, depth and validity. Secondly, design experiments may use any of the paradigms that underpin educational research and will, in fact, utilise both approaches in a complementary manner. Thirdly, as design experiments address real-life problems and attempt to engineer solutions to them, we believe that design experiment methodologies are socially responsible. The following section provides a definition of the concept and outlines the goals of design experiments.

Conceptualisation

The term "design experiments" - also referred to as "formative experiments" (Barab & Kirshner 2001), "applied research" (van den Akker 1999; Reeves 2000), "use-inspired basic research" (Stokes 1997) or "development research" (Reeves 2000) - was introduced in 1992 by Brown and Collins. More recently the term "design research" has been applied to this kind of research (Barab & Kirshner 2001 and Collins 1999). The terms "design experiments" and "design research" will be used interchangeably in this paper. Design experiments are types of research that place educational experiments in real-world settings to find out what works in practice (RooseveltHaas 2001). According to Cobb et al. (2003), design experiments entail both "engineering" particular forms of learning, and systematically studying those forms of learning within the context defined by means of supporting them. This designed context is subject to test and revision, and the successive iterations are similar to systematic variation in experience. Design experiments incorporate the notion of formative and summative evaluation of learner skills and knowledge demonstrated over time, penetrating into the learning processes on a weekly schedule, as instructors and researchers negotiate instructional decisions (Brown 1992). Design experiments are pragmatic as well as theoretical in orientation in that the study of function - both of the design and of the resulting ecology of learning - is at the heart of the methodology (Cobb et al. 2003). A design science in education therefore aims at determining how the design of learning environments contributes to learning (Brown 1997).

The goals of design research

Design experiments were developed as a way of conducting formative research to test and refine educational problems, solutions and methods (Reeves 2000; Stigler & Hiebert 1999). They are mainly used by researchers with development goals in mind (Reeves 2000). The goals of design experiments (development research), as described by Reeves, are summarised in Figure 1.

 

Figure 1: Development approach to research (Reeves 2000). The figure depicts design experiments leading to the refinement of problems, solutions and methods.

 

However, design research is not aimed simply at refining practice. It should always have the dual goal of refining both theory and practice (Edelson 2001; Joseph 2000). Design experiments are conducted for the generation and testing of theories that target domain-specific learning processes (Cobb et al. 2003). Such research ideally results in greater understanding of a learning ecology - a complex, interacting system involving multiple elements of different types and levels - by designing its elements and by anticipating how these elements function together to support learning (van den Akker 1999; Brown 1997; Cobb et al. 2003; Reeves 2000). Design experiments, therefore, constitute a means of addressing the complexity that is the hallmark of educational settings (Barab & Kirshner 2001). Elements of a learning ecology typically include the tasks or problems that learners are asked to solve, the kinds of discourse that are encouraged, the norms of participation that are established, the tools and related material means provided, and the practical means by which instructors can orchestrate relations among these elements (Cobb et al. 2003).

The researcher first develops the broader theoretical goals of the study (a design focus), frames selected aspects of the envisioned learning (provides a theoretical framework for the study), specifies the settings in which the learning will take place as well as the means of supporting it, and develops a model of the learning tasks and instructional strategies that can support that learning (Brown & Campione 1996). The process of engineering or specifying the forms of learning being studied provides the researcher with a measure of control not obtainable in purely naturalistic investigations.

Design experiments, according to Cobb et al. (2003), have two faces: prospective and reflective. On the prospective side, designs are implemented with a hypothesised learning process and the means of supporting it in mind, in order to expose the details of that process to scrutiny. An equally important objective is to foster the emergence of other potential pathways for learning and development by capitalising on contingencies that arise as the design unfolds. The theory therefore informs the design focus and prospective design (DiSessa 1991). On the reflective side, design experiments are conjecture-driven tests, assessing the critical design elements, often at several levels of analysis (Shepard 2000). Together, the prospective and reflective aspects of design experiments result in an iterative design process featuring cycles of invention and revision (Cobb et al. 2003). The evaluation of the design, therefore, is an ongoing process that changes as the design changes (Brown & Campione 1996).
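
To make the iterative character of this process concrete, the following schematic sketch (in Python) models one invention-and-revision cycle as a prospective step followed by a reflective step. All names here are hypothetical illustrations of the methodology, not software used in the study.

    # Schematic model of a design-experiment cycle (all names hypothetical).
    # Prospective step: enact a design embodying conjectured learning
    # processes; reflective step: test those conjectures against the data
    # and feed the findings into a revised design for the next cycle.
    from dataclasses import dataclass

    @dataclass
    class Design:
        conjectures: list   # hypothesised learning processes
        supports: list      # means of supporting the learning

    def run_cycle(design, enact, analyse, revise):
        """One invention-and-revision cycle of a design experiment."""
        data = enact(design)                           # prospective: implement and observe
        findings = analyse(data, design.conjectures)   # reflective: test conjectures
        return revise(design, findings)                # revised design for the next cycle

    if __name__ == "__main__":
        d = Design(conjectures=["collaboration fosters complex thinking"],
                   supports=["discussion forum"])
        d_next = run_cycle(
            d,
            enact=lambda design: ["...observation records..."],
            analyse=lambda data, conjectures: {"supported": conjectures},
            revise=lambda design, findings: Design(design.conjectures,
                                                   design.supports + ["rubric feedback"]),
        )
        print(d_next.supports)  # ['discussion forum', 'rubric feedback']

The point of the abstraction is that evaluation is not a terminal step: each cycle's findings become the input to the next design, which is what distinguishes a design experiment from a one-shot summative evaluation.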


RESEARCH EXAMPLE: COMPLEX THINKING ONLINE

We provide as exemplar a recent design experiment that was completed at a university in Johannesburg, South Africa. During this study, the researchers explored the extent to which complex thinking skills could be facilitated in online learning environments. In this study, a one-on-one design experiment with a small number of learners was engineered. A learning programme was designed and developed for Masters students who were enrolled for a course in Instructional Technology. The aim was to create a small-scale version of a learning ecology for in-depth and detailed study (Barab & Kirshner 2001; Cobb et al. 2003) and to refine the design parameters for a new type of curriculum. The research in this study looked at a complex system of interrelated factors and events, where each component, event or action has the potential to affect the unit as a whole (Collins 1999). There is compatibility in this research between the systemic nature of the subject matter and the use of qualitative research methods. The research methodology for this study was guided by principles of interpretive inquiry outlined by researchers such as LeCompte, Preissle and Tesch (1993) and Miles and Huberman (1994). The research was conducted in four phases, as summarised in Figure 2.

Figure 2: Using a design experiment for assessing a learning programme for complex thinking development

 

Phase A: Establishing a theoretical framework for the study

The development of the qualitative/interpretive design experiment began with the establishment of a theoretical framework: the set of questions to be answered by the research. The framework addresses the problem to be investigated by the study, reviewing what is known about the topic, what is not known, why it is important to know it, and the specific purpose of the study (Winegardner 2000). Merriam (1992) stresses the importance of identifying the theoretical framework that forms the 'scaffolding' or underlying structure of the study. Theory should be present in all qualitative studies, because no study could be designed without some question being asked, explicitly or implicitly. The phrasing of that question and the development of a problem statement reflect a theoretical orientation (Merriam 1992). The literature study therefore formed a theoretical and analytical framework of criteria, serving as a foundation for the analysis and interpretation of the data collected during the research project; this, according to Vockell and Asher (1995), directs the questions asked by the researcher. It also helps the researcher identify methodological techniques used to research similar phenomena, as well as contradictory findings. The aim of the literature review in this study was to identify the following: criteria for the development of complex thinking, instructional strategies that could enhance complex thinking development, and methods of using online learning for the advancement of complex thinking development in a Web-based learning environment. Course content was then designed according to these findings and presented in the Web-based learning environment.

The following objectives were realised in Phase A:

Objective 1

The essential characteristics of complex thinking were researched through a literature study, and criteria for identifying complex thinking were derived.

Objective 2

Through a literature study, possible instructional strategies and techniques to enhance complex thinking were thoroughly researched and a set of criteria derived.

Objective 3

Through a literature study, the contribution of Web-based learning to the learning process was researched and a set of criteria derived.

The elements (criteria) identified in this phase of the study provided a framework for the design of the Web-based learning programme developed in Phases B and C of the study.

Phase B: The design and development of the Web-based learning programme

In Phase B of the study, a learning environment was designed to incorporate the criteria established in Phase A of the research. During this stage the critical elements of the design and their relevance to each other were identified. The design included a contact session, serving as an introduction to the theme. The second part of the design experiment comprised a series of Web-based learning activities, which incorporated various instructional methodologies to facilitate/enhance complex thinking. Different discussion forums were created in the Web-based learning environment to facilitate these activities. The programme was implemented in Phase C of the study.

Phase C: The implementation of the Web-based learning programme

During Phase C of the inquiry, the Web-based programme was implemented using a series of instructional strategies focussing specifically on complex thinking. Specifically, Phase C sought to answer the following questions:

  • What types of complex thinking skills did learners employ while interacting in the Web-based learning environment?
  • How did the instructional strategies and techniques employed in the Web-based learning environment impact on the facilitation of complex thinking?
  • How did the Web-based learning activities contribute to the success of the course?

The extent to which these complex thinking skills were evident in the students' online activities could then be gauged using the comprehensive checklist and the criteria that were generated.
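
To illustrate how such a checklist might be operationalised, the Python sketch below scores a discussion post against a small set of complex thinking indicators. The indicator names and keyword cues are illustrative assumptions only, not the actual instrument used in the study, whose coding was interpretive rather than keyword-based.

    # Illustrative sketch only: a simplified, keyword-based stand-in for the
    # study's checklist. Indicator names and cue words are hypothetical.
    CHECKLIST = {
        "analysis": ["compare", "contrast", "classify"],
        "inference": ["therefore", "conclude", "predict"],
        "evaluation": ["judge", "justify", "critique"],
    }

    def score_post(post):
        """Return, per indicator, whether any cue word appears in the post."""
        text = post.lower()
        return {skill: any(cue in text for cue in cues)
                for skill, cues in CHECKLIST.items()}

    if __name__ == "__main__":
        example = "I would justify this design and therefore conclude that it works."
        print(score_post(example))
        # {'analysis': False, 'inference': True, 'evaluation': True}
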
Phase D: Data analysis

Phase D provided an explicit account or report of the outcomes of the research, according to the criteria specified in Phase A and the types of evidence used. Data were collected from submissions and discussions in the Web-based learning environment, and these were interpreted against theoretical criteria derived from the literature study. The data collected were reduced to several themes (complex thinking, instructional strategies and Web-based learning), each with several categories and sub-categories of criteria; this classification scheme provided a framework for the analysis and interpretation of the data. One of the most important tasks of analysis is the identification of "patterns, commonalities, differences and processes" (Miles & Huberman 1994). Categories (criteria) were developed in terms of their properties, and some categories were eventually promoted to major categories while others were demoted to sub-categories.

A practical format for the analysis of the written discussions (talk) and assignment activities (described as 'messages' by WebCT, the Web-based software used to facilitate the learning) displayed in the Web-based learning environment had to be found. In this study, content analysis was regarded as the most useful model for analysing the content of these recorded messages, in accordance with Merriam's (1992) emphasis on the importance of observing and analysing the content of learners' conversations. The learners' discussions were divided into units of meaning as the most practical method for this study. This method counts each type of talk as it occurs (Henri 1992).
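
As a rough illustration of this counting procedure, the Python sketch below splits messages into sentence-level units of meaning and tallies each unit against a hypothetical category scheme. The segmentation rule and the category labels and cue words are assumptions for illustration; in the study itself, units were derived and coded interpretively against the criteria established in Phase A.

    import re
    from collections import Counter

    # Hypothetical coding scheme: category -> cue words (illustrative only).
    SCHEME = {
        "cognitive": ["because", "therefore", "evidence"],
        "social": ["agree", "thanks", "welcome"],
        "metacognitive": ["strategy", "reflect", "plan"],
    }

    def units_of_meaning(message):
        """Naive segmentation: treat each sentence as one unit of meaning."""
        return [u.strip() for u in re.split(r"[.!?]+", message) if u.strip()]

    def tally(messages):
        """Count each type of talk as it occurs across all units (cf. Henri 1992)."""
        counts = Counter()
        for message in messages:
            for unit in units_of_meaning(message):
                for category, cues in SCHEME.items():
                    if any(cue in unit.lower() for cue in cues):
                        counts[category] += 1
        return counts

    if __name__ == "__main__":
        posts = ["I agree with you. The evidence supports this because the groups collaborated."]
        print(tally(posts))  # Counter({'social': 1, 'cognitive': 1})

Counting at the level of units of meaning, rather than whole messages, matters because a single WebCT message can mix several types of talk; tallying per unit preserves that distinction.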

 

SUMMARY OF FINDINGS

The findings were integrated with theoretical perspectives on instructional strategies for complex thinking development, yielding new, unique criteria for online learning design. This research is not generalisable, and instructional practitioners, designers and learners will have to judge the applicability of the findings and recommendations made.

There are many implications for practice in the findings of this research. Most relate directly to the use of Web-based learning in higher education environments, although many will apply to other classroom settings. The implications pertain to both the design of online learning and the application of instructional strategies used in instructional designs. The contribution of this research is three-fold: it is significant in the South African context, it has practical value, and design criteria for Web-based learning were generated and documented to produce design principles that may be useful to any practitioner of Web-based learning.

Significance in the South African context

The major contribution of this study is that, for the first time in the South African context, research was undertaken based on a typology that clustered the dimensions of complex thinking, instructional strategies/techniques and Web-based learning within the context of a design experiment. This research is significant for higher education in South Africa where Web-based learning is emerging as a tool to facilitate instruction. Prime reasons for using Web-based learning in South Africa are to improve the quality of learning, to provide learners with everyday information technology skills they will need in their career and personal lives, and to widen access to education and training. As Web-based learning is being implemented, an important emergent issue is to ensure that learning is adequately supported and facilitated. This study aimed at generating criteria to support meaningful learning in a Web-based learning environment and criteria were generated for providing clear learning outcomes, engaging learners, and structuring learner interactions to facilitate thinking development. In South Africa there is a need for the development of thinking skills as a general thrust in education and this research is particularly relevant as the development of critical and creative thinking skills (complex thinking skills) has been identified as a national critical outcome.

Practical value

Furthermore, this study has practical value because the criteria were applied to a practical Web-based learning environment. This study focused on enhancing the practice of Web-based learning by linking the practical to the theoretical foundations and adequate literature reviews. This research therefore aimed at making both a practical and a scientific contribution to ensure a more productive inquiry. Furthermore, there were sufficient theoretical principles to guide the practice (Reeves 2000). The researcher aimed at explaining the phenomenon of complex thinking development through the logical analysis of learning theories and Web-based learning principles. However, because there are no sacred steps to effective instruction, this research - focusing on how Web-based instruction works - tested conclusions related to the theories of teaching, learning, thinking, assessment, social interaction, instructional design, and so forth. In addition, the primary goal of this design experiment was the development of a profile rather than the testing of hypotheses (Collins 1999). The overall goal of this research was therefore to solve real-world problems while at the same time constructing design principles that can inform future designs (Reeves 2000). With this research goal in mind, it was considered necessary to employ a design experiment as research method.

The implications for the selection of instructional strategies

This design experiment aimed to determine the effects of Web-based instructional strategies on complex thinking development under certain controlled conditions. The principal implication for instructional designers is that the quality of the learning that takes place (whether in the Web-based learning environment or normal classroom settings) is directly influenced by the instructional strategies used. There are many advantages to be gained from implementing instructional strategies in a manner that supports the construction of knowledge and enhances complex thinking development.

A major implication for instructors and learners is that, contrary to constructivist beliefs, direct instruction plays a vital part in ensuring the quality of learning and thinking. If basic skills are not taught, learners will not be able to understand and apply them at higher levels of thinking. Learners should, for example, be taught how to apply the action words that describe the outcomes; they need to be taught the skills of co-operative learning, and they need knowledge of the topic being assessed in order to complete such complex tasks as peer assessment and group work, particularly in a complex learning environment such as the Web. Second-language users often find it difficult to understand the outcomes and assessment questions posed to them, and the instructor should ensure that these are explained properly.

An important finding of the research is that the action words that describe the learning outcomes should be derived from the different complex thinking skill sets, because the outcomes employed directly affect the degree of complex thinking that takes place. The research also found that time frames should not place restrictions on learning activities, as it takes time to learn and think. Time frames should therefore be flexible and realistic, allowing learners reasonable time to complete learning activities and to work at their own pace. Furthermore, the research found that co-operative learning strategies can be advantageous, but that there are also some disadvantages. In particular, it suggested that inexperienced members should first be taught the basic skills of co-operative learning, and that the instructor should ensure that these activities are clearly defined and their procedures specified. Working in groups was found to take up much more time than working alone; adequate time should therefore be given to complete group activities, especially in Web-based asynchronous environments where interaction is delayed.

An additional finding is that the instructor should apply questions that focus on the higher levels of cognitive activity (ill-structured questions) throughout the learning process to direct the discussions and to stimulate the learners' thinking. Web-based learning activities should be monitored and assessed regularly to ensure that learners are provided with the necessary feedback, motivation and guidance. This will also help the instructor to intervene and alter the learning, if and where necessary.

The implications for the design of online learning programmes

The principal implication for instructors is that instructional design models for Web-based learning can be an effective substitute for the traditional classroom design model. Contrary to concerns that Web-based learning models may place the focus on instruction rather than on learning, an environment was created in which learners actively used complex thinking skills in collaborative group settings. The research indicated that, generally, Web-based learning strategies could be successfully used for the facilitation of complex thinking. The seven Web-based learning criteria that were generated may guide designers of Web-based instructional designs towards a model based on outcomes-based education principles and learning theory.

A major implication for current research is that some learners may find it very difficult to adapt to new didactic methods, such as problem-solving activities and group work (peer assessment and debate). If, in this situation, they are also required to apply additional skills, such as using the Web-based discussion forums effectively, the instructor must ensure that these skills are taught in advance and that the learners are familiarised with the specific Web-based learning settings before an attempt is made to let them participate in such a complex activity. The new instructional strategies and techniques employed in the Web-based learning environment are geared to self-direction and active participation, and some learners take time to adapt to these new approaches. An important implication for learners and instructors is that the Web as a medium for instruction should be carefully weighed to ensure that flexible learning is provided. Time settings should be flexible, and adequate time should be given to complete group activities, especially in asynchronous Web-based learning environments where the interaction is delayed. Without some time constraints, however, assignments are not completed and marked in time, and proper feedback is not provided.

Design principles for Web-based learning

This study provides a framework incorporating design principles for instructors and designers of Web-based learning environments to encourage/facilitate complex thinking. This framework includes:

  • Criteria for identifying complex thinking and providing learning opportunities where the learner is encouraged to demonstrate and develop specific abilities and skills in complex thinking;
  • Instructional criteria/requirements for the effective facilitation of complex thinking, as derived from the social and cognitive constructivist learning theories;
  • Instructional design criteria applicable to asynchronous Web-based learning environments for the facilitation of complex thinking and effective learning.

The thorough exploration of the three theoretical thrusts of this study (complex thinking, instructional strategies/techniques and Web-based learning) makes a significant contribution and the list of criteria developed is potentially of great value to other researchers, instructors and practitioners of Web-based learning.

 

CONCLUSION

In this paper our aims were three-fold. In the first place, we wanted to highlight some of the issues pertaining to instructional technology research. We concluded that a number of factors have impacted on past instructional technology research, namely poor quality research, problems associated with research designs, and research that is not socially relevant. We then proposed that design experiment methodologies may address many of the concerns that we identified. The design experiment is a particularly suitable strategy for researching implementations in educational hypermedia, but this methodology is under-utilised in the South African context. Finally, we presented, as exemplar, our own design experiment. The paper described a framework for the design of such an experiment in which the development of complex thinking skills in Web-based learning environments was envisaged. The meticulous application of design experiment methodology illustrated the appropriateness of this strategy for the research of instructional technology.

 

REFERENCES

Barab, S.A. & Kirshner, D. (2001), "Special issue: Rethinking methodology in the learning sciences", Journal of the Learning Sciences, vol.10, nos.1-2, pp.1-222.

Brown, A.L. (1992), "Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings", The Journal of the Learning Sciences, vol.2, no.2, pp.141-178.

Brown, A.L. (1997), "Transforming schools into communities of thinking and learning about serious matters", American Psychologist, vol.52, no.4, pp.399-413.

Brown, A.L. & Campione, J.C. (1996), "Psychological theory and the design of innovative learning environments". In L. Schauble & R. Glaser (eds.), Innovations in learning: New environments for education, Lawrence Erlbaum Associates, Mahwah, NJ.

Clark, R.E. (1983), "Reconsidering research on learning with media", Review of Educational Research, vol.53, no.4, pp.445-459.

Clark, R.E. (1994), "Media will never influence learning", Educational Technology Research and Development, vol.42, no.2, pp.21-29.

Cobb, P., Confrey, J., diSessa, A., Lehrer, R., & Schauble, L. (2003), "Design experiments in educational research", Educational Researcher, vol.32, no.1, pp.9-13.

Collins, A. (1999), "The changing infrastructure of educational research". In J. Hawkins & A. Collins (eds.), Design experiments: Using technology to restructure schools, Cambridge University Press, New York.

Dillon, A. & Gabbard, R. (1998), "Hypermedia as an educational technology: A review of the quantitative research literature on learner comprehension, control and style", Review of Educational Research, vol.68, no.3, pp.322-349.

Disessa, A.A. (1991), "Local sciences: Viewing the design of human-computer systems as cognitive science". In J.M. Carroll (ed.), Designing interaction: Psychology at the human-computer interface, Cambridge University Press, New York.

Edelson, D.C. (2001), "Design research: What we learn when we engage in design", Journal of the Learning Sciences, vol.11, no.1, pp.105-121.

Henri, F. (1992), "Computer conferencing and content analysis". In A.R. Kaye (ed.), Collaborative learning through computer conferencing, Springer-Verlag, Berlin.

Hoepfl, M.C. (1997), "Choosing qualitative research: A primer for Technology Education researchers", Journal of Technology Education, vol.9, no.1, pp.47-63.

Joseph, D. (2000), Passion as a driver for learning: A framework for the design of interest-centered curricula. Doctoral thesis, Northwestern University, Evanston, IL.

LeCompte, M.D., Preissle, J., & Tesch, R. (1993), Ethnography and qualitative design in educational research, Academic Press, San Diego.

Lockard, J. & Abrams, P.D. (2001), Computers for twenty-first century educators , Longman, New York.

Merriam, S.B. (1992), Qualitative research and case study applications in education , Jossey-Bass Publishers, San Francisco.

Miles, M.B. & Huberman, A.M. (1994), Qualitative data analysis (2nd ed.), Sage, Thousand Oaks.

Mitchell, P.D. (1997), "The impact of educational technology: A radical reappraisal of research methods". In D. Squires, G. Conole & G. Jacobs (eds.), The changing face of learning technology, University of Wales Press, Cardiff, pp.51-58.

Reeves, T.C. (1995), Questioning the questions of instructional technology research. In M.R. Simonson & M. Anderson (eds.) Proceedings of the Annual Conference of the Association for Educational Communications and Technology, Research and Theory Division, Anaheim, CA, pp.459-470.

Reeves, T.C. (2000), "Enhancing the worth of instructional technology research through 'design experiments' and other development research strategies". Paper presented in the symposium International Perspectives on Instructional Technology Research for the 21st Century (Session 41.29), New Orleans, LA, USA.

Reeves, T.C. & Hedberg, J.C. (2003), Interactive Learning Systems Evaluation, Educational Technology Publications, Englewood Cliffs, New Jersey.

Roblyer, M.D. & Knezek, G.A. (2003), "New millennium research for educational technology: A call for a national research agenda", Journal of Research on Technology in Education, vol.36, no.1.

Roosevelt Haas, M. (2001), The new perspectives in technology and education series. Harvard Graduate School of Education. Online. Accessed on 19 August 2003 at: http://www.gse.harvard.edu/news/features/tie10052001.html.

Shepard, L.A. (2000), "The role of assessment in a learning culture", Educational Researcher, vol.29, no.7, pp.4-14.

Soltis, J.F. (1992), Inquiry paradigms. In M.C. Alkin (ed.) Encyclopedia of educational research, Macmillan, New York, pp. 620-622.

Stigler, J. & Hiebert, J. (1999), The teaching gap: Best ideas from the world's teachers for improving education in the classroom, Free Press, New York.

Stokes, D.E. (1997), Pasteur's quadrant: Basic science and technological innovation. Brookings Institution Press, Washington, DC.

Strauss, A. & Corbin, J. (1990), Basics of qualitative research, Sage Publications, Newbury Park, California.

Tellez, K. (1993), "Experimental and quasi-experimental research in technology and teacher education". In H.C. Waxman & G.W. Bright (eds.), Approaches to research on teacher education and technology, Association for the Advancement of Computers in Education, Charlottesville, pp.67-78.

van den Akker, J. (1999), "Principles and methods of development research". In J. van den Akker, N. Nieveen, R.M. Branch, K.L. Gustafson & T. Plomp (eds.), Design methodology and development research in education and training, Kluwer Academic Publishers, The Netherlands.

van der Westhuizen, D. (2002), Online learning in the South African context: A meta-analysis of research trends, issues and topics. Proceedings of the 2002 SASE conference. South African Society for Education, Pretoria.

Vockell, E.L. & Asher, J.W. (1995), Educational research, Prentice Hall, Englewood Cliffs, New Jersey.

Willis, B. (ed.) (1994), Distance Education: Strategies and Tools. Educational Technology Publications, New Jersey.

Winegardner, K.E. (2000), The case study method of scholarly research. Online. Accessed May 2002 at: http://www.tgsa.edu/online/cybrary/case1.html.



This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. It may be reproduced for non-commercial purposes, provided that the original author is credited. Copyright for articles published in this journal is retained by the authors, with first publication rights granted to the journal. By virtue of their appearance in this open access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.

Original article at: http://ijedict.dec.uwi.edu/viewarticle.php?id=43