Bolboacă, S., & Jäntschi, L. (2007, March 20). Computer-based testing on physical chemistry topic: A case study. International Journal of Education and Development using ICT [Online], 3(1). Available: http://ijedict.dec.uwi.edu/viewarticle.php?id=242.


Computer-based testing on physical chemistry topic: A case study

Sorana-Daniela Bolboacă
"Iuliu Haţieganu" University of Medicine and Pharmacy, Romania
Lorentz Jäntschi
Technical University of Cluj-Napoca, Romania

 

ABSTRACT

In line with national trends in the objective evaluation of undergraduate students' knowledge, an auto-calibrated online evaluation system was developed. The aim of the research was to assess, using this system, the physical chemistry knowledge of first-year undergraduate students at the Faculty of Materials Science and Engineering, Technical University of Cluj-Napoca, Romania. The methodology of multiple-choice question construction and the evaluation methodology are presented. The students' performances, in terms of the number of correct answers and the time needed to give a correct answer, were collected and analyzed. Future plans for the development of the system are highlighted.

Keywords: Auto-calibrated online evaluation, multiple choice questions (MCQs), physical chemistry, undergraduate students

 

INTRODUCTION

In universities, the cardinal premise of the end-of-course examination is to assess, as objectively as possible, the knowledge and skills students acquired in courses, practical activities and seminars.

The development of communication (Valcke & De Wever 2004) and information technologies (Kidwell, Freeman, Smith & Zarcone 2004; Matusov, Hayes & Pluta 2005) today provides the opportunity to create interactive computer-assisted environments, used in many domains including chemistry training (Beasley 1999; Frecer, Burello & Miertus 2005; Chen, Chen & Cao 2002) and evaluation (Timmers, Baeyens, Remon & Nelis 2003; Stewart, Kirk, LaBrecque, Amar & Bruce 2006).

In many academic domains, educational measurement has been moving towards computer-based testing, defined as tests or assessments administered by computer, either on stand-alone machines or on a dedicated network, or through other devices linked to the Internet or the World Wide Web, most of them using multiple choice questions (MCQs). Computer-assisted evaluation strategies are used in medicine (Oyebola, Adewoye, Iyaniwura, Alada, Fasanmade & Raji 2000), chemistry (Ananda, Gunasingham, Hoe & Toh 1989), language testing (Brown 1997), biology (Evans, Gibbons, Shah & Griffin 2004), computer science (Barker & Britton 2005), and economics (Judge 1999).

Currently, at the Faculty of Materials Science and Engineering of the Technical University of Cluj-Napoca, Romania, the traditional method (a combination of essay examination, practical examination and/or tutor assessment) is the one most frequently used to evaluate students' knowledge. In recent years the number of students has increased, and the conventional examination method has become time consuming, both in examination time and in paper assessment. Thus, students received their final marks some time after the day of the examination (the next day in the best case, and up to a week or more otherwise). A solution for examining large classes is an automated testing system that tests the students' knowledge and displays the examination results immediately on screen.

In line with national and international trends in the objective evaluation of undergraduate students' knowledge, and building on the experience gained from creating the multiple choice examination system for the general chemistry topic (Naşcu & Jäntschi 2004a; Naşcu & Jäntschi 2004b), an auto-calibrated online evaluation environment was developed (Jäntschi & Bolboacă 2006). The aim of the research was to study students' knowledge of physical chemistry using the developed auto-calibrated online evaluation system.

 

MATERIAL AND METHOD

Auto-calibrated online evaluation system

The auto-calibrated online evaluation system (Jäntschi & Bolboacă 2006) comprises:

  • Multiple-choice question bank. The characteristics of the MCQs and of the question bank are as follows:
    • The question anatomy: a statement or situation, a problem (the stem), and a list of five suggested solutions (the options). Each question had between one and four correct options;
    • Students enrolled voluntarily in the teams responsible for creating the item bank (a team of two students was responsible for creating the MCQs from the material presented in one course or one practical activity);
    • Four hundred and twenty-four MCQs were included in the database: 49.3% with one correct option, 26.9% with two, 16.5% with three, and 7.31% with four;
    • Scoring methodology: the all-or-none rule, that is, one point if all the correct options (one, two, three, or four, as the case may be) and none of the distracters (the incorrect options presented as choices) are selected, and zero points otherwise (a code sketch of this rule follows the list).

  • Testing environment. The online testing environment comprises:
    • A description of the testing methodology, with the following specifications: the location of the examination (the test centre); the type of examination (computer- and teacher-assisted); the period and time of the examination (according to the structure of the academic year and the students' and teacher's schedules);
    • A description of the test methodology, which contains the following specifications: the number of MCQs (thirty); the generation of the tests (double randomization from the question bank: randomization of the stems and randomization of the order of the options); the number of tests (as many as the student wanted within the imposed period of time); and the penalties (applied each time a student abandoned a test already begun);
    • A description of the scoring and final-mark methodologies;
    • The testing environment itself.

  • Results:
    • The results of individual tests. At the end of each test, the student's identification data, the times when the test began and ended, and the number of correct answers are displayed. An answer to a question with correct options A and B was counted as correct only if the student selected both options and no others.
    • The test results. A page containing the test results for the whole class of students, expressed as the number of correct answers and the time needed to give a correct answer, can be visualized. For example, for a question with three correct options, an answer was counted as correct only if all three correct options and neither of the two distracters were selected.
    • The auto-calibration of the final mark. The system assigned the mark 4 to the lowest score and the mark 10 to the highest score, and placed each individual score within this range. Each time a student took a test, the system recalibrated the final marks of all students according to the distribution of individual scores (also illustrated in the sketch below).
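The paper does not reproduce the system's source code. The following Python sketch, under assumptions of our own (function names, data layout, and a linear interpolation for the calibration), illustrates the three mechanisms described above: the all-or-none scoring rule, the double randomization of tests, and the auto-calibration of marks between 4 and 10.

    import random

    def score_question(selected, correct):
        """All-or-none rule: one point only when the selected options are
        exactly the correct ones (none missing, no distracter), else zero."""
        return 1 if set(selected) == set(correct) else 0

    def generate_test(bank, n_questions=30, rng=random):
        """Double randomization: a random draw of thirty questions from the
        bank, then a random shuffle of each question's five options."""
        test = []
        for q in rng.sample(bank, n_questions):
            options = list(q["options"])
            rng.shuffle(options)
            test.append({"stem": q["stem"], "options": options})
        return test

    def calibrate_marks(scores, low_mark=4.0, high_mark=10.0):
        """Auto-calibration: map the lowest score to mark 4 and the highest
        to mark 10, placing every other score linearly in between; rerun
        over all students whenever a new test result is recorded."""
        lo, hi = min(scores), max(scores)
        if hi == lo:  # degenerate case: all scores equal
            return [high_mark] * len(scores)
        return [low_mark + (high_mark - low_mark) * (s - lo) / (hi - lo)
                for s in scores]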

Assessment of students’ knowledge

Forty-two students from the Faculty of Materials Science and Engineering were included in the study. The students could familiarize themselves with the evaluation environment before the examination: over a period of one month, they could use the system and evaluate themselves as many times as they desired.

The following variables were stored in the database for each evaluation: the student's identification data, the date and time when the test began and ended (in yy.mm.dd hh.mm.ss format), and the number of correct answers (out of thirty). From the stored data, the individual time needed to give a correct answer (expressed in seconds) and the average time needed to give a correct answer (computed over all students) were also calculated.
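The paper does not give an explicit formula for the derived time measure. The sketch below assumes the individual time needed to give a correct answer is simply the test duration divided by the number of correct answers; the timestamp format is the one named above.

    from datetime import datetime

    FMT = "%y.%m.%d %H.%M.%S"  # the yy.mm.dd hh.mm.ss format named above

    def time_per_correct_answer(begin, end, n_correct):
        """Seconds of test time per correct answer; undefined (None)
        when no answer was correct."""
        elapsed = datetime.strptime(end, FMT) - datetime.strptime(begin, FMT)
        return elapsed.total_seconds() / n_correct if n_correct else None

    # e.g. a 30-minute test with 6 correct answers gives 300.0 seconds:
    # time_per_correct_answer("07.01.15 10.05.00", "07.01.15 10.35.00", 6)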

Data were analyzed with Statistica 6.0 at a significance level of 5%. The 95% confidence intervals for proportions were calculated using an original method based on the binomial distribution hypothesis (VLFS 2005).
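The original interval method (VLFS 2005) is not reproduced in the paper. As a point of reference only, the sketch below computes a standard exact (Clopper-Pearson) binomial interval, which yields intervals of comparable, though not identical, width to those reported in the Results.

    from scipy.stats import beta

    def binomial_ci(k, n, level=0.95):
        """Exact (Clopper-Pearson) confidence interval for a proportion,
        derived from the binomial distribution; returned in percent."""
        a = 1.0 - level
        lower = beta.ppf(a / 2, k, n - k + 1) if k > 0 else 0.0
        upper = beta.ppf(1 - a / 2, k + 1, n - k) if k < n else 1.0
        return 100 * lower, 100 * upper

    # e.g. binomial_ci(17, 42) gives roughly (26, 57), comparable to the
    # interval [26.25, 57.09] reported below for 17 of 42 students.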

 

RESULTS

The auto-calibrated online evaluation system on the physical chemistry topic was created and is available at: http://vl.academicdirect.org/general_chemistry/physical_chemistry/. Access to the system is restricted (by checking IP addresses), being available only at the designated test location.
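The paper states only that access is restricted by checking IP addresses. A minimal sketch of such a check follows; the subnet is entirely hypothetical.

    from ipaddress import ip_address, ip_network

    # Hypothetical allow-list: the subnet of the test centre's workstations.
    ALLOWED = [ip_network("192.168.10.0/24")]

    def client_may_take_test(remote_addr):
        """True only when the request originates at the test location."""
        return any(ip_address(remote_addr) in net for net in ALLOWED)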

Each student took the online test at least once using the auto-calibrated online evaluation system. Seventeen of the forty-two students (40.5%, 95%CI [26.25, 57.09]) were content with the performance obtained at the first test. The distribution of the number of tests taken, expressed as relative frequencies with associated 95% confidence intervals, was:

  • At least two tests: twenty-five students out of forty-two (59.52% [42.91, 73.75]);
  • At least three tests: ten students out of forty-two (23.81% [11.96, 40.42]);
  • At least four tests: five students out of forty-two (11.90% [4.82, 26.13]);
  • At least five tests: two students out of forty-two (4.76% [0.06, 16.61]);
  • Six and seven tests: one student out of forty-two (2.38% [0.06, 11.85]).

The intervals between the first and last examination (where applicable) varied from 1 day (minimum) to 17 days (maximum), with an average of 6 days (95%CI [3.86, 8.13]) and a median of 4 days.

The statistical characteristics of the number of questions answered correctly and of the time needed to give a correct answer at each evaluation (1st test, …, 7th test), expressed as average (Ave) with 95% confidence interval, standard deviation (StDev), mode (Mode), median (Median), minimum (Min) and maximum (Max), are given in Table 1.

 

Table 1: Statistical characteristics of the number of correct answers and of time needed to give a correct answer

 

Test | Average [95%CI]          | StDev  | Mode | Median | Min  | Max

Number of correct answers
1st  | 6.40 [5.44, 7.37]        | 3.09   | 7    | 6      | 1    | 17
2nd  | 6.60 [5.02, 8.18]        | 3.83   | 7    | 7      | 1    | 14
3rd  | 5.80 [2.86, 8.74]        | 4.10   | 5    | 5      | 2    | 15
4th  | 3.60 [1.03, 6.17]        | 2.07   | N.A. | 4      | 1    | 6
5th  | 5.00 [-7.71, 17.71]      | 1.41   | N.A. | 5      | 4    | 6
6th  | 1.00 [N.A., N.A.]        | N.A.   | N.A. | 1      | N.A. | N.A.
7th  | 2.00 [N.A., N.A.]        | N.A.   | N.A. | 2      | N.A. | N.A.

Time needed to give a correct answer (seconds)
1st  | 214.53 [174.46, 254.60]  | 128.59 | N.A. | 173.65 | 50.1 | 682
2nd  | 182.90 [121.82, 243.98]  | 147.96 | N.A. | 125.80 | 44   | 713
3rd  | 93.46 [51.89, 135.03]    | 58.12  | N.A. | 71.65  | 31   | 204.8
4th  | 85.98 [-11.49, 183.45]   | 78.50  | N.A. | 52.50  | 28.8 | 216
5th  | 30.35 [-104.97, 165.67]  | 15.06  | N.A. | 30.35  | 19.7 | 41
6th  | 65.00 [N.A., N.A.]       | N.A.   | N.A. | N.A.   | N.A. | N.A.
7th  | 52.50 [N.A., N.A.]       | N.A.   | N.A. | N.A.   | N.A. | N.A.

N.A. = not applicable

 

Seventeen students were content with the performance obtained at the first test. For this sample, the statistical characteristics of the number of questions answered correctly and of the time needed to give a correct answer are:

  • The number of correct answers: Ave = 8.76 (95%CI [7.11, 10.42]); StDev = 3.21; Mode = 7; Min = 5; Max = 17;
  • The time needed to give a correct answer (seconds): Ave = 137.96 (95%CI [102.20, 173.72]); StDev = 69.55; Median = 112.4; Min = 50.1; Max = 310.4.

The performances of the students who took the test more than once, expressed as the number of correct answers out of thirty (nca) and the time needed to give a correct answer in seconds (tca(s)), are given in Tables 2 and 3.

 

Table 2: Performances of the students who took two tests

Student id | nca (1st) | nca (2nd) | tca(s) (1st) | tca(s) (2nd)
id_01      | 6         | 4         | 171.5        | 125.8
id_03      | 2         | 7         | 682          | 152.7
id_06      | 7         | 14        | 128.7        | 72.2
id_07      | 4         | 9         | 387.8        | 119.4
id_13      | 1         | 2         | 542          | 44
id_17      | 3         | 12        | 321          | 100.8
id_20      | 6         | 7         | 175.8        | 92.7
id_21      | 7         | 8         | 150.7        | 86.4
id_22      | 4         | 2         | 365.8        | 449
id_24      | 5         | 8         | 195.4        | 112.4
id_26      | 5         | 12        | 268          | 83.2
id_27      | 6         | 11        | 134.5        | 70.2
id_31      | 7         | 14        | 312.1        | 87
id_33      | 5         | 9         | 342          | 100.2
id_42      | 6         | 7         | 247.8        | 189.6

nca = number of correct answers; tca(s) = time needed to give a correct answer (seconds)

 

 

Table 3: Performances of the students who took more than two tests

Student id | Param  | 1st    | 2nd    | 3rd    | 4th    | 5th   | 6th   | 7th
id_05      | nca    | 4      | 2      | 3      | N.A.   | N.A.  | N.A.  | N.A.
id_05      | tca(s) | 159.30 | 245.00 | 31.00  | N.A.   | N.A.  | N.A.  | N.A.
id_09      | nca    | 7      | 7      | 15     | N.A.   | N.A.  | N.A.  | N.A.
id_09      | tca(s) | 170.60 | 205.70 | 40.00  | N.A.   | N.A.  | N.A.  | N.A.
id_18      | nca    | 4      | 3      | 9      | N.A.   | N.A.  | N.A.  | N.A.
id_18      | tca(s) | 212.80 | 281.00 | 85.30  | N.A.   | N.A.  | N.A.  | N.A.
id_35      | nca    | 4      | 5      | 9      | N.A.   | N.A.  | N.A.  | N.A.
id_35      | tca(s) | 414.80 | 396.40 | 127.90 | N.A.   | N.A.  | N.A.  | N.A.
id_41      | nca    | 4      | 4      | 5      | N.A.   | N.A.  | N.A.  | N.A.
id_41      | tca(s) | 168.00 | 198.80 | 52.20  | N.A.   | N.A.  | N.A.  | N.A.
id_16      | nca    | 4      | 4      | 2      | 4      | N.A.  | N.A.  | N.A.
id_16      | tca(s) | 285.50 | 125.30 | 58.00  | 28.80  | N.A.  | N.A.  | N.A.
id_34      | nca    | 7      | 5      | 5      | 5      | N.A.  | N.A.  | N.A.
id_34      | tca(s) | 225.30 | 212.60 | 204.80 | 30.40  | N.A.  | N.A.  | N.A.
id_39      | nca    | 3      | 5      | 5      | 6      | N.A.  | N.A.  | N.A.
id_39      | tca(s) | 223.00 | 167.80 | 167.20 | 102.20 | N.A.  | N.A.  | N.A.
id_30      | nca    | 6      | 3      | 3      | 2      | 6     | N.A.  | N.A.
id_30      | tca(s) | 244.70 | 141.30 | 110.70 | 52.50  | 19.70 | N.A.  | N.A.
id_19      | nca    | 3      | 1      | 2      | 1      | 4     | 1     | 2
id_19      | tca(s) | 136.00 | 713.00 | 57.50  | 216.00 | 41.00 | 65.00 | 52.50

Param = parameter; nca = number of correct answers;
tca(s) = time needed to give a correct answer (seconds); N.A. = not applicable

 

In order to compare the number of correct answers given by the students who took the test more than once, the Student test was applied at a significance level of 5%; the results are given in Table 4. The number of correct answers given at evaluation i was abbreviated nca-i (i = 1 for the first test, …, 4 for the fourth test). Four null hypotheses were analyzed, as follows (a sketch of such a paired comparison appears after the list):

  1. There was no significant difference between the average number of correct answers given at the first test and at the second test, in the sample of students who took the test at least twice (nca-1st & nca-2nd);
  2. There was no significant difference between the average number of correct answers given at the second test and at the third test, in the sample of students who took the test at least three times (nca-2nd & nca-3rd);
  3. There was no significant difference between the average number of correct answers given at the third test and at the fourth test, in the sample of students who took the test at least four times (nca-3rd & nca-4th);
  4. There was no significant difference between the average number of correct answers given at the first test and at the last test, in the sample of students who took the test more than once (nca-1st & nca-last).
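Table 4 labels the comparison a Student test, although the reported T values (for example T = 64.5 for nvalid = 25) resemble matched-pairs rank statistics rather than t values. As an illustration only, the sketch below runs both a paired t test and a Wilcoxon matched-pairs test on the fifteen pairs of Table 2; the paper's own analysis was done in Statistica 6.0 on all students with at least two tests, and the same approach applies to the time comparisons of Table 5.

    from scipy.stats import ttest_rel, wilcoxon

    # Pairs from Table 2: number of correct answers at the 1st and 2nd test
    # for the fifteen students who took exactly two tests.
    nca_1st = [6, 2, 7, 4, 1, 3, 6, 7, 4, 5, 5, 6, 7, 5, 6]
    nca_2nd = [4, 7, 14, 9, 2, 12, 7, 8, 2, 8, 12, 11, 14, 9, 7]

    t_stat, t_p = ttest_rel(nca_1st, nca_2nd)  # Student's paired t test
    T_stat, w_p = wilcoxon(nca_1st, nca_2nd)   # Wilcoxon matched-pairs test
    print(f"paired t: t = {t_stat:.2f}, p = {t_p:.4f}")
    print(f"Wilcoxon: T = {T_stat:.1f}, p = {w_p:.4f}")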

 

Table 4: Results of the comparisons of the average number of correct answers given by students who took more than one test

 

Comparison         | nvalid | T    | p-value
nca-1st & nca-2nd  | 25     | 64.5 | 0.0441*
nca-2nd & nca-3rd  | 10     | 4    | 0.0910
nca-3rd & nca-4th  | 5      | 4    | 0.7150
nca-1st & nca-last | 25     | 35   | 0.0017*

nca-i = the number of correct answers at test i; nvalid = the number of valid cases;
T = the parameter of the Student test; * significant at p < 0.05

 

 

The results of the comparisons of the time needed to give a correct answer for students who took more than one test are given in Table 5 (where tca-i is the time needed to give a correct answer at evaluation i, i = 1 for the first evaluation, …, 4 for the fourth). Three null hypotheses were analyzed, as follows:

  1. There was no significant difference between the average time needed to give a correct answer at the first test and at the second test, in the sample of students who took the test at least twice (tca-1st & tca-2nd);
  2. There was no significant difference between the average time needed to give a correct answer at the second test and at the third test, in the sample of students who took the test at least three times (tca-2nd & tca-3rd);
  3. There was no significant difference between the average time needed to give a correct answer at the third test and at the fourth test, in the sample of students who took the test at least four times (tca-3rd & tca-4th).

 

Table 5: Results of the comparisons of the time needed to give a correct answer for students who took more than one test

 

Comparison        | tca     | Mean   | StdDev | nvalid | t
tca-1st & tca-2nd | tca-1st | 266.60 | 134.33 |        |
                  | tca-2nd | 182.90 | 147.96 | 25     | 2.01
tca-2nd & tca-3rd | tca-2nd | 268.69 | 174.29 |        |
                  | tca-3rd | 93.46  | 58.12  | 10     | 2.88*
tca-3rd & tca-4th | tca-3rd | 119.64 | 65.68  |        |
                  | tca-4th | 85.98  | 78.50  | 5      | 0.62

tca = the time needed to give a correct answer; StdDev = standard deviation;
nvalid = number of valid cases; t = parameter of the Student t test; * p < 0.05

 

The average time needed to give a correct answer at the last examination (average = 98.22 seconds, Min = 50.1 seconds, Max = 682 seconds) was significantly lower (p = 0.000002, nvalid = 25, see Figure 1) than the average time needed to give a correct answer at the first evaluation (average = 266.60 seconds, Min = 19.7 seconds, Max = 449 seconds).

Figure 1: Distribution of the time needed to give a correct answer at the first and at the last test

 

DISCUSSION

The assessment of students' knowledge is a common task at the end of the semester and/or academic year. Testing methods based on multiple-choice questions are commonly used to evaluate students' knowledge because of their speed, accuracy, and fairness in grading (Toby & Plano 2004).

The proposed system offers the students directly involved in building the MCQ bank the opportunity to deepen their understanding of the physical chemistry material through an active learning method. Thus, the students were motivated to formulate questions, to create options for each question, and to define the correct answer.

Since this was a new evaluation method, the students had the possibility to use the system before the evaluation, as pre-test evaluations. The pre-test evaluations had two aims. The first was to familiarize the students with the proposed computer-assisted evaluation environment. The second was to give the students the possibility to test their physical chemistry knowledge and to identify their knowledge gaps, the difficult subjects, and the information needing special attention in preparation for the examination.

As described in the Material and Method section, to obtain the final mark for the physical chemistry topic each student could test his/her knowledge as many times as desired. More than one third of the students were content with the performance obtained at the first test. Comparing their performances with the whole sample, their average number of correct answers is higher, their minimum value is higher, and their values are less dispersed. Comparing the average time needed to give a correct answer, the students who decided to take the test only once obtained better results (137.96 seconds, against a whole-sample average of 214.53 seconds). The minimum time needed to give a correct answer was the same for the students content with their first-test results as for the rest of the sample. A significant difference was observed in the maximum time needed to give a correct answer: the value for the students content with their first-test results was half the value obtained by the students who took the exam more than once. This sample of students appears to have been more interested in the physical chemistry topic than the colleagues who took more than one test.

Looking at the period between the first and last test, most students retook the test after one day or after three days. The students who retested their knowledge after one day could be those who had learned the material but had not yet tried out the evaluation system. Those who retook the test after more than one day appear to be the ones who had learned the material but were not content with the performance obtained.

The results of the second test show that fifteen of the twenty-five students obtained better results in terms of the number of correct answers (see Tables 2 and 3), with differences varying from 1 point (id_13, id_20, id_21, id_42, id_35) to 9 points (id_17). In seventeen of the twenty-five cases, the time needed to give a correct answer decreased at the second test compared with the first (see Tables 2 and 3). The greatest decrease was of almost 530 seconds (for student id_03, from 682 seconds to 152.7 seconds, Table 2). These decreases in the time needed to give a correct answer (see Table 2) suggest that the students who took a second test were confident in their knowledge and were able to connect their acquired knowledge to the correct option(s) in less time than at the first test.

Five of the ten students who took a third evaluation exceeded their previous personal performances in terms of the number of correct answers (id_09, id_18, id_35, id_39, and id_41) and of the average time needed to give a correct answer (see Table 3). For student id_19, the average time needed to give a correct answer decreased from 713 seconds at the second test to 57.5 seconds at the third.

Five of the forty-two students took the evaluation four times. Of these, three obtained lower performances in terms of the number of correct answers than at the first evaluation, in a range from 2 points (id_34 and id_19) to 4 points (id_30). One student of the five improved his/her performance (id_39): by two points at the second evaluation compared with the first, with the same performance at the third evaluation as at the second, and by one point at the fourth evaluation compared with the third. Regarding the time needed to give a correct answer in this sample, with one exception the time decreased from the first to the fourth evaluation, by 121.10 seconds (id_39), 192.2 seconds (id_30), 194.9 seconds (id_34), and 256.7 seconds (id_16), respectively.

Analyzing the results in terms of the number of correct answers and the time needed to give a correct answer, it can be concluded that the students improved their performances, obtaining significantly better results at the last evaluation than at the first.

Even though some students tried to cheat and to obtain performances without learning the material, the auto-calibrated online system proved valid and did not allow or encourage such practices.

It can be concluded that the presented system is a reliable solution for evaluating students' knowledge of physical chemistry. Future development of the auto-calibrated online evaluation system has two directions: creating a homogeneous distribution of questions with one, two, three and four correct options, and analyzing the answers given by students to each question. The analysis of the students' answers can reveal information about their level of knowledge and will allow identification of the materials that were difficult for students to understand. With the information obtained, the practical activities, seminars and courses on the physical chemistry topic could be improved.

 

CONCLUSIONS

The proposed auto-calibrated online evaluation system proved to offer a stable and valid evaluation environment for the physical chemistry topic.

Students' performances, in terms of the number of correct answers and the time needed to give a correct answer, improved at the final evaluation compared with the first, showing an improvement in acquired physical chemistry knowledge.

 

ACKNOWLEDGEMENT

The research was partly supported by UEFISCSU Romania through the project ET46/2006.

 

REFERENCES

Ananda, A. L., Gunasingham, H., Hoe, K.Y. & Toh, Y. F. (1989), "Design of an intelligent on-line examination system", Computers and Education, vol. 13, no. 1, pp. 45-52.

Barker, L. M. & Britton, C. T. (2005), "Automated Feedback for a Computer-Adaptive Test: A Case Study", Proceedings for 9th CAA Conference 2005 [Online], viewed 12 August, 2006, <http://www.caaconference.com/pastConferences/2005/proceedings/LilleyM_BarkerT_BrittonC.pdf>.

Beasley, W. (1999), "New competencies for new times: Teacher professional development beyond 2000", Pure and Applied Chemistry, vol. 71, no. 5, pp. 835-844.  

Brown, J. D. (1997), "Computers in language testing: present research and some future directions", Language Learning & Technology, vol. 1, no. 1, pp. 44-59.

Chen, C.-W., Chen, D.-Z., & Cao, G.-Z. (2002), "An improved differential evolution algorithm in training and encoding prior knowledge into feedforward networks with application in chemistry", Chemometrics and Intelligent Laboratory Systems, vol. 64, no. 1, pp. 27-43.  

Evans, C., Gibbons, N. J., Shah, K. & Griffin, D. K. (2004), "Virtual learning in the biological sciences: Pitfalls of simply "putting notes on the web"", Computers and Education, vol. 43, no. 1-2, pp. 49-61.   

Frecer, V., Burello, E. & Miertus, S. (2005), "Combinatorial design of nonsymmetrical cyclic urea inhibitors of aspartic protease of HIV-1", Bioorganic and Medicinal Chemistry, vol. 13, no. 18, pp. 5492-5501.   

Jäntschi, L. & Bolboacă, S. D. (2006), "Auto-calibrated Online Evaluation: Database Design and Implementation", Leonardo Electronic Journal of Practices and Technologies, vol. 8, pp. 178-191.

Judge, G. (1999), "The production and use of online quizzes for Economics", CHEER [Online], viewed 12 August, 2006, <http://www.economicsnetwork.ac.uk/qnbank/>.

Kidwell, P. K., Freeman, R., Smith, C. & Zarcone, J. (2004), "Integrating online instruction with active mentoring to support professionals in applied settings", Internet and Higher Education, vol. 7, no. 2, pp. 141-150.

Matusov, E., Hayes, R. & Pluta, M. J. (2005), "Using discussion webs to Develop an academic community of learners". Educational Technology and Society, vol. 8, no. 2, pp. 16-39.   

Naşcu, H. I. & Jäntschi L. (2004a), "Multiple Choice Examination System 1. Database Design and Implementation for General Chemistry", Leonardo Journal of Sciences, vol. 5, pp. 18-33.

Naşcu, H. I. & Jäntschi, L. (2004b), "Multiple Choice Examination System 2. Online Quizzes for General Chemistry", Leonardo Electronic Journal of Practices and Technologies, vol. 5, pp. 26-36.

Oyebola, D. D., Adewoye, O. E., Iyaniwura, J. O., Alada, A. R., Fasanmade, A. A. & Raji, Y. (2000), "A comparative study of students' performance in preclinical physiology assessed by multiple choice and short essay questions", African Journal of Medicine and Medical Sciences, vol. 29, no. 3-4, pp. 201-205.  

Stewart, B., Kirk, R., LaBrecque, D., Amar, F. G. & Bruce, M. R. M. (2006), "InterChemNet: Integrating Instrumentation, Management, and Assessment in the General Chemistry Laboratory Course", Journal of Chemical Education, vol. 83, no. 3, pp. 494-500.

Timmers, S., Baeyens, W. R. G., Remon, J.-P. & Nelis, H. (2003), "Newer Analytical Chemistry Teaching Approaches at the Pharmaceutical Faculty of the Ghent University", Microchimica Acta, vol. 142, no. 3, pp. 167-175.

Toby, S. & Plano, R. J. (2004), "Testing, Testing: Good Teaching Is Difficult; So Is Meaningful Testing", Journal of Chemical Education, vol. 81, no. 2, pp. 180-181.

Valcke, M. & De Wever, B. (2004), "Information and communication technologies in higher education: Evidence-based practices in medical education", Medical Teacher, vol. 28, no. 1, pp. 40-48.

VLFS 2005, Binomial Distribution, viewed 13 August, 2006, <http://vl.academicdirect.org/applied_statistics/binomial_distribution/>.

 



 



This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. It may be reproduced for non-commercial purposes, provided that the original author is credited. Copyright for articles published in this journal is retained by the authors, with first publication rights granted to the journal. By virtue of their appearance in this open access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.

Original article at: http://ijedict.dec.uwi.edu/viewarticle.php?id=242