
Learning Analytics


BETA


As part of our mission to replace traditional reports with integrated data and predictive analytics across all Inspera tools, this first step brings Classical Test Theory (CTT) into the Item Bank for all questions in a test.

Later in 2018, we will offer more advanced reporting based on Item Response Theory (IRT) psychometric analysis. Reporting will therefore not be limited to the context of a specific exam or test. By combining IRT with a more flexible Content Taxonomy, Inspera Assessment will provide high-quality live data analysis across topics and subjects. With this advanced data analysis, customers gain better insight into the quality of the digitisation process.

What is the impact of Learning Analytics Beta version?

We have released Learning Analytics in Beta to gain targeted experience with data usage across all the different test types that can be created in Inspera Assessment. There are also a few notable limitations in the current version:

  • Older tests can have missing or less accurate calculations of time spent per question; this value, and values derived from it, may therefore be wrong.
  • The use of advanced scoring rules, such as negative marks, marks per alternative, threshold values on questions, and bands and criteria, can affect the values in ways that make them hard to interpret.
  • Questions with manual scoring may be left unmarked by a marker. In these cases, values such as average score, P-value, and correlation will not be correct.

How best to resolve the issues listed above is something we want to investigate further before a general release. Below is a list of all available Learning Analytics data for each question in a test. To filter on a test, use the Test filter at the top.

Order

The position of the question in this test.

Exposed

The number of students, among those who submitted this test, who opened the question. Equals the sum of "Attempted" and "Omitted".

Attempted

The number of students who answered the question.

Avg. time

The average number of seconds spent on the question among the students who have seen the question.

Omitted

The number of students who opened the question without answering it.

Not exposed

The number of students who did not open the question.
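To illustrate how these four counts relate, here is a minimal sketch in Python. The record format and data are hypothetical, purely for illustration; they are not Inspera's internal representation.

```python
# Hypothetical per-student records for one question: (opened, answered).
# The tuple layout and the sample data are illustrative only.
responses = [
    (True, True),    # attempted: opened and answered
    (True, True),    # attempted
    (True, False),   # omitted: opened but not answered
    (False, False),  # not exposed: never opened
]

attempted = sum(1 for opened, answered in responses if opened and answered)
omitted = sum(1 for opened, answered in responses if opened and not answered)
exposed = attempted + omitted                 # everyone who opened the question
not_exposed = len(responses) - exposed        # never opened it

print(attempted, omitted, exposed, not_exposed)  # 2 1 3 1
```

Note that "Exposed" is always the sum of "Attempted" and "Omitted", and the four categories together account for every student who submitted the test.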

Correct 

The number of students who answered the question correctly.

Average score

The average score on the question among the students who submitted this test.

Max score

The maximum score on the question.

P-value

The P-value is the normalized average score on a question. This means that the maximum value of the P-value is 1, which occurs if all the candidates answer the question correctly. It is worth noting that the P-value can differ from the average score, even if the question has a maximum score of 1, because the P-value only takes into account the candidates who were exposed to the question.


Note: If you use advanced scoring rules with negative scores, it may be difficult to interpret this value.
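As a sketch of the calculation described above (variable names and data are illustrative, not Inspera's), the P-value is the mean score among exposed candidates divided by the question's maximum score:

```python
# Scores on one question for the candidates who were exposed to it
# (hypothetical data, for illustration only).
exposed_scores = [1.0, 0.0, 1.0, 0.5]
max_score = 1.0

# P-value: normalized average score over exposed candidates.
# A value of 1.0 means every exposed candidate achieved full marks.
p_value = sum(exposed_scores) / (len(exposed_scores) * max_score)
print(p_value)  # 0.625
```

Because the denominator counts only exposed candidates, a question skipped by many students can still have a high P-value among those who saw it.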

Correlation

A number between -1 and 1. The correlation is the extent to which the question score correlates with the total score. Negative or very low positive values may indicate that the question does not discriminate well between students of high and low ability. See https://en.wikipedia.org/wiki/Point-biserial_correlation_coefficient for additional documentation.


Note: If you use advanced scoring rules with negative scores, it may be difficult to interpret this value.
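A minimal sketch of this statistic, assuming it is the Pearson correlation between each candidate's question score and their total test score (with a 0/1 question score this reduces to the point-biserial correlation referenced above). The data here is hypothetical:

```python
import statistics

# Illustrative data: each candidate's score on this question and their
# total test score (hypothetical values, not from Inspera).
item_scores = [1, 0, 1, 1, 0]
total_scores = [9, 4, 8, 7, 3]

def pearson(xs, ys):
    """Pearson correlation; with a 0/1 item score this is the point-biserial."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(item_scores, total_scores)
print(round(r, 3))  # 0.952 -- high-ability candidates tend to get this item right
```

A value near 1 means candidates with high total scores also score well on this question; a value near 0 or below suggests the question discriminates poorly.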


