CAI's White Papers

  • Write On with Cambi: The development of an argumentative writing feedback tool

    04/24/2024 | White Paper
    Every year, millions of middle-school students write argumentative essays that are evaluated against a scoring rubric. However, the scores they receive don’t necessarily offer clear guidance on what they’ve done well or how to improve. With advances in natural language processing, we can now provide more detailed feedback through an artificial intelligence-supported editing tool that assists students in revising their essays.
  • Automated Scoring Performance on the NAEP Automated Scoring Challenge: Item‐Specific Models

    04/11/2023 | White Paper
    Cambium Assessment, Inc. (CAI) participated in the National Assessment of Educational Progress (NAEP) Automated Scoring (AS) Challenge, in which NAEP provided items, scoring materials, and data to participating organizations in order to examine the state of the art in modeling on these items. As noted in the provided materials, “The purpose of the challenge is to help NAEP determine the existing capabilities, accuracy metrics, the underlying validity evidence of assigned scores, and costs and efficiencies of using automated scoring with the NAEP reading assessment items.”
  • Automated Speech Scoring Methods and Results

    09/30/2022 | White Paper
    Many states are moving toward automated scoring to reduce scoring costs, return scores to teachers and students more quickly, and provide consistent scoring against program-defined rubrics. While automated scoring of writing is well established, automated scoring of speech is less common. In this white paper, we describe the performance of Cambium Assessment, Inc.’s (CAI’s) automated scoring engine on English language learner speaking items.
  • Explaining Crisis Alerts from Humans and Automated Scoring Engines Using Annotations

    09/01/2022 | White Paper
    Explaining how human scorers and scoring engines arrive at scores is an important yet difficult problem that requires careful attention. We examine this issue by having both humans and scoring engines annotate, or highlight, the text associated with a crisis paper “score,” called an Alert.
  • Examining Bias in Automated Scoring of Reading Comprehension Items

    08/22/2022 | White Paper
    Using automated scoring (AS) to score constructed-response items has become increasingly common in K–12 formative, interim, and summative assessment programs. AS has been shown to perform well in essay writing, reading comprehension, and mathematics. However, less is known about how automated scoring engines perform for K–12 students across key subgroups such as gender, race/ethnicity, English proficiency status, disability status, and economic status. In addition, measures for detecting bias are not well defined.
  • Comparing the Robustness of Automated Scoring Approaches

    07/20/2022 | White Paper
    A frequent criticism of automated essay scoring is that engines do not understand language and can therefore be ‘tricked’ into giving higher scores than a response deserves. Engines have indeed been found to be susceptible to such gaming responses, but the impact varies by item and engine design.
  • Communicating With The Public About Machine Scoring

    08/11/2020 | White Paper
    With online testing, there is increasing demand to return scores quickly and at lower cost, along with growing evidence that automated scoring systems can score papers accurately. As a result, many assessment programs are adopting, or considering adopting, automated scoring as a replacement for or complement to human scoring. At its core, the adoption of automated scoring is an issue of communication.
  • Using AI to Identify At-Risk Student Responses

    10/09/2019 | White Paper
    In large-scale assessments, student responses containing content that indicates immediate or potential harm are identified and routed to our clients for potential intervention, helping ensure the safety of the student or others. Fortunately, such responses occur very rarely. Their rarity, however, does not diminish the importance of identifying them.

All Inquiries:

For questions about our statements, administrative practices, or other issues with our website, please contact us at:


Cambium Assessment, Inc.
1000 Thomas Jefferson St., N.W.
Washington, D.C. 20007
Phone: 888-749-6178


Report a website problem:

If you have any questions about the website,
Email: website@cambiumassessment.com

A Cambium Learning® Group Brand