Summary
Cadmus is a web-based assessment platform that embeds inside institutional learning management systems (LMSs) such as Canvas, Blackboard, Moodle and D2L to deliver end-to-end written assessment workflows. In practice, academics author tasks, release them to cohorts, monitor progress and mark submissions in a browser, while students complete work within a Cadmus-hosted writing space that sits inside the LMS. The vendor positions Cadmus as a single secure space for assignments and invigilated exams, with integrated analytics designed to reveal how a piece of work was constructed rather than showing only the final file. Australian universities that have deployed the tool use similar language, describing a consolidated workflow that spans design, delivery, completion, submission and feedback for individual written tasks. That promise of one consistent interface for very large cohorts appeals to institutions seeking to reduce administrative load, standardise practice across campuses, and gain better visibility into the writing process.
What Cadmus is, and how it fits into a typical course
At its core, Cadmus provides two environments: a teacher space for authoring and management, and a student space for completing and submitting work. It supports common higher-education and VET assessment types, including essays, literature reviews, lab reports, case studies, annotated bibliographies and “in-place” digital exams. Rather than replacing the LMS, it wraps around it. Academics continue to use existing graders, such as Canvas SpeedGrader, for marking and rubric application, while Cadmus handles the authoring canvas, the integrity-relevant timeline of edits and pastes, and the logistics of distributing instructions and scaffolds alongside the task. When implemented well, students encounter a single, consistent editor across subjects and devices, with resources, citation prompts and feedback co-located with the task. For large Australian subjects that enrol thousands of students, this consolidation is operationally attractive.
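The mechanics of that embedding are worth making concrete. Tools of this kind conventionally connect to Canvas, Blackboard, Moodle and D2L through the IMS LTI 1.3 standard, in which the LMS launches the tool with a signed token that the tool verifies before opening the right assignment for the right user. The sketch below shows that handshake in Python using the PyJWT library; the endpoint URL, client identifier and claim handling are illustrative assumptions about a generic LTI tool, not Cadmus' actual implementation.

```python
# Minimal sketch of an LTI 1.3 launch as an LMS-embedded tool might receive it.
# The URLs and client ID are hypothetical; the claim names are standard IMS ones.
import jwt  # PyJWT, with its optional 'cryptography' dependency installed
from jwt import PyJWKClient

PLATFORM_JWKS_URL = "https://canvas.example.edu/api/lti/security/jwks"  # hypothetical
TOOL_CLIENT_ID = "example-tool-client-id"                               # hypothetical

def validate_launch(id_token: str) -> dict:
    """Verify the signed launch token the LMS POSTs to the tool."""
    signing_key = PyJWKClient(PLATFORM_JWKS_URL).get_signing_key_from_jwt(id_token)
    claims = jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=TOOL_CLIENT_ID,
    )
    # Standard LTI claims identify the course, the assignment and the user,
    # so the writing space can open in the right context without its own login.
    resource = claims["https://purl.imsglobal.org/spec/lti/claim/resource_link"]
    roles = claims["https://purl.imsglobal.org/spec/lti/claim/roles"]
    return {"resource_link_id": resource["id"], "roles": roles, "user": claims["sub"]}
```

Because the LMS remains the system of record under this model, marking can stay in tools like SpeedGrader while the embedded environment owns only the authoring surface and its process trace.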
Cadmus also foregrounds academic integrity. Its analytics expose time on task, the balance between typed and pasted text, revision patterns and the progression of drafts. Where institutions enable it, similarity checking through Turnitin sits alongside these process traces. In vendor case studies, such as a digital exam pilot at the University of Notre Dame Australia, Cadmus is presented as delivering “zero academic integrity breaches,” with students reporting high ease-of-use. These stories are used to support the claim that a controlled, transparent process environment can deter misconduct without abandoning familiar typing-based assessment.
What the platform does well
Cadmus’ principal strength is consistency. Academics configure prompts, scaffolds, sources and parameters once, then distribute to large cohorts through the LMS. Students write in a controlled browser workspace that behaves the same way whether they are on campus or at home, on a laptop or a lab desktop. Markers stay in the LMS they know, while gaining the additional context of a construction timeline that shows when and how the submission took shape. For subjects that need to standardise practice across multiple campuses and teaching teams, that blend of control and familiarity helps reduce friction.
The environment lends itself to “in-place” digital exams. In supervised rooms, Cadmus functions as the editor while invigilators watch screens, and the platform captures the writing process end-to-end. In that context, the combination of a locked-down environment and a complete process trace narrows the opportunities for misconduct. Institutions that have published results from such deployments report positive student experience and fewer integrity incidents than paper-based equivalents. The platform’s accessibility posture is also improving: an updated 2025 statement commits to WCAG 2.2 AA and outlines ongoing work, while partner universities set expectations for screen-reader compatibility and keyboard navigation. For Australian providers operating under the Disability Standards for Education, the presence of a public accessibility roadmap is a necessary condition for high-stakes use.
Where criticism concentrates
Despite these strengths, recurring concerns appear in campus reporting, student forums and institutional FAQs. The first is user experience. Compared with full desktop word processors, Cadmus' browser editor is more restrictive. Students writing long, structured documents have reported limitations in formatting, table spacing and layout control, creating friction for disciplines that require precise presentation. The second is stability. Because Cadmus relies on a live browser session, running the same assessment in multiple tabs or windows can confuse autosave and create conflicts; the vendor's own support pages warn against multi-tab use, and the sketch after this paragraph illustrates why such conflicts arise. Connectivity issues, especially on congested Wi-Fi or in regional areas, can manifest as writing interruptions or gaps in the analytics trace that look odd even when the student has acted properly. The third is accessibility at the edges. A conformance target is welcome, but high-stakes usability still depends on local testing. Long documents, complex tables and formula entry remain challenging for many browser-based editors, and disability support teams need to validate that real subject templates work well with JAWS, NVDA, VoiceOver and keyboard-only navigation before mandating the tool in critical assessments.
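To see why the stability warning matters, it helps to model autosave as a version-checked write. The following minimal sketch assumes a generic optimistic-concurrency store, not Cadmus' actual protocol: two tabs load the same document, and the second save, built on a stale copy, is rejected rather than silently overwriting newer work.

```python
# A minimal sketch of why two tabs sharing one autosave channel conflict.
# This models generic version-checked autosave, not Cadmus' real protocol.

class ConflictError(Exception):
    pass

class AutosaveStore:
    """Server-side document record with optimistic concurrency control."""
    def __init__(self) -> None:
        self.version = 0
        self.text = ""

    def save(self, base_version: int, text: str) -> int:
        # Reject saves built on a stale copy of the document.
        if base_version != self.version:
            raise ConflictError(
                f"save based on v{base_version}, but server is at v{self.version}"
            )
        self.version += 1
        self.text = text
        return self.version

store = AutosaveStore()

# Tab A and Tab B both load the document at version 0.
tab_a_base = tab_b_base = store.version

tab_a_base = store.save(tab_a_base, "Draft from tab A")   # succeeds, server at v1
try:
    store.save(tab_b_base, "Older draft from tab B")      # still based on v0
except ConflictError as exc:
    print("second tab rejected:", exc)
```

Either resolution strategy is unpleasant for the student: last-write-wins silently discards text from one tab, while rejection forces a refresh. Single-tab use sidesteps the problem entirely, which is presumably why the support guidance insists on it.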
A fourth and more cultural critique concerns “surveilled writing.” Because Cadmus’ value proposition is inseparable from its process analytics, it necessarily collects behavioural data such as session timing, paste events and editing sequences. Some students and staff have characterised this as surveillance, particularly when the platform is compulsory and there is no option to draft in a familiar editor. Privacy, proportionality and pedagogy therefore loom large in governance discussions. One high-profile example is the University of Sydney Business School’s decision in 2024 to stop using Cadmus, citing concerns about data-intensive monitoring. That single decision does not determine sector consensus, but it demonstrates that the fit is contested and that institutions are weighing integrity benefits against privacy and student experience.
What Cadmus measures, and what it does not
Cadmus is often described imprecisely as an integrity detector. Technically, it is a process-capture environment. It records how text is produced inside its editor: how much is typed, how much is pasted, when revisions occurred and how time was distributed across the writing window. It can pair those traces with similarity scores from a separate tool. This is not keystroke biometrics, nor is it an AI content classifier. It cannot see what happens in an external word processor, and it cannot diagnose that a paragraph is AI-generated. It can, however, show that a large block arrived via paste and that relatively little editing occurred before submission.
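To make the distinction concrete, the following sketch shows the kind of summary such a process trace supports, assuming an editor that logs timestamped typing, paste and deletion events. The event schema and field names are hypothetical, not Cadmus' real data model; the point is that the output is descriptive statistics about construction, not a verdict about authorship.

```python
# A hypothetical process-trace schema and summary, illustrating what a
# process-capture environment can and cannot say about a submission.
from dataclasses import dataclass

@dataclass
class EditEvent:
    timestamp: float   # seconds since the writing window opened
    kind: str          # "type", "paste" or "delete"
    chars: int         # characters added or removed

def summarise(events: list[EditEvent]) -> dict:
    typed = sum(e.chars for e in events if e.kind == "type")
    pasted = sum(e.chars for e in events if e.kind == "paste")
    deleted = sum(e.chars for e in events if e.kind == "delete")
    span = max(e.timestamp for e in events) - min(e.timestamp for e in events)
    return {
        "typed_chars": typed,
        "pasted_chars": pasted,
        "paste_share": pasted / max(typed + pasted, 1),
        "revision_chars": deleted,          # rough proxy for editing effort
        "active_span_seconds": span,
    }

trace = [
    EditEvent(0, "type", 120),
    EditEvent(300, "type", 450),
    EditEvent(320, "delete", 60),
    EditEvent(900, "paste", 2500),          # one large block arrives at once
]
print(summarise(trace))
```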
This approach has real advantages. It is more difficult to launder a pasted wall of prose without leaving a signature, and a coherent timeline can corroborate a student’s legitimate workflow when an accusation is misplaced. In supervised exam settings, where the entire activity occurs inside Cadmus, the end-to-end capture can be extraordinarily useful. Outside that controlled context, the ceiling is lower. Students can paraphrase AI text before pasting, transcribe generated paragraphs to mimic typing, or write almost everything elsewhere and treat Cadmus as a submission conduit. For these reasons, universities that provide neutral guidance emphasise that Cadmus does not “detect” contract cheating. Rather, it surfaces evidence that an academic then interprets alongside viva voce, LMS version history and other discipline-appropriate checks.
Data, privacy and proportionality
Because behavioural analytics are central to Cadmus’ promise, procurement and governance documents must be explicit about data collection, storage, retention and access. Institutions should publish clear notices that explain exactly what is captured, where it is processed, how long it is retained and under what circumstances analytics can be inspected for an integrity investigation. They should also articulate the lawful basis for processing and the safeguards that limit function creep. From a cultural perspective, transparency helps, but proportionality matters more. Compulsory use in every subject will feel different to students than targeted use in specific contexts where process evidence is genuinely instructional or risk-reducing. Faculties that aim to build a trust-based assessment culture should consider whether always-on process monitoring aligns with their pedagogical values.
Accessibility and equity in practice
Equity extends beyond accessibility conformance. The “write inside the browser” model assumes stable bandwidth and compatible hardware. Vendor documentation and university FAQs warn that multi-tab usage can corrupt autosave and that a dropped session may require a manual refresh. Students on older devices or shared home connections are more exposed to these failure modes. Assistive tech workflows are another equity pressure point. Many students who rely on screen readers, dictation, specialist keyboards, or switch controls draft in finely tuned environments and then move content into the submission space. If Cadmus is mandatory for the entire workflow, disability advisers should verify that the browser editor supports equivalent speed and reliability for those users, especially in timed exams, before replacing a student’s optimised authoring stack.
Pedagogical fit: where Cadmus shines, and where it does not
Cadmus is strongest where the assessment is an individual written artefact and the process itself evidences learning and authenticity. Scaffolded essays, lab write-ups, reflective pieces and supervised in-place digital exams fit that profile. It is weaker for collaborative writing, design-heavy reports requiring precise layout, or media-rich artefacts that depend on tools beyond a browser editor. Even strong advocates acknowledge that templates and instructions need to be tuned to the discipline, and that some tasks remain better served by traditional authoring tools or studio software. The sensible stance is to treat Cadmus as a targeted instrument in an institution’s assessment toolkit rather than as a universal replacement for word processors.
AI and contract-cheating: effectiveness and limits of the process lens
One reason Cadmus has gained traction in the age of generative AI is that process capture sidesteps the weaknesses of content-only AI detectors. A typical AI-assisted pattern (generate, paste, submit) produces sharp signatures in the Cadmus timeline: sudden influxes of text, compressed durations and minimal revision. When paired with similarity analysis and academic judgement, those traces can support an integrity case. That said, the landscape is adversarial. Students can paraphrase before pasting, type out generated text, or interleave AI content with manual edits to soften the signal. False positives are also possible when a student legitimately drafts elsewhere and pastes in near-final text close to the deadline, or when connectivity interruptions produce gaps that look unusual. The only defensible posture is to treat Cadmus analytics as indicators that warrant human review, not as conclusive evidence.
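That posture can be made operational. The sketch below, which reuses the trace-summary shape from the earlier example, turns process signals into prompts for human review rather than verdicts; every threshold is an illustrative assumption that a real policy would calibrate locally, and none of this reflects Cadmus' actual scoring.

```python
# Hypothetical review indicators over a process-trace summary. Thresholds are
# illustrative assumptions only; flags invite a closer look, never a finding.

def review_indicators(summary: dict, deadline_gap_seconds: float) -> list[str]:
    """Return reasons a marker might look more closely; empty list otherwise."""
    flags = []
    if summary["paste_share"] > 0.8:
        flags.append("most text arrived via paste")
    if summary["revision_chars"] < 0.02 * summary["pasted_chars"]:
        flags.append("little editing after large pastes")
    if deadline_gap_seconds < 15 * 60:
        flags.append("bulk of text appeared within 15 minutes of the deadline")
    # Note the false-positive paths discussed above: legitimate external
    # drafting or a dropped connection can trip these same indicators.
    return flags

example = {"paste_share": 0.92, "revision_chars": 30, "pasted_chars": 2500}
print(review_indicators(example, deadline_gap_seconds=600))
```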
Implementation lessons for Australian providers
The first lesson is alignment. Use Cadmus where process evidence is pedagogically valuable or risk-reducing, and avoid it where the writing surface becomes the constraint. The second is transparency. Publish an accessible privacy notice that explains precisely which usage data Cadmus collects, who can view it, how long it is kept and for what purposes it may be accessed, aligning with Australian privacy law and institutional policy. The third is accessibility testing. Do not rely on a conformance statement alone. Run screen-reader drills in live subjects with real templates, verify focus order, keyboard-only navigation and error messaging under time pressure, and confirm that long documents and complex structures remain usable. The fourth is operational hygiene. Teach students the practices that reduce false signals and lost work: single-tab use, stable connectivity, and saving and submission checks. Explain how drafting in external tools interacts with Cadmus' analytics so they can make informed choices. The fifth is proportionate use of analytics. Treat progress views as a tool for formative check-ins and early support rather than as a detector. When concerns arise, triangulate Cadmus traces with viva voce, LMS version histories and discipline-specific questioning before escalating.
Bottom line
Cadmus is a mature, LMS-integrated environment that can standardise authoring and submission, surface useful process analytics, and, in supervised in-place exams, reduce opportunities for misconduct. Those benefits are real and documented in partner-university descriptions and vendor case studies. Equally real are the trade-offs: a constrained writing surface compared with full word processors; dependence on stable connectivity and single-tab discipline; an analytics model that informs but does not "detect"; and cultural debates about surveillance, privacy and academic freedom that some faculties consider disqualifying. Australian institutions assessing Cadmus in 2025 should therefore treat it as a focused tool for specific assessment designs, not a blanket solution for academic integrity. Where it fits the task, and where deployment is accompanied by transparent governance, rigorous accessibility testing and proportionate use of analytics, Cadmus can lift consistency and give educators a clearer view of the learning process. Where those guardrails are absent, or where the pedagogical costs of surveilled writing outweigh the integrity benefits, redesigned assessments, traditional authoring tools, and educator-led vivas will remain the better choice.
