Around the world, millions of people devote years to learning English, shaping entire lives, careers and identities around a language that is not their first. They study, work, parent, research and dream in English. Yet their official proof that they can use this language – the certificate that unlocks visas, degrees and jobs – is treated as if it has the shelf life of supermarket yoghurt. IELTS, TOEFL, PTE Academic and other high-stakes English tests are typically recognised for only two years. Once that date passes, the result is described as “no longer valid”, as if the language itself has quietly evaporated. It is hard not to ask a blunt question: what on earth is going on here?
The logic behind this two-year expiry has never fully matched lived experience. A person can complete a bachelor's degree entirely in English, then a master's degree, then spend a decade working in an English-speaking environment, only to be told that their language proficiency has “expired”. The re-test requirement is not triggered by any evidence of decline in their ability. It is triggered by the turning of a calendar page. Proficiency, in this policy universe, is not something built over years of sustained study and use. It is something that apparently collapses on a timer, regardless of how much English the person uses every hour of every day.
Behind the neutral language of “validity periods” and “policy settings” sits a less comfortable reality. Language testing has become a global industry with its own business model, revenue streams and growth targets. Every time a certificate ages out, there is a new paying customer. Every time a government, university or employer insists on a “recent” score, that industry is fed. On paper, this is framed as quality assurance. In practice, it looks very much like a subscription model where the right to prove you speak English must be renewed on a regular cycle.
It is not difficult to see who is affected. International students trying to access education. Skilled workers applying for visas. Teachers and academics building careers in English. Nurses, doctors, engineers and tradespeople who have already passed earlier tests, completed degrees and been trusted with complex responsibilities. Many of these people operate comfortably at advanced levels of English. They publish in academic journals, negotiate contracts, lead teams, teach in universities and deliver conference presentations. Yet, two years after their last test date, they are deemed linguistically suspect until they pay for another examination. The emotional and financial strain of this retesting cycle is enormous, especially for people supporting families, paying fees and navigating precarious immigration settings.
Supporters of the current model insist that the two-year limit is not arbitrary. They point to a real phenomenon: language attrition. If people stop using a language, their proficiency can decline, sometimes very quickly. Anyone who has forgotten a school language will recognise that feeling of words retreating to the edges of memory. However, the leap from “language can fade in some circumstances” to “all test results expire at exactly 24 months” is huge. It ignores context, frequency of use and the difference between someone who once studied English at school and someone who has spent the last decade working as a lawyer, engineer or academic in English.
Context matters. A person who has hardly used English for ten years may indeed struggle with complex reading or speaking tasks. A person who has lived, worked and studied in English for ten years will often have stronger skills than when they first took the test. Yet both are treated identically by the validity rules. That lack of nuance inevitably damages the credibility of the system. It suggests that what matters most is not the reality of how a person uses English, but the date printed in the corner of a certificate.
The double standards are hard to ignore. Native speakers of English are almost never asked to sit language tests to prove their competence, even though literacy concerns in English-speaking countries are well-documented. Many native speakers routinely make serious errors in spelling, grammar and structure. They can be powerful professionals who struggle to write a clear paragraph or to understand complex written information. Yet their right to speak and work in English is never questioned, because the system assumes proficiency by birth. At the same time, multilingual speakers who write precisely, teach professionally and communicate with clarity in English can be locked out of opportunities because their test slip is more than two years old. Passport privilege quietly shapes who must keep buying proof and who is automatically trusted.
Once that pattern is visible, a harder question emerges. If this structure is really about safeguarding standards, why is it built on such a blunt instrument as expiry dates? Why is the main criterion a rigid time limit rather than ongoing evidence of language use? Why are employers, universities and governments often unwilling to consider richer indicators such as degrees completed in English, long-term professional roles, publications, teaching experience or performance in local interviews? Why is a timed test on one morning allowed to outweigh years of documented real-world communication?
The pricing of these tests intensifies the unease. International English examinations are not cheap add-ons. For many candidates, the fee represents weeks or months of income. When a person is living in a country with lower wages or unstable employment, that cost can be crushing. Add to that the expenses of travel to test centres, time off work, study materials and preparatory courses, and the financial burden multiplies. The system does not just demand proof of language. It demands proof of financial resilience, again and again.
As people look closer, another troubling pattern appears. It is not only the expiry dates that raise questions. It is the instability of the scores themselves. Many candidates report sitting IELTS, PTE or TOEFL three or four times in a short period, sometimes within days or weeks, with essentially the same level of preparation. They are the same person, with the same history and the same underlying proficiency. Yet the results can bounce unpredictably. Writing moves from 7.5 to 6.5 and back again. Speaking swings up and down by half a band or more. Reading and listening scores fluctuate in ways that do not match any real change in ability.
Testing organisations respond with technical terminology about “standard error of measurement” and “test–retest reliability”. Statistically, every score has a margin of error. Human assessment, especially in speaking and writing, can never be perfectly precise. These explanations are correct in theory. In practice, they are cold comfort to someone whose visa application is refused because their writing score fell just below a threshold the second time, even though their real-world English has not declined. Or to the candidate who misses a scholarship because this month’s speaking score is lower than last month’s, even though they spoke with the same fluency and sophistication.
If scoring is genuinely subject to this degree of variability for the same individual, can we reasonably pretend that cut-off thresholds are absolute indicators of who is fit to study, work or migrate? How fair is it to treat a single band score as a hard border between “proficient enough” and “not good enough” when the same person might cross that line in either direction in successive sittings? At what point do we admit that some of what is being measured is not only language ability but also test familiarity, topic luck, examiner differences, fatigue and random variation?
This day-to-day instability feeds a wider perception that high-stakes English exams are less like precise instruments and more like controlled lotteries. Online spaces are filled with strategies not about improving deep language skills, but about gaming the system. Candidates trade tips about which test centres are "easier", which examiners are more generous, which tasks are more predictable, and which months are less crowded. Energy that could have gone into building lasting communicative competence is diverted into decoding the quirks of a particular testing brand. It is difficult to square this reality with the image of neutral, scientific assessment that is marketed to governments and institutions.
Beyond technical issues, people are asking more fundamental questions about the design of these exams. Why are tasks still so far removed from the reality of how language is used? In daily life, professionals draft and redraft texts, refer to dictionaries, collaborate with colleagues, use grammar and spell-checking tools, and have time to think. Students in universities can consult readings, ask questions, check referencing, and revise essays. Workers in global teams use video calls, messaging apps, shared documents and automated translation tools. Yet many language tests still assess proficiency through highly artificial tasks: writing essays by hand in 40 minutes without resources, discussing unfamiliar topics under intense time pressure, or listening to contrived audio clips only once with no chance to clarify.
These tasks may be convenient to standardise and mark, but do they genuinely reflect the communicative demands of contemporary study, work and life? Or do they reward a narrower set of test-taking behaviours: speed, exam strategy, topic familiarity and resilience under pressure? When a system penalises small grammatical slips more heavily than it rewards logical argument, nuance or critical thinking, is it really measuring proficiency in any meaningful sense, or is it measuring conformity to a particular template?
Questions of power and perspective are becoming harder to avoid. Whose English sits at the centre of these tests? The listening and speaking tasks, the reading passages, the scoring rubrics and the “model” answers are heavily anchored in particular varieties of English and particular cultural contexts. Global Englishes, spoken by millions across Africa, Asia, the Middle East, Latin America and Europe, often appear only as accents to be decoded, if at all. The communication patterns of a Nigerian academic, an Indian engineer, a Brazilian journalist or a Kenyan nurse are often treated as deviations that must be pushed towards a narrow standard, even when those patterns are perfectly effective in most international settings.
At the same time, the burden of proving competence almost always falls on those with “the wrong passport”. If a person grew up in certain countries, their English is never questioned; their education and passports do the talking. If a person grew up elsewhere, they can spend their entire adult life proving themselves to systems that refuse to believe what is obvious in every conversation. The history of colonisation, economic inequality and linguistic hierarchy does not sit outside this story. It runs through it.
There is also the psychological and emotional toll. High-stakes language tests are not a harmless hoop to jump through once. For many, they become a recurring source of anxiety, shame and exhaustion. Candidates juggle long work hours, family responsibilities and financial pressure alongside repeated attempts to achieve an exact score. A difference of half a band can stand between them and a visa, a job, a scholarship or a safe life. Every re-test means another round of waiting, worrying and wondering if a single slip in concentration will undo months of planning.
The human stories behind the statistics are stark. Families are separated because a parent narrowly missed a score. Skilled professionals are forced to delay career moves because they must sit yet another test. Students lose offers due to score expiry, even though they have been studying in English since childhood. Teachers, translators and interpreters are told to prove again that they speak the language they use to earn their living. Over time, many internalise the message that they are perpetually on trial, even when their daily reality contradicts it.
At the same time, the global economy is entering an era in which technology reshapes how language is used. Machine translation, voice recognition, AI-powered writing tools and assistive technologies are woven into workplaces and classrooms. In many real professional contexts, the question is no longer “Can you produce flawless language unaided in a short time?” but “Can you understand, evaluate, adapt and communicate meaningfully using all the tools available?” Yet most English tests remain anchored in an older vision: the solitary candidate in a silent room, stripped of all resources, proving their worth on a blank page.
All of these contradictions push us back to the core questions. What is this system truly for? If the goal is to ensure that people can function in English in study, work and public life, why is the assessment model so disconnected from real communicative practice? If the goal is to protect the integrity of institutions, why is the system built around scores that are both highly perishable and variable? If the goal is fairness, why are financial barriers and passport hierarchies so baked into the process?
There are alternative approaches. One possibility is to move away from the idea that all test results magically decay at the same rate. Validity periods could be linked to evidence of ongoing use. Someone who has not engaged with English for a long time might need a more recent demonstration of competence. Someone who has completed degrees, taught courses or worked full-time in English for many years could reasonably be treated differently. At a certain point, an individual’s record of achievement and performance ought to carry more weight than an ageing test slip.
Another option is to develop lighter, cheaper re-validation processes instead of forcing people back through full exams. If there is genuine concern that proficiency may have declined, candidates who already achieved high scores could be offered shorter, targeted checks that confirm maintenance of skills. These assessments could focus on critical areas, be delivered at low cost and acknowledge the person’s existing history. That would honour the idea of currency without trapping people in an endless cycle of full-fee high-stakes exams.
Institutions, too, can widen their understanding of evidence. Universities could accept a broader range of proofs for applicants who have already studied extensively in English. Employers could place more emphasis on interviews, work samples and probation periods rather than blindly outsourcing judgment to test scores. Governments could design migration policies that recognise long-term residence, local study and employment as legitimate indicators of language ability, rather than insisting on repeated tests for people who pass their daily lives in English.
Crucially, any reform must be built on transparency. At present, the inner workings of test design, scoring, reliability and financial structures are rarely visible to the public. Candidates are expected to treat the system as infallible, even when their own experiences suggest otherwise. If assessment organisations want to retain trust, they need to open up their processes to independent scrutiny and meaningful dialogue with the communities they serve. That includes being honest about the limits of precision, the reasons for expiry policies and the financial incentives that come with retesting.
None of this requires abandoning standards. Language matters. Students need sufficient English to succeed in academic programs. Workers need enough to communicate safely and effectively in their roles. Migrants deserve access to information and services in languages they understand. The question is not whether standards should exist, but whether the current model of enforcing them is fair, rational and aligned with how language actually works in human lives.
Right now, too much of the system looks like a maze designed without the people who must walk through it. Certificates that expire on strict timers regardless of real usage. Scores that vary noticeably across repeated sittings for the same individual. Tasks that reward test savvy more than deep communicative competence. Financial barriers that fall heaviest on those with the least. And an underlying assumption that language is a commodity that must be continually re-purchased.
Languages do not function like that. English, like any other language, is built in conversations, classrooms, workplaces, friendships and families. It is reinforced through stories, instructions, emails, arguments, jokes and late-night messages. It weaves itself through the brain over years of effort and exposure. It does not silently disappear at midnight on the second anniversary of an exam. And when a person spends their life living, thinking and working in English, it is absurd to pretend that a piece of paper knows their ability better than the evidence of their everyday existence.
So the questions keep coming, and they will not disappear. What on earth is going on? Why have we normalised a system in which people must repeatedly buy permission to prove they speak the language they already use to build our universities, hospitals, companies and communities? Who is truly benefiting from these expiring certificates and fluctuating scores? Why have we allowed unstable, short-lived and often contradictory test results to hold so much power over who can study, work, migrate and belong?
Until those questions are answered with honesty and courage, the credibility of high-stakes English testing will remain in doubt. And millions of capable, fluent speakers will continue to pay the price for a system that talks about opportunity, yet too often behaves like a subscription service for access to a language they already call home.
