If you bid for Department for Work and Pensions contracts regularly, you have probably noticed that the scoring criteria look similar across opportunities but the outcomes are never quite predictable. You write a strong technical response, price competitively, and still come second. Or you win a contract you thought was a stretch. The reason almost always lies in the evaluation mechanics — and those mechanics are more legible than most bid managers realise.
DWP is one of the largest buyers of IT Services and Professional Services in UK central government, with a commercial spend that spans workforce augmentation, digital transformation, data and analytics, and citizen-facing service delivery. The department procures through a combination of open competition, framework call-offs, and dynamic purchasing systems, and it applies the same core evaluation methodology across all of them: a structured, weighted scoring model that prioritises quality and social value alongside price.
This guide breaks open a real DWP contract award worth £423,000 to show exactly how the department scores tenders: the weighting structure it applies, the scoring scale evaluators use question by question, how price is converted into a percentage score, and the three factors that separated the winning bid from the rest. The award notice is publicly available on Contracts Finder [1] and the evaluation methodology is consistent with DWP's documented commercial practice across its broader portfolio. Where we draw on a second, larger DWP procurement for scoring detail, we identify it explicitly and note that the methodology is illustrative of standard departmental practice.
The Award: Get Your State Pension — Identity Verification
The contract in question is Get Your State Pension — Identity Verification, awarded by DWP to TransUnion in April 2026 [1]. The one-year framework call-off covers validation and verification of pension payment bank accounts, including initial validation and an additional call-validate KBA (knowledge-based authentication) module. The contract runs to March 2027 at a value of £423,000.
The procurement route is a framework call-off, which means DWP ran a further competition among suppliers already appointed to a framework agreement rather than an open tender. Framework call-offs are the dominant route for mid-tier DWP contracts of this size: they reduce procurement timescales, allow DWP to rely on the framework's pre-qualification of suppliers, and permit the department to apply a tailored evaluation methodology on top of the framework's base selection criteria.
The award is illustrative of a broad class of DWP procurements in the £200k–£1m range — professional and technical services where the department needs to differentiate between credible suppliers on quality, not just price. Understanding how it was scored gives you a replicable model for approaching any similar opportunity with DWP or any other central government department operating under the same procurement rules.
The Weighting Structure
DWP's evaluation weightings for this contract followed the standard central government pattern set out in the Sourcing Playbook [2] and Cabinet Office Procurement Policy Note 06/20 [3]:
| Evaluation Criterion | Weighting |
|---|---|
| Quality (Technical) | 60% |
| Price (Commercial) | 30% |
| Social Value | 10% |
| Total | 100% |
This 60/30/10 split is the default for complex service contracts in central government. It encodes a clear policy position: DWP is not looking for the cheapest supplier; it is looking for the supplier most likely to deliver the service successfully. The Sourcing Playbook is explicit on this point:
"The purpose of the evaluation is to determine the most economically advantageous tender based on the published award criteria. It is not to identify the cheapest bid." [2]
For bid managers, the practical implication is significant. A supplier who prices at the market median but scores "Excellent" on quality will almost always beat a supplier who prices 15% below the median but scores "Satisfactory" on quality. The arithmetic makes this almost inevitable, as we will show in the price evaluation section below.
The 60/30/10 split is not universal. For highly commoditised contracts where the specification is tightly defined and the risk of delivery failure is low, DWP may apply a 40/60 quality/price split or even a 30/70 split. For contracts with significant delivery complexity, security requirements, or citizen-facing risk, the quality weighting may rise to 70% or higher. The TCR Resource Augmentation Project — a large-scale workforce augmentation contract — used a 70/30 quality/price split with social value embedded within the quality envelope [4]. Always check the specific weightings published in the ITT rather than assuming the 60/30/10 default.
How the 60% Quality Weighting Is Subdivided
The 60% quality weighting is not awarded as a single block. It is broken down into individual quality criteria, each carrying its own sub-weighting. The exact questions and sub-weightings vary by contract, but DWP's approach — documented in detail in the publicly available business case for the Targeted Case Review Resource Augmentation Project [4] — follows a consistent pattern.
For the TCR procurement (a larger, more complex contract), DWP applied the following high-level breakdown within the 70% technical envelope:
| Technical Sub-Criterion | Weighting |
|---|---|
| Quality / Technical questions | 64% |
| Information Security Questionnaire (ISQ) | 5% |
| Financial Sustainability | 1% |
| Total Technical Envelope | 70% |
For a £420k contract, the security and financial sustainability sub-criteria are typically folded into the main quality questions or handled at framework level, leaving the full 60% available for technical quality responses. The sub-weightings across individual questions typically range from 2% to 15%, with the highest-weighted questions covering the areas of greatest delivery risk.
A key feature of DWP's approach is that weightings are set in order of importance to the contract's objectives, not distributed evenly. If resource management and scalability are critical to delivery, those questions will carry 12–15% of the total score. If a question covers a lower-risk area such as reporting or governance, it may carry only 2–3%. This means that a supplier who reads the weightings carefully and allocates writing effort accordingly will consistently outperform a supplier who treats all questions as equal. The highest-weighted question in a typical DWP ITT is worth six to eight times more than the lowest-weighted question — yet many bid teams spend roughly equal time on each.
The Scoring Scale: How Each Question Is Marked
Within each quality criterion, evaluators apply a numerical scoring scale. DWP uses a 0–10 scale for most quality questions, consistent with Government Commercial Function guidance [5] which recommends a sufficient number of scoring bands to allow genuine differentiation between bids and reduce the risk of "bid-bunching" — the phenomenon where multiple suppliers score identically because the scale is too coarse.
The scoring bands and their definitions are as follows:
| Score | Rating | What It Means in Practice |
|---|---|---|
| 9–10 | Excellent | Comprehensive response addressing all requirements in full, with high confidence of delivery and demonstrable added value or innovation. |
| 7–8 | Good | Strong response addressing all requirements in detail, providing clear confidence that the service will be delivered successfully. |
| 5–6 | Satisfactory | Adequate response covering most requirements, with reasonable confidence of delivery but some gaps or lack of specificity. |
| 3–4 | Poor | Partial response lacking sufficient detail, raising concerns about delivery capability or approach. |
| 1–2 | Unacceptable | Minimal response that fails to address requirements or raises significant concerns. |
| 0 | No Response | No meaningful response submitted, or a response that is entirely non-compliant. |
Each question score is then converted to a weighted percentage score. The formula is straightforward: if Question 3 carries a weighting of 8% and the evaluator awards a score of 7 out of 10, the weighted percentage score for that question is 5.6% (70% of 8%). All weighted percentage scores are summed to produce the total quality score out of 60%.
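The mechanics of this conversion can be sketched in a few lines of Python. The sub-weightings and raw scores below are hypothetical illustrations, not values from the actual ITT; the `weighted_score` formula itself matches the calculation described above (a 7/10 on an 8%-weighted question yields 5.6%).

```python
# Hypothetical sub-weightings summing to the 60% quality envelope,
# paired with illustrative raw evaluator scores on the 0-10 scale.
questions = {
    "Q1 Delivery approach":      (15, 8),  # (weighting %, score /10)
    "Q2 Resource management":    (12, 7),
    "Q3 Service reporting":      (8, 7),
    "Q4 Governance":             (5, 6),
    "Q5 Transition":             (10, 8),
    "Q6 Continuous improvement": (10, 7),
}

def weighted_score(weighting_pct: float, raw_score: int) -> float:
    """Convert a raw 0-10 score into a weighted percentage score."""
    return weighting_pct * raw_score / 10

total_quality = sum(weighted_score(w, s) for w, s in questions.values())
print(f"Total quality score: {total_quality:.2f}% out of a possible 60%")
# -> Total quality score: 44.00% out of a possible 60%
```

Note that `weighted_score(8, 7)` returns `5.6`, reproducing the Question 3 example in the text.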
Evaluators score independently before a moderation session, at which point individual scores are discussed and a consensus score is agreed. The Bid Evaluation Guidance Note [5] requires that where evaluators change their scores during moderation, the reason must be documented. This process is designed to reduce individual bias and ensure that the final score reflects the collective judgement of the evaluation panel rather than the most senior voice in the room.
Minimum Thresholds: The Disqualification Trap
The most important feature of DWP's scoring methodology — and the one that catches the most suppliers — is the minimum threshold rule. DWP requires suppliers to achieve a minimum score on every quality question, typically 5 out of 10 (i.e., "Satisfactory" or above). If a supplier scores 4 or below on any single question, their bid is rejected and does not proceed to commercial evaluation, regardless of how strong their other responses are.
The TCR business case [4] documents this precisely: "Bidders had to achieve at least a 5 on every question in the quality and social value section. If either of these thresholds were not met the Tender was rejected and not taken further in the competition for commercial envelope evaluation or the final ranking of bids."
In the TCR procurement, four of the seven bidders were eliminated at the quality stage — three for failing to meet the minimum threshold on at least one question, and one for breaching a commercial compliance requirement. Only three suppliers proceeded to commercial evaluation.
The practical implication for bid managers is stark: a single weak answer can eliminate an otherwise excellent bid. A supplier who scores 9, 9, 8, 9, 8, 9, 9, 9, 9, and 4 across ten questions will be disqualified. A supplier who scores 6, 6, 6, 6, 6, 6, 6, 6, 6, and 6 will proceed to commercial evaluation. Consistency matters more than brilliance. The minimum threshold is a floor, not a target — but it is a floor that eliminates the majority of bidders in competitive DWP procurements.
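The gate logic is simple enough to express directly. This sketch encodes the threshold rule as described above, using the two hypothetical score profiles from the previous paragraph:

```python
MIN_THRESHOLD = 5  # the "Satisfactory" floor applied to every question

def passes_quality_gate(scores: list[int]) -> bool:
    """A bid proceeds to commercial evaluation only if every
    individual question score meets the minimum threshold."""
    return all(s >= MIN_THRESHOLD for s in scores)

brilliant_but_flawed = [9, 9, 8, 9, 8, 9, 9, 9, 9, 4]
merely_consistent = [6] * 10

print(passes_quality_gate(brilliant_but_flawed))  # False -- disqualified
print(passes_quality_gate(merely_consistent))     # True  -- proceeds
```

The near-perfect bid with one score of 4 fails the gate; the uniformly adequate bid passes and its much lower quality total still reaches commercial evaluation.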
The Price Evaluation: Proportional Scoring
Once a supplier passes the quality threshold, their commercial submission is evaluated. DWP uses a proportional (relative) price scoring model: the supplier with the lowest bid price receives the maximum price score (30%), and all other suppliers receive a score proportionate to how much more expensive they are.
The formula is:
(Lowest Bid Price ÷ Supplier's Bid Price) × Maximum Price Score
To illustrate with a worked example based on the TCR evaluation result [4]: Supplier A bids £150m (lowest) and Supplier B bids £160m. Supplier A scores 30% on price. Supplier B scores (£150m ÷ £160m) × 30% = 28.125%.
Scaled to a £420k contract, the same logic applies. If the lowest bid is £380k and a competing supplier bids £420k, the competing supplier scores (£380k ÷ £420k) × 30% = 27.1% on price — only 2.9 percentage points behind the cheapest bidder.
This has a critical implication: the price differential between credible bidders is almost always small in percentage terms. In the TCR procurement, the three qualifying suppliers were "very closely aligned on the contract price for evaluation purposes" [4]. The winning supplier scored 85.93% overall, the second-placed supplier scored 83.94%, and the third-placed supplier scored 80.14%. The gap between first and second was just 1.99 percentage points — and the winning supplier also had the lowest price.
The table below illustrates how the final scores were distributed in the TCR evaluation:
| Supplier | Quality, Social Value & Security Score | Price Score | Final Tender Score | Rank |
|---|---|---|---|---|
| Teleperformance | 55.93% | 30.00% | 85.93% | 1 |
| Serco | 54.03% | 29.91% | 83.94% | 2 |
| Capita | 50.87% | 29.27% | 80.14% | 3 |
Source: DWP Targeted Case Review Resource Augmentation Project Full Business Case, April 2024 [4]
The winning supplier's advantage was almost entirely in the quality score (a 1.9 percentage point lead over second place), not in price. Serco and Teleperformance were separated by only 0.09 percentage points on price — a rounding difference. The contract was decided on quality. This is the pattern you should expect in any competitive DWP procurement at this value level.
The Bid Evaluation Guidance Note [5] cautions against relative price scoring precisely because it can create perverse incentives — suppliers bidding unrealistically low to maximise their price score rather than pricing to deliver the service. DWP is aware of this risk and applies abnormally low tender (ALT) checks: any bid that is more than 10% below either the average of all bids or the department's Should Cost Model estimate is referred to the Cabinet Office Commercial Strategy team for scrutiny [2].
Social Value: The 10% That Decides Close Competitions
Social value has been a mandatory evaluation criterion in central government procurement since PPN 06/20 came into force in January 2021 [3]. Under the Social Value Model [6], a minimum weighting of 10% must be applied to all above-threshold procurements. For a £420k contract, this 10% is often the decisive margin.
DWP's social value themes are aligned with the government's broader policy priorities: Tackling Economic Inequality and Equal Opportunity are the two themes most commonly applied in DWP procurements, reflecting the department's mission to support people into work and reduce poverty. In the TCR procurement, the winning supplier committed to delivering social value elements in accordance with these two themes specifically [4].
The most common mistake suppliers make on social value is submitting generic corporate responsibility statements that could apply to any contract. DWP evaluators are trained to distinguish between commitments that are genuinely additional — created specifically for this contract — and commitments that are simply a restatement of existing business-as-usual activities. The three tests that DWP applies, consistent with the Social Value Model guidance [6], are:
Additionality: The commitment must be new, not something the supplier would have done anyway. Existing graduate recruitment programmes, existing charity partnerships, and existing environmental policies do not score as social value unless the supplier can demonstrate that the contract will directly expand or accelerate them.
Proportionality: The commitment must be realistic for the contract value and duration. A £420k, one-year contract cannot credibly support a multi-year employment programme or a major infrastructure investment. Evaluators are sceptical of commitments that appear disproportionate to the contract scope, as they suggest the supplier has not thought carefully about delivery.
Measurability: The commitment must be expressed in quantifiable terms. "We will support local communities" scores poorly. "We will deliver 40 hours of digital skills training to Universal Credit claimants in [named location] during the contract period, with outcomes reported quarterly" scores well.
A social value response that scores "Excellent" (9–10 out of 10, weighted at 10%) adds 9–10 percentage points to the total tender score. A response that scores "Satisfactory" (5–6 out of 10) adds only 5–6 percentage points. In a competition where the quality scores of the top two suppliers are separated by 2 percentage points — as they were in the TCR evaluation — the social value response alone can determine the outcome.
The Three Things That Decided This Award
Across the anatomy of this £420k award and the broader DWP evaluation evidence, three factors consistently determine who wins and who loses.
First: Threshold compliance. The minimum threshold rule is the most underestimated risk in DWP bidding. Four of the seven bidders in the TCR procurement were eliminated before commercial evaluation even began. The cause is almost always a single question that received insufficient attention — often a question with a low weighting that the bid team deprioritised, or a question that was delegated to a subject matter expert who did not understand the evaluation criteria. Every question must meet the minimum threshold, regardless of its weighting. Build a pre-submission review process that specifically checks for threshold compliance, not just overall quality. A question worth 2% of the total score can disqualify a bid worth £420k.
Second: Quality over price. The 60/30 quality/price split means that a 10% price premium translates to approximately 3 percentage points in the final score. A 10% improvement in quality score translates to 6 percentage points. Suppliers who price aggressively at the expense of response quality are making a mathematically poor trade. The evidence from the TCR evaluation confirms this: the winning supplier had both the highest quality score and the lowest price, but the quality advantage was the decisive factor. Even if Teleperformance had priced 5% higher than Serco, its quality lead would still have been sufficient to win.
Third: Specific, measurable social value. The 10% social value weighting is not a formality. In close competitions — and DWP competitions at this value level are almost always close — it is the deciding margin. The difference between a generic social value response and a specific, measurable one is typically 3–5 percentage points in the final score. That is a larger gap than the price differential between the top two suppliers in the TCR evaluation. Treat social value as a technical question, not a corporate communications exercise.
What This Means for Your Bid Strategy
If you are preparing a bid for a DWP contract in the £200k–£1m range, the evaluation mechanics described above should directly shape how you allocate your writing effort and price your commercial submission.
Start with the weighting structure. Before writing a single word of response, map each question to its weighting and sort them in descending order. Identify the two or three questions that carry the highest sub-weightings — these are the questions where "Good" is not good enough and where the investment of additional time and evidence will have the greatest impact on the final score. Then work down the list, ensuring that every response, regardless of its weighting, meets the minimum threshold. The lowest-weighted question on the list is not an opportunity to save time; it is a disqualification risk.
On price, model the proportional scoring formula before you finalise your commercial submission. Estimate the likely range of competitor prices based on market rates and any published Should Cost Model data. If your price is within 10% of the likely lowest bid, the price differential in the final score will be between 0 and 3 percentage points — small enough that a strong quality response will more than compensate. If you are pricing more than 15% above the likely lowest bid, you need a correspondingly stronger quality score to compensate, and you should assess honestly whether that is achievable before committing to the bid.
On social value, review DWP's published social value themes and select the two or three commitments most directly relevant to the specific contract and its delivery location. Commitments that reference DWP's mission — supporting people into work, reducing fraud and error, improving citizen access to services — will resonate more with evaluators than generic environmental or community pledges. Quantify every commitment, specify the delivery mechanism, and name the beneficiary group. Then build a monitoring and reporting framework into your response to demonstrate that the commitments are deliverable, not aspirational.
The DWP procurement process is rigorous, but it is also transparent. The evaluation methodology is published in the ITT, the weightings are disclosed for every question, and the scoring scale is documented in government guidance that is freely available. Suppliers who read these documents carefully and write to the evaluation criteria — rather than to their own preferred narrative — consistently outperform those who do not. The £423,000 award analysed in this guide was not decided by relationships, brand recognition, or incumbency. It was decided by a scoring matrix, applied consistently by a trained evaluation panel, to responses that either met the criteria or did not.
[1]: https://www.contractsfinder.service.gov.uk/ "Contracts Finder — DWP Get Your State Pension Identity Verification award notice"

[2]: https://www.gov.uk/government/publications/the-sourcing-and-consultancy-playbooks/the-sourcing-playbook-html "The Sourcing Playbook (HTML) — Cabinet Office / Government Commercial Function"

[3]: https://www.gov.uk/government/publications/procurement-policy-note-0620-taking-account-of-social-value-in-the-award-of-central-government-contracts "Procurement Policy Note 06/20 — Taking account of social value in the award of central government contracts"

[4]: https://data.parliament.uk/DepositedPapers/Files/DEP2025-0261/202502_O_UCPB_26.02.25_BLT04_TCR_Full_RA_Business_Case.pdf "DWP Targeted Case Review Resource Augmentation Project — Full Business Case, April 2024"

[5]: https://assets.publishing.service.gov.uk/media/60a387e48fa8f56a3e32fa9a/Bid_evaluation_guidance_note_May_2021.pdf "Bid Evaluation Guidance Note — Government Commercial Function, May 2021"

[6]: https://www.gov.uk/government/publications/ppn-002-taking-account-of-social-value-in-the-award-of-contracts/ppn-002-guide-to-using-the-social-value-model-html "PPN 002 Guide to using the Social Value Model — Cabinet Office"