12 Secrets all Winning Research Grants Have in Common [2025 Analysis]
Did you know that 70–90% of research grant applications in Germany in 2024–2025 were rejected?
And since scientists have no shortage of good ideas, and AI has already arrived in research, competition is fiercer than ever.
To help you win your next grant and impress the reviewers, Wildtype One:
analyzed every 2024-2025 German research grant proposal we could access,
then separated successful vs. unsuccessful proposals
and compared 12 key attributes (factors) of each.
The analysis focuses on cancer and human disease research in Germany, but the lessons transfer to other fields and countries.
The 12 factors with the most influence on research grant success in 2024-2025 were:
Institutional affiliations
Research team composition & size
Publication track record
Prior grant experience
Communicating innovation & novelty
Alignment with funder priorities
Research methodology strength
Impact on health outcomes (clinical translation)
Writing quality
Interdisciplinary collaboration
Project feasibility & risk management
Researcher reputation
But two factors were removed from the list.
What you should not worry about
In fact, we first analyzed 14 factors of winning proposals, not 12.
Then we removed two factors from the list:
Requested budget
Project duration
While many researchers stress over calculating their budgets or reporting a feasible duration, we found that those two factors did not influence research proposal success.
Successful and unsuccessful proposals often request similar budgets and have similar time spans.
So unless your budget is absurdly high or the project duration is unrealistic, do not spend a long time on those two attributes of your proposal.
Now, back to the list of 12.
🧫 Join our network of 400+ elite researchers by signing up for Wildtype One’s FREE newsletter.
1. Institutional affiliations
This may sound obvious. But winning grants disproportionately come from prestigious research institutions and Germany’s top universities and centers:
Technische Universität München (TUM)
Ludwig-Maximilians-Universität München (LMU)
Heidelberg University (with the German Cancer Research Center, DKFZ)
Charité–Universitätsmedizin Berlin
University of Cologne
…among others.
What can smaller institutes do? Our data shows that smaller universities and clinics secure grants by partnering with larger centers or demonstrating niche expertise.
2. Research team composition & size
There is a stark contrast here. Large, successful projects assemble substantial teams. Examples:
The DECIPHER-M metastasis consortium (BMBF) lists 7 principal investigators (PIs) leading subprojects (from AI specialists to clinical oncologists)
The ResCPa CAR-T therapy consortium includes academic experts plus an industry partner.
DFG Collaborative Research Centres typically involve 15–25 research group leaders and dozens of junior scientists, reflecting their network scale.
Even mid-size translational grants require multiple PIs. The DKTK “Innovation 2024” call mandated at least 3 partner sites per project. Teams often spanned 5–10 senior researchers across institutions.
By contrast, smaller grants (DFG individual grants, most foundation project grants) are led by a single PI with a core team of 2–5 people (e.g., a PhD student, a postdoc, and technical staff).
Unsuccessful proposals often have either:
too limited a team (missing key expertise)
an unwieldy team (unfocused, too many collaborators without clear roles)
The sweet spot is a team sized and composed to deliver the project’s aims, no more and no less. We’ll explain more about that in a minute.
3. Publication track record
There’s a strong statistical correlation between past productivity and grant success. Studies of grant peer review have found that grant reviewer scores positively correlate with applicants’ bibliometric indicators (publication counts, citation impact).
At the time of writing (April 2025), virtually all large grant awardees from 2024 and 2025 had prior publications in the proposal’s domain, and they often cited their own seminal work to demonstrate feasibility.
Pro tip: Use the same keywords in your proposal that you used in your past work. If the keywords describing your grant differ from those in your publication track record, you may come across as a newcomer to the field.
4. Prior grant experience
Most funded researchers had prior grant experience. Some were previous DFG or EU grant winners.
This doesn’t mean all first-time applicants are going to fail. A few less-published teams got funded by emphasizing innovative ideas. This often happens in calls explicitly encouraging young investigators or high-risk projects.
5. Communicating innovation & novelty
All research produces new discoveries. But how well do you communicate yours?
We found that the successful proposals used the following words >3 times more often than unfunded ones:
"innovative"; "cutting-edge"; "pioneering"; "unprecedented"; "revolutionizing"; "novel"; "breakthrough"; "first-ever"; etc.
Note: In successful proposals, these words were used only where the idea justified them.
In fact, reviewers expect novelty. A BMBF panel statement noted that only projects addressing “drängende ungelöste Fragen” (pressing unsolved questions) in oncology would be funded.
The winning DECIPHER-M consortium describes “a unique approach that uses a new form of artificial intelligence (AI)—so-called multimodal Foundation models” to study cancer metastasis.
Unsuccessful proposals often failed to convince reviewers that the work was novel enough, sometimes due to vague language or a lack of clarity on how it differed from prior research.
6. Alignment with funder priorities
Funders often publish priority topics or missions, and successful applicants explicitly tailor their proposals to these.
Examples from our analysis:
Deutsche Krebshilfe’s focus areas in 2024 included “Translationale Onkologie” and emerging topics like “Krebs und Armut” (cancer and poverty). Unsurprisingly, proposals studying cancer outcomes in socioeconomically deprived populations were prioritized.
In contrast, many failed 2024–25 applications were solid science but a “mismatch” for the call’s intent (e.g., too fundamental for a clinically oriented call, or vice versa).
Responsiveness to the funder’s specific call is key.
7. Research methodology strength
In our analysis, funded proposals distinguished themselves by the depth of their methodology descriptions.
For instance, the DECIPHER-M metastasis project combines radiology, pathology, and genomics data analysis via AI. The consortium outlines how multimodal AI models will integrate these data types. The plan is cutting-edge but also grounded in a feasible, clear technique. (Some PIs are AI experts, lending weight to the plan.)
In contrast, unsuccessful proposals’ methodology sections were either vague (“we will study X” without specifics on how), overly ambitious without supporting evidence, or reliant on outdated methods.
Particularly in 2024–25, reviewers were attentive to modern techniques: proposals using CRISPR, single-cell, AI, novel imaging tools, etc. tended to score higher.
8. Impact on health outcomes (clinical translation)
The COVID experience underscored the importance of translating research into practice quickly.
Funders favor projects that ultimately benefit patients and explain their downstream clinical applications.
In the period we analyzed, granted proposals used phrases like:
“accelerating translation”; “clinical implementation”; “therapeutic innovation”; “improving patient outcomes.”
Two examples:
The ResCPa consortium (BMBF Grand Challenge) not only studies CAR-T cells in a lab, but also aims to develop a CAR-T therapy (CD318-targeted) for pancreatic ductal adenocarcinoma, with an industrial partner.
DECIPHER-M justified its AI approach by stating it will answer questions about metastasis and produce tools that adapt screening and treatment for high-risk patients—a direct nod to clinical impact.
9. Writing quality
Researchers are used to writing in dry, scientific language similar to peer-reviewed literature. But the best proposals read almost like a story told in a few short sentences.
In our analysis, we noticed the following commonalities in the writing style of successful grants:
They highlight the problem, solution, and impact, all in a few sentences
They use as many headings, figures, and formatting elements as needed to guide the reviewer through the narrative.
Their phrasing is both confident and realistic.
They use active statements like “We will test the hypothesis that…” or “This project will deliver…”, rather than hedging.
At the same time, they avoid exaggeration that isn’t backed by the plan, striking a balance between ambition and credibility.
They include well-chosen keywords that matched review criteria (e.g., “interdisciplinary,” “sustainable,” “evidence-based”)
They avoid overly lengthy background sections that take room away from the actual project plan.
At the same time, they do not assume the importance of the work is obvious without explicitly stating it.
10. Interdisciplinary collaboration
Our analysis found a strong correlation between field-spanning collaborations and grant success.
Successful 2024–2025 projects combined fields (e.g., molecular biology with bioinformatics, or immunology with engineering) to tackle complex cancer problems.
For instance, a funded project on colorectal cancer organoids and the microbiome (DKTK 2024 call) brought together cancer biologists and microbiologists to explore bacteria-tumor interactions.
The ResCPa project mentioned earlier unites immunologists, oncologists, and an industrial biotech partner to develop CAR-T cells.
Proposals stuck within one narrow field risked being seen as lacking breadth, unless they were in a basic science program.
This makes sense: complex disease questions require multi-faceted approaches. But it’s not enough to cram collaborators into a project. Successful grants showed true integration, with team members playing complementary roles rather than collaborating only on paper.
11. Project feasibility & risk management
Funders want to know if the proposed research is feasible. Can you realistically achieve your goals with the time and resources requested? Have you thought through the risks?
A detailed timeline with milestones is pretty standard in proposals. But we noticed an extra detail: top proposals explicitly discuss contingency plans. If a certain aim might not work, the applicants mention alternative approaches.
While we often cannot see the full text of proposals, guidance documents indicate that funders expect such discussion. The DFG, for instance, asks applicants to discuss how they will deal with “unexpected results” or failures in their work plan.
This translates to statements like:
“If technique X fails to yield sufficient sensitivity, we will switch to technique Y.”
“Should recruitment of patients fall short, we have access to additional sites via collaborator Z.”
Obviously, don’t shoot yourself in the foot by inventing imaginary risks. But be aware of the limitations and have a plan. Risk mitigation sections signal that the team is experienced.
A red flag, for example, is proposing to develop a new model without acknowledging the challenges involved. Reviewers see this as unrealistic.
Also, if your ambitions exceed the timeframe, acknowledge that, and set a smaller deliverable for the allocated period.
One last point: the 2024–25 funding committees were especially keen on seeing focused proposals. It is better to have a few clear aims with solid approaches than an unfocused wish list.
12. Researcher reputation
Finally, while funders evaluate the proposal’s merit, in practice the applicants’ CVs and track records carry a lot of weight in the decision.
Our data showed that repeat winners are common because reviewers trust them to deliver. For instance, Prof. Jakob Nikolas Kather, who won a Krebshilfe grant in 2022, now coordinates the DECIPHER-M consortium.
That said, early-career researchers are not excluded. But strong mentorship or collaboration with established groups boosts success. We found that proposals from relatively junior PIs had higher odds when a senior co-PI was on board or a letter of support from a renowned institution was included.
In short,
People win grants as much as ideas do. Funders give opportunities to promising newcomers, but not at the expense of quality. So young researchers must show they are rising stars by building their profiles and partnerships.
References and Data Sources
This analysis examines biology and biochemistry grant applications in Germany (2024–2025), focusing on those targeting human diseases, with cancer as a primary context.

We gathered data from funding bodies including, but not limited to: the DFG (Deutsche Forschungsgemeinschaft) for competitive research grants, the BMBF (Federal Ministry of Education & Research) for national health initiatives, Deutsche Krebshilfe (German Cancer Aid, a leading cancer research charity), relevant EU Horizon Europe health-related programs, and select private foundations/industry programs. Key sources were official databases and press releases (e.g., DFG’s GEPRIS project database, BMBF’s project portal, Deutsche Krebshilfe funding announcements, and Horizon Europe statistics).

Data include publicly announced funded projects and, where available, information on application totals and success rates (to infer unsuccessful application numbers). Unsuccessful grant details are typically confidential, so aggregate metrics (such as success rates or counts of proposals vs. funded awards) serve as proxies for the “unsuccessful” pool. Both quantitative metrics (funding amounts, durations, team size, etc.) and qualitative factors (proposal content and strategy) were analyzed. The period of interest is late 2024 and 2025 grants, capturing the latest funding patterns.