Frequently Asked Questions
How does Impacter work?
Impacter uses state-of-the-art Natural Language Processing to compare your text to our
existing corpus of successful and unsuccessful proposals, which helps to spot problems or
shortcomings in the text. Part of Impacter's feedback is given at the proposal level.
The rest of the feedback is provided inside the document, where Impacter poses questions
about parts of the text to help researchers be more concise, clear and specific.
When do I use Impacter?
You can use Impacter at two stages during the development of a research grant proposal.
First, when you have an initial idea or abstract, by using it to find related projects
funded in the past. Second, when you have a full draft of the proposal, to check for
common pitfalls.
How do I use Impacter?
You use Impacter by selecting the call that you are writing for and then uploading the
docx draft of your proposal. The proposal is automatically scanned and evaluated by
Impacter, and the feedback is provided to you within 2 minutes.
Does Impacter improve my chances of funding?
Over the past years, we have found that Impacter users have a 3 to 4 percentage point
higher success rate than non-users. While this correlation does not prove causation, we
believe that Impacter's checks contribute significantly to a better plan for societal
impact and help to prevent common pitfalls in knowledge utilization.
Does Impacter guarantee the success of proposals with a full score?
Unfortunately, no. The algorithms Impacter uses can be gamed. A full score only means
that everything Impacter measures seems to be present in your proposal. A check in
Impacter is a good first step to take before consulting, for example, the research
support office or a peer. Consulting Impacter first ensures that some of the ambiguity
is already addressed before your colleagues contribute.
Does Impacter lead to bland, uniform or one-size-fits-all proposals?
It seems intuitive that using the same software would lead to proposals that look like
each other. However, the algorithms in Impacter do the opposite: because they are trained
on historical data, they detect common and overused phrases and concepts. The feedback
then aims to help you make those generic statements specific to your research proposal.
How is my score determined?
Impacter works with Natural Language Processing, AI and keyword recognition. Impacter
uses a variety of indicators which are then compared to baselines. These
baselines are often call specific – for smaller grants, the expected outputs are
different from those for large multi-partner programs. You can read more about some
specific analyses in our blogs.
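As a rough illustration of what comparing indicators to call-specific baselines can look like, here is a minimal sketch in Python. The indicator names, values and baselines are entirely hypothetical; Impacter's actual indicators and scoring are proprietary and not shown here.

```python
# Illustrative sketch only: indicator names, values and baselines are
# hypothetical, not Impacter's actual (proprietary) scoring.

def score_proposal(indicators, baselines):
    """Flag each indicator that falls below its call-specific baseline."""
    feedback = {}
    for name, value in indicators.items():
        baseline = baselines[name]
        feedback[name] = "ok" if value >= baseline else "needs attention"
    return feedback

# Hypothetical numbers for a small-grant call:
indicators = {"stakeholder_mentions": 4, "dissemination_outputs": 2}
baselines = {"stakeholder_mentions": 3, "dissemination_outputs": 3}
print(score_proposal(indicators, baselines))
# {'stakeholder_mentions': 'ok', 'dissemination_outputs': 'needs attention'}
```

The point of the baseline lookup is that the same indicator value can be fine for one call and too low for another, which matches the call-specific baselines described above.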
What happens to my proposal when I upload it?
Impacter respects and understands the confidential nature of the ideas present in your
research proposal. This means that your proposal will under no circumstance leave our
servers, and that strict security measures are in place to safeguard the confidentiality
of your proposal.
We do use the proposals we have to improve the analyses in Impacter. Comparisons of
successful and unsuccessful proposals tell us a lot about the characteristics of winning
grant proposals. A nice example of an analysis that we were able to improve in this way
is our readability analysis.
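To give a sense of what a simple readability measure can look like, here is a minimal sketch using average sentence length as a proxy. This is our own hedged illustration, not Impacter's actual readability analysis.

```python
# Illustrative sketch: average sentence length as a crude readability
# proxy. This is NOT Impacter's actual readability analysis.
import re

def avg_sentence_length(text):
    """Average number of words per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

draft = "We will study X. The consortium disseminates results to stakeholders."
print(round(avg_sentence_length(draft), 1))
# 5.0
```

Comparing such a measure between successful and unsuccessful proposals is one way a readability baseline could be calibrated.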
More information about our privacy policy can be found via
this link.
Do I have to pay?
Impacter is paid for by your institution, so for individual researchers
it is free of charge, for as many proposals as you like. If your institution is not
yet a customer of Impacter, send us a quick
e-mail!