By: Colin Nelson
Previous articles in this series on collaborative enterprise innovation have covered topics such as the benefits of software platforms, internal communications, increasing engagement, and driving collaboration. In this article, we share insights on evaluating the large numbers of ideas generated as part of enterprise programs.
Your initial assessment of ideas from an idea campaign or ongoing ideation process is the first, and arguably the most important, gate an idea will pass through. Despite best intentions, ideas dismissed at this stage are rarely considered again, so selecting the best few for further consideration and, ultimately, implementation is crucial.
Innovation programs take many forms: sometimes ideas address small tactical needs, sometimes they align with the company vision, and from time to time you may receive many more ideas than you were expecting, perhaps because you asked a question relevant to a large group of people, or because the subject of the campaign really is an exciting area. All ideas need to be processed effectively to ensure continued engagement of participants and the sustainability of the program. Common challenges include:
- More ideas than expected – It’s always recommended to agree on review team members in advance of the campaign and ensure time is allocated in their diaries; however, this means making a judgment on how many ideas you’ll receive. The more ideas expected, the more reviewers will be required.
- Availability of good evaluators – Sponsors should nominate suitable evaluators in advance, ideally those who understand the scope of the challenge most closely and have time available to consider ideas. It can often be difficult to get time allocated, especially for the review team to meet and talk through submitted content.
- It’s not just about the idea – Review team members need to consider all comments against an idea, not just the idea itself. The initial idea is often far from the final implemented concept, and it may be the comments that take it in a new direction or make it practical. This of course takes more time, so campaigns with a lot of collaboration require more evaluation time.
- Parallel ideation campaigns – In a complex multi-national organization, parallel campaigns may be run in different languages, so you need to provide an evaluation team for each language and then build knowledge transfer or translation into the program to share the best ideas from each community.
- Evaluators are too slow – Evaluations completed alone can take far longer than when review team members meet in person; waiting for one or more evaluators to complete all the reviews allocated to them can be a slow and frustrating process for all.
- Avoiding group-think – Facilitating review team members to meet in person can help speed up the process, but the risk is that the discussion becomes dominated by one opinion, propagating ‘group-think’.
Tactics to maximize the effectiveness and efficiency of your evaluation teams:
- Ask better questions – The best way to avoid very large numbers of ideas is to ask a more specific question, perhaps sharing more detail on the evaluation criteria to reduce the number of ideas submitted that won’t make the grade.
- Use the community view – Set thresholds for ideas to be evaluated based upon interaction from the community. Ask the invitees to seek a number of comments that build and improve on the idea before it will be evaluated. Not only does this improve engagement and idea quality, it helps to ensure the evaluation team members spend their time on the most engaging ideas.
- Triage average and poor ideas out – Ask each reviewer to vote for the ideas they wish to consider in more detail, then take the majority opinion on which ideas to keep for detailed evaluation. This should remove 50–80% of the content, depending on the nature of the campaign.
- Pre-define some standard responses – Remember that every idea should receive feedback so that submitters don’t become disillusioned with the process. Consider the likely reasons for non-selection and offer standardized feedback for each category. For example: ‘Your idea is good, but we don’t have enough resources available to implement it at this time…’
- Use the most time on the best ideas – Once ideas have been triaged down, decide whether every reviewer should look at each idea and its associated comments, or whether each idea could be considered by, say, three different reviewers. The answer may depend on how much content you have to consider; look to maximize the efficiency of the team and its skills.
- Allow wildcards – To mitigate any ‘group-think’ that may arise from evaluation discussions between team members, allow each individual a small number of wildcards for ideas they feel passionately about.
- Decentralize the evaluation process – For very large or multi-national programs, consider having one team per department or country look at both the local and global impact of an idea; the best ideas are then promoted to a centralized team for wider implementation.
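For readers who run their programs on a platform that can export idea data, the community-threshold, majority-vote triage, and wildcard tactics above can be combined into a simple selection routine. The sketch below is illustrative only: the field names, the comment threshold, and the majority rule are assumptions, not part of any particular innovation platform’s data model.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    title: str
    keep_votes: int = 0    # reviewers who voted to consider this idea in detail
    comments: int = 0      # community comments that built on the idea
    wildcard: bool = False # flagged by one reviewer who feels passionately about it

def triage(ideas, n_reviewers, min_comments=3):
    """Shortlist ideas that cleared the community-comment threshold and
    either won a majority of reviewer votes or hold a wildcard."""
    shortlist = []
    for idea in ideas:
        if idea.comments < min_comments and not idea.wildcard:
            continue  # community-view filter: not enough engagement
        has_majority = idea.keep_votes > n_reviewers / 2
        if has_majority or idea.wildcard:
            shortlist.append(idea)
    return shortlist

ideas = [
    Idea("Self-service portal", keep_votes=4, comments=6),
    Idea("New canteen menu", keep_votes=1, comments=1),
    Idea("Radical pricing model", keep_votes=1, comments=5, wildcard=True),
]
print([i.title for i in triage(ideas, n_reviewers=5)])
# → ['Self-service portal', 'Radical pricing model']
```

Note how the wildcard carries the low-voted but well-discussed idea through, while the idea with little community engagement is filtered out before reviewers spend detailed evaluation time on it.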
The best way to ensure an efficient and effective evaluation process is to think about it before running the campaign: ensure the question will steer people towards quality, not quantity, and select evaluation team members and allocate time in their schedules for after the campaign is finished.
Use pre-selection filters such as the community view and a quick vote-based triage to remove ideas that simply won’t make the grade; this will help ensure your team spends their time on only the best content.
Finally, review all the comments and make a quality-based assessment, allowing evaluation team members to promote a small number of ideas that lack wider support, to minimize group-think and let more radical content filter through.
About the author
Colin Nelson is Director of Strategic Consulting at HYPE Innovation. Colin is a subject matter expert and thought leader, helping clients engage their enterprise to support existing or newly established programs on innovation, cost reduction, and business transformation. Recent clients include Abbott Labs, BASF, General Mills, Metso, and Swisslog.
HYPE Innovation Learning Program Articles
HYPE Innovation is producing a series of five articles to help innovation practitioners, and those new to collaborative innovation, understand how to build a successful and sustainable enterprise program. Each article will address a different theme, will focus on clear actions any company can take, and highlight pitfalls to avoid.
| Article | Title |
| --- | --- |
| Article #1 | How to Get the Most from your Innovation Software: Key Process Considerations |
| Article #2 | Using Communications to Drive Innovation – How to Develop an Engaging and Sustainable Program |
| Article #3 | Increasing Engagement in Enterprise Innovation |
| Article #4 | Driving Collaboration – Diverse Opinion is the Key to Innovation |
| Article #5 | Making the Right Idea Investment Choices – Idea Evaluation at Scale |