An exploration into the effectiveness of icons in mega menu design
Background
I was recently involved in a large-scale rebrand of a financial institution. My role centred on design support for the development teams. During the course of the mega menu development, questions were raised around the use of icons within the mega menu and whether they were appropriate in this context, the assumption being that the icons added to the cognitive load for users and negatively impacted the user experience. A simpler mega menu layout, with no icons, had been designed to address this assumption.
However, we lacked the data to know whether the proposed, simpler menu would indeed improve the usability of the mega menu, increase user satisfaction and improve the overall experience. For that reason, further research was needed.
We ran user testing sessions with a focus on usability. Below are the test artefacts we used; in this case study, I’ll run through our test preparation, the results of the sessions and how these were interpreted.


Test preparation
Testing question and hypotheses
The primary objective of this user test was to understand the impact of using icons within the mega menu. The test aimed to answer the following question:
Does the use of icons in the mega menu decrease levels of user satisfaction and reduce the overall experience for users?
To that end, the following hypotheses were put forward:
- Hypothesis: The menu icons create unnecessary cognitive load for users and negatively impact the overall experience when users try to identify the right path through the bank’s site using the mega menu.
- Null hypothesis: The menu icons do not create unnecessary cognitive load for users and do not negatively impact the overall experience when users try to identify the right path through the bank’s site using the mega menu.
Methodology
We were right in the middle of Covid, which took some options off the table in terms of how to conduct the actual test: the in-house testing lab that our colleagues in UX ran was shut due to the pandemic. With the lab unavailable, we chose Usertesting.com to run the testing sessions.
With Usertesting.com, we automatically got the following outputs from each testing session:
- Time to complete for each task
- The ability to add pre-test and post-test questionnaires, with a summary of results supplied
- Video recordings of each participant, with the ability to leave comments at specific points of the playback during analysis
We also had the option to use Usertesting.com for participant recruitment, or to source our own participants and administer the test manually.
Given the Covid constraints, we went ahead with an unmoderated, between-subjects, fully remote test using Usertesting.com.
Test artefacts
Taking the above into account, we considered two possibilities for the actual test artefacts:
- Interactive prototypes using InVision or Adobe XD
- Coded prototypes
Initially, we decided to use InVision to develop a full prototype of the mega menu. However, after we completed a first draft, concerns were raised around loading times for InVision prototypes. Given we were going down the unmoderated remote testing route with Usertesting.com, the risk of complications was too high.
So we shifted focus to a coded prototype. With this, there were a few considerations we took into account:
The brand
Because the new brand was not yet live, we could not test it: all environments where the new brand was available were internal at this stage. As the focus of the test was purely on the mega menu, and on whether the inclusion of icons helped or hindered users in finding what they needed, it was decided that, as long as both artefacts were identical except for the icons, we could use the current brand to test the theory.
We planned to use copies of the current homepage as the basis for the test.
Tracking considerations
- Artefact A: A clone of the current live menu with icons would be altered to include tracking parameters on all links
- Artefact B: A version of the proposed menu with no icons would be built out from scratch and tracking parameters added to all links
Apart from the above, we reviewed the homepage to ensure it was fit for purpose and that no other differences appeared between the two artefacts. We prepared both prototypes and pushed them to non-indexed test URLs on our live site. A rough sketch of the link tagging approach follows.
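As a minimal illustration of what tagging every menu link involves, here is a Python sketch; the parameter names (`test_artefact`, `menu_label`) and the example URL are hypothetical stand-ins, not our actual tagging scheme:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def tag_link(url: str, artefact: str, label: str) -> str:
    """Append tracking parameters to a menu link, preserving any
    existing query string. Parameter names here are hypothetical."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({"test_artefact": artefact, "menu_label": label})
    return urlunparse(parts._replace(query=urlencode(query)))

# The same menu link, tagged once per artefact:
print(tag_link("https://example.com/mortgages", "A", "mortgages"))
# https://example.com/mortgages?test_artefact=A&menu_label=mortgages
```

Tagging at the link level like this means every click recorded in analytics can be attributed both to an artefact and to the specific menu label the participant chose.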
Testing script
While prototypes were being developed, we worked on the test script and the metrics we wanted to measure. The test script covered:
- An introduction, so participants could understand the domain of the test (banking) and what would be involved. As the test was unmoderated and fully remote, it was important to remind participants to think aloud and be honest with their opinions.
- 6 scenario-based tasks
- A post-test questionnaire
Here’s an example of some of the scenario-based tasks we asked participants to complete:
You’re thinking about buying your first home and would like to understand how much of a mortgage you could get from the bank. Using the main navigation menu as your starting point, find a way to do this.
You need help with your finances and want to understand your options. Using the main navigation bar as your starting point, where would you find this type of information?
Test metrics
- Time to complete each task: this was the main metric to come from the scenario-based tasks that participants were asked to complete (see the sketch after this list).
- Usability: This was addressed mainly through the post-test Likert-style question and through analysis of the comments made by participants throughout the user testing session.
- Visual appeal: This was addressed mainly through the post-test question asking participants to describe the design presented to them in three words, and through analysis of the comments made throughout the session. This could help us understand whether the icons had an impact on participant responses.
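To make the task metrics concrete, here is a minimal sketch of how task success rate and mean time on task can be summarised per artefact; the session data below is made up for illustration, not our real results:

```python
from statistics import mean

# Hypothetical session results per artefact: (task_completed, seconds_on_task)
sessions = {
    "A (icons)": [(True, 42.0), (False, 95.0), (True, 51.0)],
    "B (no icons)": [(True, 38.0), (True, 44.0), (True, 40.0)],
}

for artefact, results in sessions.items():
    success_rate = sum(ok for ok, _ in results) / len(results)
    avg_time = mean(t for _, t in results)
    print(f"Artefact {artefact}: success {success_rate:.0%}, "
          f"mean time on task {avg_time:.1f}s")
```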
Participant selection
Within the bank, only a very select set of departments has the authority to speak with customers about their experiences, and unfortunately our team wasn’t one of them. For that reason, we chose a selection of people between the ages of 25 and 45 from the Usertesting.com panel. We felt that, as the audience for retail banking was broad and spread across a large age group with diverse backgrounds, there was little need to source a niche group.
The results
To summarise, we tested the following groups:
- 5 participants with the menu with icons (Artefact A)
- 5 participants with the menu without icons (Artefact B).
- Participants were from a range of countries, between the ages of 25 and 45.
- 33% of participants were female, 67% male.
We were seeking to answer the question: “Do the icons within the menu improve or reduce the experience for customers?”
Results of task breakdown
The results were mixed. On the face of it, the results for the metrics Task Success Rate and Time on Task point to Artefact B (menu without icons) performing slightly better.


Further to this, Artefact B (menu without icons) also fared slightly better in terms of perceived usability of the mega menu. We asked participants to rate the ease of use of the menu on a scale of 1 to 7 (1 being very difficult, 7 being very easy):

Sentiment
We asked participants to use three words to describe the designs they were presented with. These are the results:
Artefact A (with icons)
- Well organised
- Self-explanatory
- Raw
- Simple
- Institutional
- Clear navigation
- Informational
- Confusing labels
(One participant called out some primary menu labels as confusing, particularly Financial Wellbeing and Ways to Bank)
Artefact B (no icons)
- Professional
- User friendly
- Well designed
- Simple
- Straightforward
- Well distributed
- Minimalistic
- Trustworthy
- Cold
- Serious
What does it all mean?
Before we come to conclusions based on this testing session, a few concerns are worth noting:
Not enough participants
In both groups, 2 participants dropped out of the flow of the test pages, and unfortunately we could not take their results into account. Due to this, the results are based upon:
- 3 participants for Artefact A (menu with icons)
- 3 participants for Artefact B (menu without icons)
This in itself is a concern, as best practice for user testing of this nature is to have at least 5 participants in each test group in order to reach a meaningful conclusion; the sketch below shows why. Overall, a recommendation was made to run more tests, but this was not acted upon.
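The 5-participant rule of thumb traces back to the widely cited Nielsen/Landauer model, in which the share of usability problems found by n users is 1 - (1 - L)^n, where L is the probability that a single user encounters a given problem, often estimated at around 31%. A quick sketch (using that commonly cited estimate; real values vary by product and task) shows how much problem discovery drops with only 3 participants:

```python
# Nielsen/Landauer model: share of usability problems found by n users,
# assuming each user independently encounters a problem with probability L.
L = 0.31  # commonly cited estimate; real values vary by product and task

def problems_found(n: int, l: float = L) -> float:
    return 1 - (1 - l) ** n

for n in (3, 5, 8):
    print(f"{n} participants: ~{problems_found(n):.0%} of problems found")
# 3 participants: ~67% of problems found
# 5 participants: ~84% of problems found
# 8 participants: ~95% of problems found
```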
Time to complete outlier
For Artefact A (menu with icons), one participant spent a lot of time speaking out loud about their own personal financial situation; because of this, their timings differed massively from the other participants’ and had the effect of inflating the overall average timings for Artefact A. Their timing should be treated as an outlier, as the sketch below illustrates.
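To illustrate the effect with made-up timings (not our actual data): a single talkative participant can drag the mean well above a typical value, while the median barely moves:

```python
from statistics import mean, median

# Hypothetical time-on-task values in seconds; 240s is the talkative outlier
artefact_a_times = [45.0, 52.0, 240.0]

print(f"mean:   {mean(artefact_a_times):.1f}s")    # 112.3s -- inflated
print(f"median: {median(artefact_a_times):.1f}s")  # 52.0s  -- robust
```

With groups this small, reporting the median alongside the mean would arguably have been a safer summary of time on task.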
Participant sentiment
The words participants used to describe the design highlight some concerns worth exploring, such as possible language issues with the menu labelling.
One participant noted the design was ‘cold’. This points to a possible need to review the visual design and language to soften the approach and create a more welcoming feel.
Conclusion
Looking at the quantitative results, Artefact B (no icons) was the slight favourite. But once you consider the timing outlier for Artefact A, the picture isn’t so cut and dried.
Overall, in my opinion the testing was inconclusive, and it was very hard to make a call based on this session alone. I felt we needed more testing, perhaps with 2 or 3 more participants in each group, to gather more data.
Ultimately, the testing session was worthwhile, as it surfaced rich information about aspects of the menu that are worth exploring. Observing participants as they worked through the scenarios uncovered surprise findings outside our testing hypothesis, such as the menu labelling and how the labels may be interpreted.