Background 
BV Simulator is a web application from Banco BV that lets users simulate financing conditions for vehicle purchases.
The application provides installment options with Banco BV based on the vehicle and purchaser information. Once aware of the possibilities, the purchaser can apply for financing directly at the vehicle dealership, where the seller submits the application; once it is approved, the purchaser can buy the vehicle and sign the financing contract.
The Simulator is a three-step journey that generates leads. The flow starts with a form asking about the vehicle and the person applying for the financing, then directs the user to a results page with different payment options. Since this service is also available from other banks, users can simulate conditions with competitors to compare them and, once at the vehicle dealership, decide which bank to apply to for the financing.
Problems
The goal of the simulator as a product is to convert simulations into financing applications. After analysing the data, the problems identified were:
- The simulator had a low conversion rate, with little representation in final applications considering the traffic and total number of simulations.
- There was a considerable loss of leads in the journey from the first to the last step.
Goals
- Increase conversion of financing application requests by providing a better experience where leads intentionally choose Banco BV.
- Increase number of leads reaching the last step.
- Decrease bounce rate in the last step of the flow.
Process
As the UX Designer in the squad, I started by researching and analysing data on performance, sessions, engagement, and interactions, along with the results of qualitative research done with customers who had recently used the simulator. The main findings were:
- Of 16 interview participants, only one remembered using the BV Simulator before applying for financing.
- Insurance for the vehicle and for the financing itself was included in the application, but this was not clear to customers: the seller was the one who submitted the application, and customers were only informed about the total amount and installments.
- Users considered the installment amount the most important information for deciding whether a vehicle fit their budget.
From these findings I identified the main user pain points, which helped me define the requirements for a first iteration:
- The simulator did not generate results that users remembered when applying for financing at the vehicle dealerships.
- There was a lack of transparency: users were not aware of what was included in the financing application, as they were focused on the installment amount and how long it would take to finish paying the bank.
Then, analysing the simulator journey step by step alongside the user pain points, I worked with Product Designer Ana Pego to define what to change or include in the first iteration:
- A chatbot instead of a form could provide a more personal experience and help users remember the simulation.
- Show results in a summary card.
- Provide a more intuitive results page with a down payment slider, so users can easily make changes and see the updates reflected in the installment amount.
- Show the insurance options provided by the bank that can be included, in a way that makes it clear to users that they are added and are part of the total financing amount.
There were also opportunities to add value to the results page that would benefit the business:
- Provide a search for nearby Banco BV partner vehicle dealerships, which could increase the chances of leads remembering the simulation and applying for financing with Banco BV.
- Connect the results with MCN offers for vehicles compatible with the information provided in the simulation (MCN is the bank's online car dealership portal).

Once the first iteration was defined, I built a prototype for a pilot validation test to identify the main gaps to iterate on, and I found some issues:
- The results summary in the collapsed card (used to keep the page from getting too long to scroll) went completely unnoticed.
- The user commented that he did not remember all the vehicle details when seeing the results page (after the chatbot).
- The order of elements confused the user, as he expected to see the installment and total amounts first so he could compare the result with other banks.
- The user did not notice the insurance options the first time he scrolled down the page.

Since there was no negative feedback about the chatbot step, I decided to keep it in the next iteration to be tested. After analysing the issues above, I developed two versions:
Version 1 - Improved how results were shown and the order of elements on the page.
Version 2 - A hypothesis flow that would guide the user through the items that could be included in the financing, plus helpful information for the application, with:

- Steps separated into different screens to minimise cognitive effort.
- Sticky bottom bar to keep summary information about the vehicle and the total price visible.
- Option of sending the summary via e-mail.
The two iterations were then tested as prototypes on a remote usability testing platform (Testaisso) with two groups of six testers each, who intended to apply for vehicle financing with a bank in the near future and matched other demographic characteristics from the customer database. The results were:
- In both versions, testers considered the flow easy to understand and the task easy to complete.
- The NPS of Version 2 was higher (100) than Version 1 (75), but the difference was not meaningful given the small number of testers.
- The positive feedback was very similar.
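To illustrate why a 25-point NPS gap carries little weight with only six testers per group, here is a minimal sketch of the NPS calculation. The individual scores below are hypothetical (the case study only reports the aggregate values); the point is that with n = 6, a single respondent can swing the score by around 33 points, so 100 vs. 75 is within one person's noise.

```python
def nps(scores):
    # Net Promoter Score: % promoters (9-10) minus % detractors (0-6)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical 0-10 ratings for a panel of six testers:
all_promoters = [9, 10, 9, 10, 10, 9]   # NPS 100
one_passive   = [9, 10, 9, 10, 10, 8]   # NPS 83
one_detractor = [9, 10, 9, 10, 10, 6]   # NPS 67
```

Changing one tester's answer moves the score by 17 to 33 points, which is why the observed difference could not be treated as conclusive.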
The platform had limitations in tracking interactions during the test and collecting tester feedback, and since no severe usability issue was identified, the result was not very conclusive. I still needed to ensure the final version had the highest potential to increase conversion and achieve the goals, so I planned two steps to improve the prototypes and gather more data: a moderated usability test combined with interviews, to get in-depth insights into users' perception of both hypotheses (results on a single screen vs. results in a step-by-step flow), so we could implement the most feasible iterations and then set up an A/B test in production with a percentage of the traffic. The quantitative data from the A/B test would inform the final decision and the rollout to 100% of the traffic.

STEP 1 - Moderated usability test
I planned the usability test with the support of the UX Writer. It consisted of two prototypes, built by me in Figma and integrated with Maze, tested by two groups of five people each, where:
- The chatbot was the first step for entering information in both prototypes.
- Both results pages included improvements based on the Testaisso test.

One prototype had the Version 1 results page on a single screen, also including:
- An improved order of elements, starting with down payment and installments, followed by the downloadable summary, insurances, MCN offers, and the search for partner dealerships.
- An anchor link in the results summary to scroll to the dealership search.
- The option to download the summary (kept from the previous version).
- A carousel for the insurance options.
- An anchor link in the summary card to scroll to the insurance carousel.

The other prototype had the Version 2 results split across four screens, including:
- Two steps on two screens (reduced from the four used in the Testaisso test), grouping down payment, installments, and insurance on one screen, and the search and MCN offers on the other.
- A sticky bar that triggered the full view of the results summary, which was also shown at the end of the flow.
- The insurance options presented all at once in collapsed cards (no carousel).
- A last step with the option to view and download the results summary, plus an option to send it via e-mail.
The main criteria to compare and analyse the results were:
- Understanding of the slider interaction and the connection between down payment and installments.
- Understanding of the amounts and their relation to the included insurances.
- Viewing and interacting with the insurance options.
- Understanding of the search feature.
- Viewing and interacting with MCN offers.
- Viewing, interacting with, and understanding the option to save the results summary.

Then, based on the tests and interviews, the results and improvement opportunities identified were:
- The slider was easy to understand and intuitive to use, so it should be kept.
- The connection between selecting insurances and the change in the total financing amount was intuitive and easy to understand.
- Showing MCN offers was positive, as testers mentioned they really liked seeing vehicle options.
- Showing detailed information about installments, amounts, and interest increased trust.
- The search for partner vehicle dealerships was easy and intuitive to understand and was also seen as a very positive feature.
- The download option with an icon was more intuitive and faster for testers to notice and understand, and it also received more clicks when testers were asked to save the results for later use.
- The Version 2 flow (results in two steps on two screens) was completed faster, and based on clicks and perceptions from the interviews, splitting the results across more than one screen would help ensure users see all elements of the journey.

After analysing the results, the next step was to use all the findings to design the iteration for the A/B test. My approach was to:
- Combine what performed better in both prototype versions.
- Define a final version with the highest potential impact on conversion for the least effort.

At this stage I discussed the plan with the PM and Tech Lead, and we realised the most effective next step would be two A/B tests: first, validate the information-entry part of the flow by comparing the chatbot with the form already in production, with statistical significance to decide this step; then, put engineering effort into the results page, comparing the version in production with the new one I designed based on all the tests.
STEP 2 - A/B tests
As mentioned, we ran two rounds:
A/B test 1 - Form vs. Chatbot
- Ran between January 2021 and April 2021 with 10% of visitor traffic for each version. The result:
Variant A - the form version of entering information outperformed the chatbot by 3.7%.
This meant that more users who started the Simulator journey with the form reached the results page, the opposite of the findings from my earlier tests with far fewer participants.

A/B test 2 - Current results page vs. iterated results page
As the form performed better, it was time to test the results step, comparing the page in production with the new one I designed based on the moderated usability test and interviews. The design consisted of:
- A single-screen version, because it would be faster for engineers to implement.
- A different UI for down payment and installment selection, with a slider and live updates of the amounts.
- New features:
    - Insurance selection to be added to the financing application.
    - Option to save the summary, with an icon.
    - Offers from MCN.
    - Search for partner dealerships.

The test also tracked interactions with the new features through clicks on the save-summary option and on MCN offers. After 45 days it was possible to get statistically significant results:
✓ The new features were catching users' interest, as users were interacting with them.
✓ The bounce rate of the new page was 9.7% lower than the old one.
✓ The conversion rate of Variant B (new results page) was 1.4% higher than Variant A.
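Statistical significance for a conversion difference like this is typically checked with a two-proportion z-test. The sketch below is illustrative only: the visitor and conversion counts are hypothetical (the case study does not publish raw numbers), and it reads the 1.4% difference as percentage points (4.0% vs. 5.4%).

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    # Pooled two-sided z-test for the difference between two conversion rates.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (success_b / n_b - success_a / n_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 400/10,000 conversions (A) vs. 540/10,000 (B)
z, p = two_proportion_z_test(success_a=400, n_a=10_000, success_b=540, n_b=10_000)
```

At sample sizes on this order, a 1.4 percentage-point gap yields a p-value far below 0.05, which is consistent with reaching significance within a 45-day test window.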
Outcome
Analysing both A/B test results, we decided the simulator would keep the form as the first step of the flow, where users enter information about themselves and the vehicle, and that the results page would be the new version with the different UI and new features.
I also teamed up with the UX Writer to organise all the discoveries from the process and present them to the team and stakeholders. I used the MoSCoW framework to facilitate a workshop to discuss, organise, and evaluate the findings and other opportunities, considering effort and blockers, and to create a backlog of features and improvements for the product.
Learnings
As usually happens, the process was never smooth and linear. We faced issues such as a data analytics tool with constrained configurations that delayed the A/B tests, and completely different findings between different stages of research, which is natural and shows the importance of validating hypotheses and assumptions whenever possible.
My biggest learning in this project was the practice of adapting processes to the situation, managing to run tests and research that inform decisions with minimal risk to the business. Most of all, it was great to show stakeholders that having a design vision at the core of projects can bring tangible results.

Special thanks
This whole project was possible because of the people who worked with me: the Product Designer Ana Pego and the UX Writer Glaucia Lopes.
