Experiential Design - Task 3: Project MVP Prototype

1.6.2025 - 6.7.2025 (Week 6 - Week 11)
Gam Jze Shin / 0353154
Experiential Design / Bachelor of Design in Creative Media
Task 3: Project MVP Prototype


Index

1. Instructions
2. Task 3: Project MVP Prototype
3. Feedback
4. Reflection


Instructions

fig 1.1  Experiential Design MIB


Task 3

Once their proposals are approved, students will begin developing a prototype of their project. This prototype phase allows them to uncover potential limitations that they may not have anticipated, encouraging them to think creatively to find solutions and bring their ideas to life. The main objective of this task is to test the key functionalities of the project. The outcome does not need to be a fully polished or visually complete product; instead, students will be evaluated on the functionality of their prototype and their ability to explore creative alternatives to achieve their intended goals.


Week 7


fig 2.1 Week 7 Problem Faced

This week, I encountered a few problems, the most serious being an issue when building and running my AR exercise on my phone. While the UI displayed correctly, the AR camera feed did not appear on my device. However, when I tested the same project on my lecturer's phone, it worked perfectly. This suggested that my phone is not compatible with the AR camera feature, even though it runs Android 15. As a result, I had to switch to another phone to continue testing my project.

Week 8


fig 2.2 Week 8 Class Exercise

This week, we learned how to scan the image target first and then the ground plane, which triggers the appearance of the 3D object.
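Out of curiosity, I also sketched how that kind of trigger could be wired up in C#. This is only a rough sketch, assuming Vuforia's DefaultObserverEventHandler is on the image target and its OnTargetFound event is hooked to this script in the Inspector; the script and field names below are placeholders, not the actual class exercise files.

using UnityEngine;

// Sketch only: keeps the ground-plane content hidden until the image
// target has been found, then reveals it. Meant to be called from the
// OnTargetFound event of Vuforia's DefaultObserverEventHandler.
public class GroundPlaneUnlocker : MonoBehaviour
{
    [SerializeField] private GameObject groundPlaneStage; // placeholder: parent of the 3D object placed on the ground

    private void Start()
    {
        // Hide the ground-plane content until the image target is scanned.
        if (groundPlaneStage != null)
            groundPlaneStage.SetActive(false);
    }

    // Wired to DefaultObserverEventHandler > OnTargetFound in the Inspector.
    public void OnImageTargetFound()
    {
        if (groundPlaneStage != null)
            groundPlaneStage.SetActive(true);
    }
}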


Week 10


fig 2.3 Week 10 Class Exercise

In Week 10's class, I learned how to use ProBuilder to create a wall and embed a video onto it. When the target is scanned, the video plays on the wall's surface.
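The video playback itself can be controlled with Unity's built-in VideoPlayer component. Below is a rough sketch of how it might be handled, assuming a VideoPlayer sits on the ProBuilder wall (with its Render Mode set to Material Override) and the usual target-found and target-lost events are hooked up in the Inspector; the names are placeholders rather than the actual class files.

using UnityEngine;
using UnityEngine.Video;

// Sketch only: plays the clip on the wall's surface when the image target
// is detected and pauses it when tracking is lost.
[RequireComponent(typeof(VideoPlayer))]
public class WallVideoController : MonoBehaviour
{
    private VideoPlayer videoPlayer;

    private void Awake()
    {
        videoPlayer = GetComponent<VideoPlayer>();
        videoPlayer.playOnAwake = false; // wait until the target is scanned
        videoPlayer.isLooping = true;
    }

    // Called from DefaultObserverEventHandler > OnTargetFound.
    public void PlayVideo() => videoPlayer.Play();

    // Called from DefaultObserverEventHandler > OnTargetLost.
    public void StopVideo() => videoPlayer.Pause();
}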


fig 2.4 Week 10 Class Exercise Video

I also learned how to scale objects up and down, allowing them to appear larger or smaller through the AR camera when scanned.
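The scaling itself comes down to adjusting transform.localScale. As a simple sketch (the values and names are just examples, not from the class files), two UI buttons could scale the object up and down in fixed steps:

using UnityEngine;

// Sketch only: scales the 3D object up or down in fixed steps, e.g. from
// two UI buttons whose OnClick events call ScaleUp() and ScaleDown().
public class ObjectScaler : MonoBehaviour
{
    [SerializeField] private float step = 0.1f;     // change per click (example value)
    [SerializeField] private float minScale = 0.2f; // example limits
    [SerializeField] private float maxScale = 3f;

    public void ScaleUp()   => ApplyScale(transform.localScale.x + step);
    public void ScaleDown() => ApplyScale(transform.localScale.x - step);

    private void ApplyScale(float value)
    {
        float clamped = Mathf.Clamp(value, minScale, maxScale);
        transform.localScale = Vector3.one * clamped; // keep the scale uniform
    }
}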


Task 3 Progress


fig 2.5 Import Font

Before starting the AR project in Unity, I imported the Sora font into the project to use it in my design. I learned how to import a new font into Unity by following a tutorial on YouTube.

Learning from: https://youtu.be/gMd0xDEFE20

I began building the homepage and learning page layout by adding text, images, shapes, and a navigation bar.


fig 2.6 UI Added

When I added a UI image and dragged it into the GameObject, a grey shadow appeared on the right side of the white box. This issue was caused by the image I had exported from Figma. To fix it, I re-exported a new version of the image without the shadow.


fig 2.7 Bottom Navigation Bar

For both pages, I added a bottom navigation bar and connected the pages using each button's OnClick event together with a SceneControl script.
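The full script is not shown in the screenshots, but a minimal version of this kind of scene-switching script could look like the sketch below, with each navigation button's OnClick event passing the name of the scene to load (the scene names themselves are set in the Inspector, so none are hard-coded here):

using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch only: loads a scene by name. Each bottom-navigation button's
// OnClick event calls LoadSceneByName with its target scene, e.g. the
// homepage or the learning page.
public class SceneControl : MonoBehaviour
{
    public void LoadSceneByName(string sceneName)
    {
        SceneManager.LoadScene(sceneName);
    }
}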


fig 2.8 - 2.9 Image Added


I darkened the image to improve the overall page layout and create better contrast with the white text, making it more visually appealing.



fig 2.10 - 2.11 Progress bar

In the learning page, I added a progress bar at the top to show users their progress after scanning and completing a road sign. I implemented a script to control the progress bar and tested it in Unity using a button to simulate progress updates. This allowed me to adjust and verify the progress bar’s value and functionality.
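The screenshots don't include the code itself, so here is a simplified sketch of the progress-bar logic described above, assuming the bar is a UI Image with its Image Type set to Filled and that the total number of signs is just a placeholder value:

using UnityEngine;
using UnityEngine.UI;

// Sketch only: each completed road sign adds one step and the bar's fill
// is updated accordingly. A test button's OnClick event can call
// AddProgress() to simulate completing a sign.
public class ProgressBarController : MonoBehaviour
{
    [SerializeField] private Image fillImage;    // the bar's fill Image (Image Type: Filled)
    [SerializeField] private int totalSigns = 5; // placeholder total

    private int completedSigns;

    public void AddProgress()
    {
        completedSigns = Mathf.Min(completedSigns + 1, totalSigns);
        fillImage.fillAmount = (float)completedSigns / totalSigns;
    }
}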


fig 2.12 AR scenes

The AR scanning page serves as the initial layout, where users can return to the learning page either by clicking the icon at the top right or by tapping the button provided.


fig 2.13 - 2.14 Database in Vuforia Engine

Next, I added the necessary database in Vuforia Engine and tested it by attaching a cube to the image target. I scanned it to check whether it worked, and fortunately it did, so I was able to proceed with the project.


fig 2.15 ProBuilder Install

Since I needed to create a 3D object and had learned about ProBuilder in class, I decided to install it and use this tool to build the 3D model for my project.



fig 2.16 - 2.19 Stop Sign Build


I created the stop road sign using a 3D polygon in ProBuilder, customising the number of sides to match the sign's octagonal shape, since Unity doesn't provide this shape by default and only includes basic primitives like spheres, cubes, and cylinders. After shaping it, I assigned materials to the road sign and added the text onto it.


fig 2.20 Edit Graph

In addition to using scripts to control the buttons, I also used Unity's Visual Scripting (a Script Machine with a script graph) to edit the graph and connect the button, enabling it to activate and display the GameObject.
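For comparison, the same behaviour in plain C# is only a few lines: the button's OnClick event calls a method that activates the hidden GameObject (the class and field names below are just examples, not my actual setup):

using UnityEngine;

// Sketch only: activates a hidden GameObject when called from a button's
// OnClick event.
public class ShowObjectOnClick : MonoBehaviour
{
    [SerializeField] private GameObject target; // the object to reveal

    public void ShowObject()
    {
        target.SetActive(true);
    }
}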


fig 2.21 Scenario Page

For the scenario page, I included three step buttons and had to carefully double-check that all related GameObjects were correctly connected. If any connection was missed, it could cause issues such as text appearing at the wrong time. To prevent this, I reviewed the setup multiple times to ensure everything functioned as intended.


fig 2.22 - 2.24 Script Added

In the scenario page, I included three scripts to enhance the visual experience: one for the car movement animation, another to rotate a GameObject, and a third for animating the text.
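As an example of how small these scripts can be, the rotation behaviour could be written roughly like this (a sketch rather than my exact script; the speed value is arbitrary):

using UnityEngine;

// Sketch only: spins a GameObject at a constant speed around the Y axis,
// independent of frame rate.
public class RotateObject : MonoBehaviour
{
    [SerializeField] private float degreesPerSecond = 45f; // example speed

    private void Update()
    {
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
    }
}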


fig 2.25 Change Font

Based on Mr. Razif's feedback, I added translations of the word "Stop" in both Malay and Chinese. However, the Chinese characters did not display correctly with the default Unity font. To solve this, I had to import a font that supports Chinese characters and create a custom font asset from it so the text could be displayed properly in the project.


fig 2.26 -2.27 Script Used

I used a script to cycle through the text, so when the 3D object is clicked, the word changes from English ("Stop") to Malay, then to Chinese, and loops back to English again.
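A simplified sketch of that cycling logic is shown below. It assumes a TextMeshPro label on the sign and a Collider on the 3D object so taps can be detected; the class name and exact strings are placeholders.

using TMPro;
using UnityEngine;

// Sketch only: each tap on the sign advances the label through English,
// Malay and Chinese, then loops back to English.
public class StopTextCycler : MonoBehaviour
{
    [SerializeField] private TMP_Text label; // the text on the sign

    private readonly string[] translations = { "STOP", "BERHENTI", "停" };
    private int index;

    // OnMouseDown fires when the object (with a Collider) is clicked or tapped.
    private void OnMouseDown()
    {
        index = (index + 1) % translations.Length;
        label.text = translations[index];
    }
}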


fig 2.28 Database

After setting up the first image target, I added another one using a traffic light road sign image in Vuforia Engine. However, it was difficult to find an image with a high recognition rating: I tried multiple images, but none reached the maximum rating. Eventually, I used the best-rated one available, which was 3 stars, as my image target.


fig 2.29 Scenario 2

Along with the new image target, I also created a matching scenario that displays a traffic light scene.



fig 2.30 All scenes in Unity

These are all the scenes I have completed for Task 3 so far. Next, I will continue working on the Quiz Page by adding a 3D quiz, as well as creating a showroom for road signs. The showroom will feature four categories, each displaying videos of road signs in a 3D view.

Presentation Slide
Click here to view the Task 3 Presentation Slide on Canva.

fig 2.31 Task 3 Presentation Slide

Presentation Video
Click here to view the Task 3 Presentation Video on YouTube.


fig 2.32 Task 3 Presentation Video

AR Project Prototype Video Walkthrough


fig 2.33 AR Walkthrough Video



Feedback

Week 10

Specific Feedback
When the image target is scanned, the road sign could offer interactive features rather than simply displaying a 3D version of the same sign. For example, tapping on the stop sign could change the text to other languages, such as "Berhenti" in Malay.


Reflection

Experience
Over the past few weeks of learning Unity, I have gained more experience compared to when I first started. In each weekly class, the knowledge shared by my lecturer, Mr. Razif, has been very helpful in applying what I’ve learned to my AR project. For example, I now understand how to scan an image target and display a 3D object using Unity. Of course, there are areas that require self-exploration, such as improving the interface by importing suitable fonts and writing custom scripts that suit our individual projects. Since each project has its own unique requirements, it’s important to explore and experiment on our own. In this process, I have found AI tools like ChatGPT very useful in helping me troubleshoot and solve Unity-related issues.

Observation
Through this learning experience, I observed that both UI elements and game objects are important, but the scripting behind them is just as crucial to ensure functionality. It’s not enough to just design the interface — we also need to pay close attention to coding and logic. Additionally, careful object linking is essential, especially when setting up navigation pages or button functions. A small mistake, like an unconnected button or misplaced script, can cause the entire project to malfunction.

Findings
At first, I felt anxious about working on an AR project in Unity. However, after a few weeks of learning and exploring, I realized it is not as difficult as I imagined. Once I understood the basic logic — such as how buttons work with OnClick events, and how the AR camera, game objects, and canvas interact — everything became clearer. I am proud of the outcome I’ve achieved so far and feel more confident after overcoming the errors I encountered. It’s satisfying to see how much progress I’ve made through hands-on practice and problem-solving.

