1.6.2025 - 6.7.2025 (Week 6 - Week 11)
Gam Jze Shin / 0353154
Experiential Design / Bachelor of Design in Creative Media
Task 3: Project MVP Prototype
Index
1. Instructions
2. Task 3: Project MVP Prototype
3. Feedback
4. Reflection
Once students' proposals are approved, they will begin developing a prototype of their project. This prototype phase allows them to uncover potential limitations they may not have anticipated, encouraging them to think creatively to find solutions and bring their ideas to life. The main objective of this task is to test the key functionalities of the project. The outcome does not need to be a fully polished or visually complete product; instead, students will be evaluated on the functionality of their prototype and their ability to explore creative alternatives to achieve their intended goals.
Week 7
This week, I encountered a few problems, the most serious being an issue when exporting my AR exercise to my phone. While the UI displayed correctly, the AR camera did not appear on my device. However, when I tested the same project on my lecturer's phone, it worked perfectly. This showed that my phone was not compatible with the AR camera feature, even though it runs Android 15. As a result, I had to switch to another phone to continue testing my project.
Week 8
fig 2.2 Week 8 Class Exercise
This week, we learned to scan the image target first and then the ground plane, which triggers the 3D object to appear.
Week 10
In Week 10's class, I learned how to use ProBuilder to create a wall and embed a video onto it. When scanned, the video plays on the wall surface; a rough sketch of how this can be scripted appears at the end of this week's notes.
fig 2.4 Week 10 Class Exercise Video
I also learned how to scale objects up and down, allowing them to appear larger or smaller through the AR camera when scanned.
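As mentioned above, here is a rough sketch of how the video playback from the class exercise could be triggered. I am assuming the script's two methods are wired to the OnTargetFound and OnTargetLost events of Vuforia's DefaultObserverEventHandler in the Inspector; the names are placeholders rather than my exact setup:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch: plays the wall's video when the image target is found.
// PlayVideo/StopVideo are hooked to Vuforia's DefaultObserverEventHandler
// OnTargetFound / OnTargetLost events in the Inspector.
public class WallVideo : MonoBehaviour
{
    [SerializeField] private VideoPlayer videoPlayer; // VideoPlayer on the ProBuilder wall

    public void PlayVideo()
    {
        videoPlayer.Play();
    }

    public void StopVideo()
    {
        videoPlayer.Stop();
    }
}
```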
Task 3 Progress
Before starting the AR project in Unity, I imported the Sora font into the project to use it in my design. I learned how to import a new font into Unity by following a tutorial on YouTube.
Learning from: https://youtu.be/gMd0xDEFE20
I began building the homepage and learning page layout by adding text, images, shapes, and a navigation bar.
When I added a UI image and dragged it into the GameObject, a grey shadow appeared on the right side of the white box. This issue was caused by the image I had exported from Figma. To fix it, I re-exported a new version of the image without the shadow.
For both pages, I added a bottom navigation bar and connected them using OnClick buttons along with a SceneControl script.
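A minimal sketch of what such a SceneControl script can look like; the scene names below are placeholders for my actual pages, and each navigation button's OnClick event calls one of the public methods:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of a SceneControl script: each navigation bar button's
// OnClick event calls one of these methods (no arguments needed).
// "HomePage" and "LearningPage" are placeholder scene names; the
// scenes must also be added in File > Build Settings.
public class SceneControl : MonoBehaviour
{
    public void LoadHomePage()
    {
        SceneManager.LoadScene("HomePage");
    }

    public void LoadLearningPage()
    {
        SceneManager.LoadScene("LearningPage");
    }
}
```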
I darkened the image to improve the overall page layout and create better contrast with the white text, making it more visually appealing.
fig 2.10 - 2.11 Progress bar
On the learning page, I added a progress bar at the top to show users their progress after scanning and completing a road sign. I wrote a script to control the progress bar and tested it in Unity using a button to simulate progress updates, which let me adjust and verify the bar's value and behaviour. A simplified sketch of the script is below.
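This sketch assumes the bar is a UI Image with its Image Type set to Filled; the total number of signs is a placeholder value, not necessarily what my project uses:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of the progress bar controller. AddProgress() is hooked to a
// temporary test button's OnClick for testing, and would later be
// called whenever a road sign is completed.
public class ProgressBar : MonoBehaviour
{
    [SerializeField] private Image fillImage;    // the bar's fill graphic (Image Type: Filled)
    [SerializeField] private int totalSigns = 2; // placeholder count of signs to complete

    private int completedSigns;

    public void AddProgress()
    {
        completedSigns = Mathf.Min(completedSigns + 1, totalSigns);
        fillImage.fillAmount = (float)completedSigns / totalSigns;
    }
}
```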
The AR scanning page serves as the initial layout, where users can return to the learning page either by tapping the icon at the top right or by tapping the button provided.
Next, I added the necessary database in the Vuforia Engine and tested it by attaching a cube to the image target. I scanned it to check if it worked — and fortunately, it did, so I was able to proceed with the project.
Since I needed to create a 3D object and had learned about ProBuilder in class, I decided to install it and use this tool to build the 3D model for my project.
I created the stop road sign using a 3D polygon in ProBuilder and customized the number of sides to match the sign's octagonal shape, since Unity doesn't provide such a polygon by default; it only includes basic shapes like spheres, cubes, and cylinders. After shaping it, I assigned materials to the road sign and added the text onto it.
In addition to using scripts to control the button, I also used Unity's visual scripting (a Script Machine with a Script Graph) to edit the graph and connect the button, enabling it to activate and display the GameObject.
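For reference, the same behaviour in plain C# would roughly look like the sketch below, where targetObject is a stand-in for whatever GameObject the button reveals:

```csharp
using UnityEngine;

// Rough C# equivalent of the visual scripting graph: the button's
// OnClick event calls ShowObject() to activate the hidden GameObject.
public class ShowOnClick : MonoBehaviour
{
    [SerializeField] private GameObject targetObject; // the object to reveal

    public void ShowObject()
    {
        targetObject.SetActive(true);
    }
}
```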
For the scenario page, I included three step buttons and had to carefully double-check that all related GameObjects were correctly connected. If any connection was missed, it could cause issues such as text appearing at the wrong time. To prevent this, I reviewed the setup multiple times to ensure everything functioned as intended.
I also wrote three scripts for the scenario page to enhance the visual experience: one for the car movement animation, another to rotate a GameObject, and a third for animating the text.
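As an example, the rotation script is the simplest of the three. A minimal sketch, where the rotation speed is just an assumed value:

```csharp
using UnityEngine;

// Sketch of the rotation script: spins the GameObject it is attached
// to around the Y axis at a fixed, frame-rate independent speed.
public class RotateObject : MonoBehaviour
{
    [SerializeField] private float degreesPerSecond = 45f; // assumed speed

    void Update()
    {
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
    }
}
```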

fig 2.25 Change Font
I used a script to cycle through the text, so when the 3D object is clicked, the word changes from English ("Stop") to Malay, then to Chinese, and loops back to English again.
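Roughly, the cycling works like the sketch below. I am assuming the label is a TextMeshPro text and the sign has a Collider so OnMouseDown can detect taps; the Chinese word shown is my own placeholder:

```csharp
using UnityEngine;
using TMPro;

// Sketch of the text-cycling script: each tap on the sign advances
// to the next language and loops back to English at the end.
public class CycleSignText : MonoBehaviour
{
    [SerializeField] private TMP_Text signText; // the sign's text label

    // "停" is a placeholder for the Chinese word used in the project.
    private readonly string[] words = { "STOP", "BERHENTI", "停" };
    private int index;

    void OnMouseDown() // requires a Collider on this GameObject
    {
        index = (index + 1) % words.Length;
        signText.text = words[index];
    }
}
```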
After setting up the first image target, I added another one using a traffic light road sign image in Vuforia Engine. However, it was difficult to find an image with a high recognition rating. I tried multiple images, but none could achieve the highest rating. Eventually, I decided to use the one with the highest available rating, which was 3 stars, as my image target.
In addition to adding the new image target, I also created a scenario to go with it, which displays a traffic light scene.
Presentation Slide
Click here to view the Task 3 Presentation Slide on Canva.
fig 2.31 Task 3 Presentation Slide
Presentation Video
Click here to view the Task 3 Presentation Video on YouTube.
fig 2.32 Task 3 Presentation Video
AR Project Prototype Video Walkthrough
fig 2.33 AR Walkthrough Video
Week 10
Specific Feedback
When scanning the image target, the road sign could offer interactive features rather than simply displaying a 3D version of the same sign. For example, tapping on the stop sign could change the text to other languages, such as "Berhenti" in Malay.
Reflection
Over the past few weeks of learning Unity, I have gained more experience compared to when I first started. In each weekly class, the knowledge shared by my lecturer, Mr. Razif, has been very helpful in applying what I’ve learned to my AR project. For example, I now understand how to scan an image target and display a 3D object using Unity. Of course, there are areas that require self-exploration, such as improving the interface by importing suitable fonts and writing custom scripts that suit our individual projects. Since each project has its own unique requirements, it’s important to explore and experiment on our own. In this process, I have found AI tools like ChatGPT very useful in helping me troubleshoot and solve Unity-related issues.
Through this learning experience, I observed that both UI elements and game objects are important, but the scripting behind them is just as crucial to ensure functionality. It’s not enough to just design the interface — we also need to pay close attention to coding and logic. Additionally, careful object linking is essential, especially when setting up navigation pages or button functions. A small mistake, like an unconnected button or misplaced script, can cause the entire project to malfunction.
As I practiced connecting OnClick events and saw how the AR camera, game objects, and canvas interact, everything became clearer. I am proud of the outcome I've achieved so far and feel more confident after overcoming the errors I encountered. It's satisfying to see how much progress I've made through hands-on practice and problem-solving.