Experiential Design - Task 4: Final Project

7.7.2025 - 3.8.2025 (Week 11 - Week 15)
Gam Jze Shin / 0353154
Experiential Design / Bachelor of Design in Creative Media
Task 4: Final Project


Index

1. Instructions
2. Task 4: Final Project
3. Feedback
4. Reflection


Instructions

fig 1.1  Experiential Design MIB


Task 4

Students will synthesise the knowledge gained in Tasks 1, 2, and 3 for application in Task 4. Students will create and integrate visual assets and refine the prototype into a complete, working, and functional product experience.

Task 4 Progress


fig 2.1 Parts to be done

After completing Task 3, I continued working on the remaining parts of the project. I still need to complete the Level 1 and Level 2 quizzes on the Quiz page, as well as the Road Sign Showcase on the Learning page.


fig 2.2 Load scene

Since I already had the homepage and bottom navigation set up, I only needed to add the graph editor and link the new page so it loads its scene from the previous page.
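Linking a navigation button to a new scene can be sketched with Unity's SceneManager. This is a minimal, hypothetical example; "Level1Quiz" is a placeholder scene name, and the scene must be listed in Build Settings for the call to work.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Hypothetical sketch: loads a quiz scene when its navigation
// button is tapped. Hook LoadLevel1Quiz to the button's OnClick
// event in the Inspector.
public class PageNavigator : MonoBehaviour
{
    public void LoadLevel1Quiz()
    {
        SceneManager.LoadScene("Level1Quiz"); // placeholder scene name
    }
}
```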


fig 2.3 - 2.4 UI in Level 1 Quiz Page

This is the UI I added for the Level 1 quiz. Users are required to scan the target image first. Once the image is successfully scanned, two answer options will appear for the user to choose from.


fig 2.5 Effect added

To enhance the user experience, I plan to add visual effects that respond to the user's answers. When the user selects the correct answer, a ribbon animation will pop up, while an explosion effect will appear for a wrong answer. I found suitable animation assets from the Unity Asset Store and have successfully imported them into my Unity project.


fig 2.6 Script on linking effect and button

After adding the assets in Unity, I used a script to link them to the answer buttons. When the correct answer is selected, the corresponding animation plays and a 'Correct' reminder message is displayed.
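The button-to-effect wiring described above can be sketched roughly as follows. All field names here are hypothetical; the actual script may differ, but the idea is the same: each answer button calls one handler that activates the matching effect and updates the reminder text.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: attach to each answer button and assign the
// fields in the Inspector. OnAnswerSelected is wired to the button's
// OnClick event.
public class AnswerButton : MonoBehaviour
{
    public bool isCorrect;            // set per button in the Inspector
    public GameObject correctEffect;  // ribbon animation object
    public GameObject wrongEffect;    // explosion animation object
    public Text reminderText;         // on-screen reminder message

    public void OnAnswerSelected()
    {
        if (isCorrect)
        {
            correctEffect.SetActive(true);
            reminderText.text = "Correct";
        }
        else
        {
            wrongEffect.SetActive(true);
            reminderText.text = "Wrong";
        }
    }
}
```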


fig 2.7 Progress in Level 2 Quiz

Next, I continued by adding the Level 2 quiz, which features a 3D quiz mode. I started by setting up the target image, and then added the Plane Finder and Ground Plane components. In this level, the intended flow is for the user to first scan the target image. Once the image is recognized, the ground plane will be activated, allowing the user to scan a surface using the AR camera.


fig 2.8 - 2.9 Assets used

Then, I needed to add a 3D environment, so I searched the Unity Asset Store for a suitable road environment model. I also looked for 3D models of road signs to include in the scene.


fig 2.10 Materials missing

When I added the 3D environment models into my Unity project, I noticed that some of them appeared pink. This indicated that the materials were missing or incompatible. I consulted ChatGPT for help and discovered that the issue was due to the models being created with an older version of Unity. To resolve this, I upgraded the project's render pipeline to URP (Universal Render Pipeline). After updating the materials to be compatible with URP, the textures were successfully restored. Through this process, I also learned that assets from older Unity versions often require material or shader upgrades to display correctly in newer Unity projects.


fig 2.11 - 2.12 Environment prefab added 

In my 3D view, I added an environment to represent a road scenario. Then, I placed the necessary 3D models as prefabs within the scene, which are essential for the quiz level functionality.


fig 2.13 - 2.14 Road Sign added

The most important part of this quiz level is the road sign. I added the 3D road sign model into the environment and designed the interaction so that users can explore the scene and visually search for the correct road sign. When a user clicks on a road sign, a message will appear indicating whether their choice is correct or incorrect. To enable this interaction, I added a Box Collider to the road sign and adjusted its size to define the clickable area. This collider acts as the interactive zone, allowing the camera to detect user input when the sign is tapped or clicked.


fig 2.15 Script for box collider

A script also needs to be added in Visual Studio to link the Box Collider with the reminder message. Above is the script I wrote to handle that interaction.
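A minimal version of that interaction might look like the sketch below (field names are hypothetical). Because the road sign has a Box Collider, Unity's OnMouseDown callback fires when the sign is tapped or clicked through the camera.

```csharp
using UnityEngine;

// Hypothetical sketch: attach to a road-sign object that has a
// Box Collider. OnMouseDown fires when the collider is tapped.
public class RoadSignAnswer : MonoBehaviour
{
    public bool isCorrectSign;      // mark the right sign in the Inspector
    public GameObject correctMessage;
    public GameObject wrongMessage;

    void OnMouseDown()
    {
        if (isCorrectSign)
            correctMessage.SetActive(true);
        else
            wrongMessage.SetActive(true);
    }
}
```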


fig 2.16 - 2.17 Image as materials

On my Prohibition Road Sign page, I first added the 3D road sign model. I then learned how to use an image to create a material in Unity and applied it to the specific object, allowing the road sign to display the correct visual.


fig 2.18 Video showcase

Besides the road sign, I also added a plane in the scene to display a video. Depending on the selected road sign, a different video will be shown on the plane. I included a 'Play Video' button to start the video and a 'Close' button to stop or hide the video display.
Video From: https://www.youtube.com/watch?v=JU0PXiLPw3U
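The Play/Close behaviour can be sketched with Unity's built-in VideoPlayer component. This is an assumed minimal setup: a VideoPlayer on the plane, with each UI button wired to one of the two public methods.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Hypothetical sketch: 'Play Video' and 'Close' buttons controlling
// a VideoPlayer that renders onto a plane in the scene.
public class VideoController : MonoBehaviour
{
    public VideoPlayer videoPlayer; // VideoPlayer component on the plane
    public GameObject videoPlane;   // the plane that displays the video

    public void PlayVideo()
    {
        videoPlane.SetActive(true);
        videoPlayer.Play();
    }

    public void CloseVideo()
    {
        videoPlayer.Stop();
        videoPlane.SetActive(false);
    }
}
```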


fig 2.19 - 2.20 Script for Raycast

I also added a script to handle raycasting, which detects when the user interacts with objects like the road sign. I used the console to display debug messages to confirm that the raycast is working correctly.
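A raycast-plus-debug-log setup of this kind is commonly written like the sketch below (assumed details, not the exact project script): cast a ray from the tap position through the camera, and log the name of whatever collider it hits.

```csharp
using UnityEngine;

// Hypothetical sketch: casts a ray from a screen tap and logs
// what it hits, so the Console confirms the raycast works.
public class TapRaycaster : MonoBehaviour
{
    void Update()
    {
        if (Input.GetMouseButtonDown(0)) // also fires on touch taps
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                Debug.Log("Raycast hit: " + hit.collider.name);
            }
        }
    }
}
```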


fig 2.21 - 2.22 Script for on boarding page

To make my app more complete, I added an onboarding page with scripted animations. The text appears with a scaling effect, starting small and growing larger, while the graphic fades in from 0% to 100% opacity. These animations are controlled through a script to create a smooth and engaging introduction for users.
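A scale-up plus fade-in of this kind can be sketched with a coroutine (a hypothetical minimal version; the field names and timing are assumptions). The graphic's opacity is driven through a CanvasGroup so a single alpha value fades the whole element.

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch: over one second, scales the intro text up
// from 20% to full size while fading the graphic from 0% to 100%.
public class OnboardingIntro : MonoBehaviour
{
    public Transform introText;      // text that grows from small to full size
    public CanvasGroup introGraphic; // graphic whose alpha fades in
    public float duration = 1f;

    void Start()
    {
        StartCoroutine(Animate());
    }

    IEnumerator Animate()
    {
        float t = 0f;
        while (t < duration)
        {
            t += Time.deltaTime;
            float p = Mathf.Clamp01(t / duration);
            introText.localScale = Vector3.one * Mathf.Lerp(0.2f, 1f, p);
            introGraphic.alpha = p;
            yield return null; // wait one frame
        }
    }
}
```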


fig 2.23 App logo

This is my app logo, which I uploaded in the Player Settings before building the app. This ensures that when the app is exported and installed on a phone, the logo will appear as the app icon.


fig 2.24 - 2.25 Build and run project

After completing everything, I started the build and run process to test the app on an Android phone. The initial build took a longer time because many assets needed to be processed and exported. However, after the first build, subsequent builds became faster—even when I added new elements—because Unity only needed to recompile the updated changes.


fig 2.26 Adjust position for prefab

After testing my app on an Android phone, I noticed that the 3D view in the Level 2 quiz was not displaying correctly—the environment appeared in the wrong position. I reviewed the scene in Unity and found that the environment was placed too high above the ground plane. To fix this, I adjusted the Y position of the environment to 0 so that it would align properly with the ground. I also repositioned the other prefabs to ensure everything displayed correctly during AR scanning.


fig 2.27 Successfully export

During the first export attempt, an error occurred and the build process failed. I checked the Console in Unity to identify the issue, reviewed the error message, and fixed the problem accordingly. After resolving it, the export was successful. I tested the app on an Android phone and confirmed that everything was working as intended. With the app functioning properly, I was finally able to proceed with recording the walkthrough video and preparing my presentation. I am glad that the app now runs smoothly from Unity to the Android device.

Google Drive

Click here to view the project file in Google Drive.

Presentation Video
Click here to view the Task 4 Presentation Video on YouTube.


fig 2.28 Task 4 Presentation Video

AR Project Final Video Walkthrough
Click here to view the Task 4 Video Walkthrough on YouTube.


fig 2.29 AR Walkthrough Video


Feedback

Week 14
General Feedback
Mr. Razif mentioned that we should ensure the features we include are functional—if a feature doesn't work properly, we shouldn't proceed with it. Instead, we should focus on simpler features that work reliably.


Reflection

Experience
Over the past few weeks, I have become more familiar with using Unity. However, importing 3D assets was initially challenging for me. To make my app more functional and engaging, I had to explore how to find, download, and import assets into Unity. Sometimes the imported models didn’t display correctly due to missing materials, which required me to troubleshoot and find solutions. Another challenge was exporting the app to my phone—errors would occur during the build process, and I had to identify and resolve them before I could successfully run the app. Despite the difficulties, I felt a great sense of accomplishment when everything worked, and I learned a lot through this AR project.

Observation
There are many details to pay attention to, especially when linking UI elements with script functions. It’s important to test them repeatedly to ensure everything works correctly. I also observed that the AR camera in Unity, when connected to a ground plane, doesn't always appear directly in front of the user—it depends on where the ground plane is detected. Sometimes the AR content appears too near or too far, which can affect the user experience. Although this issue limits the placement accuracy, most of the functionality still works as intended.

Findings
I have realized that the AR camera has limitations, especially in displaying too much content at once. Since phone screens have limited space, layout design becomes crucial to ensure a clean, user-friendly experience. Overall, I’m proud of what I’ve accomplished in this module. I never expected to achieve the outcome I have. This project has improved my thinking and problem-solving skills, and I now have a deeper understanding of augmented reality.

