The implementation of an augmented reality display and a virtual reality controller on glasses

Student: Jiuhe Tian
Table: ENG1
Experimentation location: School
Regulated Research (Form 1c): No
Project continuation (Form 7): No



1. Random Nerd Tutorials, ESP8266 NodeMCU with MPU-6050 Accelerometer, Gyroscope and Temperature Sensor (Arduino). url:

2. Arvind Sanjeev, How to Interface Arduino and the MPU 6050 Sensor, 2018. url:

3. Seokil Moon, Chang-Kun Lee, Seung-Woo Nam, Changwon Jang, Gun-Yeal Lee, Wontaek Seo, Geeyoung Sung, Hong-Seok Lee & Byoungho Lee. (2019). Augmented reality near-eye display using Pancharatnam-Berry phase lenses. Scientific Reports, 9:6616.

4. Wikipedia, Augmented Reality.

5. Wikipedia, Virtual Reality.

6. Stambol, AR Glasses for Consumer & Enterprise Users. 2020.

7. Ray Optics Simulation,

8. Nreal Air’s AR Glass Amazon shopping page,

9. C-Thru’s AR helmet for firefighters from CBS mornings,


Additional Project Information

Presentation files:
Research paper:
Project files:

Research Plan:

The research will be divided into five stages with five different objectives. 

The first objective is to display graphical information from a server (laptop) on a large-scale model. The image can be a single color, though multiple colors would be better. To approach this objective, a projector and lenses will be used to project the laptop screen onto a thin, transparent board, and an optical table will be used to build the model. The first objective is achieved when a clear image is formed on the screen.

The second objective is to build a scaled-down model of the glasses. The proper scale for this objective is the size of regular glasses, weighing at most 250 g. This requires a tiny projector, so a small projector will be purchased and installed. To hold the structure, a 3D printer will print some parts and a laser cutter will cut plywood pieces to build the frame. The sign of success for this objective is that a clear projection can be seen on the glasses when the projector is connected to the server (laptop).

The third objective is to track the position and rotation of the projector. An ESP8266 board and an MPU-6050 accelerometer will be used here. The ESP8266 and accelerometer will be mounted on the frame; the ESP8266 will then be connected to the server (laptop), and the data from the accelerometer will be displayed on the server. The acceptable lag should be less than 0.1 seconds, the display will run at 60 FPS, and the resolution will be 1920 ⨉ 1080, the same as the laptop. Considering the effective range and usage conditions, the limit on rotational speed will be set to 166.7 rpm and on acceleration to ±8 g. The third objective is accomplished when the correct acceleration and rotation are shown on the server. The displacement will be calculated by integrating the acceleration twice.
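The last step above, recovering displacement by numerically integrating acceleration twice, can be sketched as follows. This is an illustrative sketch rather than the project's actual code; it assumes evenly spaced samples along one axis and uses the trapezoidal rule for both passes:

```python
def integrate_twice(samples, dt):
    """Estimate displacement from evenly spaced acceleration samples.

    samples: acceleration readings along one axis (m/s^2).
    dt: sample period in seconds.
    Returns the displacement after the final sample, starting from rest.
    """
    velocity = 0.0
    position = 0.0
    prev_a = samples[0]
    for a in samples[1:]:
        # Trapezoidal rule: average adjacent samples for each integration pass.
        new_velocity = velocity + 0.5 * (prev_a + a) * dt
        position += 0.5 * (velocity + new_velocity) * dt
        prev_a, velocity = a, new_velocity
    return position
```

For a constant 2 m/s² acceleration sampled over one second, this returns the expected 0.5·a·t² = 1 m. With real MPU-6050 readings, noise and the gravity component make the double integral drift, which is why the refresh rate and gravity handling matter later in the plan.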

The fourth objective is to build a simple user controller. The work of the second and third objectives will be repeated, and a cylindrical container holding an ESP8266 and an accelerometer will be built. The success condition is again showing the displacement and rotation on the server.

The fifth objective is to create a simple scene that gives the user feedback when they move their head or the controller. A Unity scene with a colored cylinder and a camera will be created to receive the data from the two ESP8266 boards, which connect to the server on different ports. The camera shows what the user should see, and the cylinder should accurately follow the position of the controller (excursion less than 1 mm). The goal of this phase is to let users see both the real world and the colored cylinder tied to the controller.
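Receiving data from two boards on different ports, as the plan describes, can be sketched with a small server-side UDP listener. The port numbers and the comma-separated "ax,ay,az,gx,gy,gz" payload format are assumptions for illustration; the real firmware may use any framing:

```python
import selectors
import socket

def open_receivers(ports=(4210, 4211)):
    """Bind one UDP socket per ESP8266 board and register them with a selector.

    The default port numbers are hypothetical; pass whichever ports the
    boards are configured to send to.
    """
    sel = selectors.DefaultSelector()
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        sock.setblocking(False)
        # Remember which port each socket serves so readings can be labeled.
        sel.register(sock, selectors.EVENT_READ, data=port)
    return sel

def poll_once(sel, timeout=0.1):
    """Return {port: [ax, ay, az, gx, gy, gz]} for boards that sent data."""
    readings = {}
    for key, _events in sel.select(timeout):
        payload, _addr = key.fileobj.recvfrom(1024)
        readings[key.data] = [float(v) for v in payload.decode().split(",")]
    return readings
```

Keyed by port, the headset and controller streams stay separate, which is what the Unity scene needs to drive the camera and the cylinder independently.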


Questions and Answers


1. What was the major objective of your project and what was your plan to achieve it? 
This project was created to integrate augmented reality (AR) and virtual reality (VR). Current AR sets let users move freely but cannot generate a real-time 3D display, while VR lets people enjoy a vivid virtual world but constrains them to a small, closed area. This project is a bold trial to extend AR's functions, enabling people to enjoy the virtual world in a free, broader real area. It could also be a small step in a VR technology revolution, by taking a totally different approach to position tracking.

2. What were the major tasks you had to perform in order to complete your project?
To complete the project, I had to build an AR near-eye projection model and design a different way of doing position tracking.
For the AR near-eye projection, I use a projector to reproduce the laptop screen, and use optics to reflect and refract the light toward the human eye.
For position tracking, both the hardware and the software had to be created by myself. On the hardware side, I use an ESP8266 board to transmit the data received from an MPU-6050 accelerometer to the laptop. On the software side, the data from the ESP8266 is processed to drive a Unity scene that displays the result of position tracking, with models showing the rotation and displacement.

3. What is new or novel about your project?
Compared to the projects I worked on before, here I tried to use platforms for purposes different from what they were designed for in order to achieve my final goal. For example, I used Unity for displaying sensor data. Though the implementation was just a simple use of the tool, it reminded me that a tool is just a tool; the result is determined by how people use it.
Compared to the products I have seen, my project has more functions than they do. An example is Nreal's AR glasses, which just project laptop, phone, or game console screens in front of people's eyes. My project advances on this by adding a position tracking function to give users a better AR experience.
To confirm my work is unique, I took a deep dive into products available on the market or reported in documentaries, looking at what functions they have and how they realize them. Another example is C-Thru's AR headset for firefighters, which generates a real-time outline of the environment. Its advantage is also its disadvantage: it relies on the real-world scene and cannot create a totally new object on flat ground. In my project, once a starting point is given, everything else can be generated automatically without anything more.

4. What was the most challenging part of completing your project?
I encountered challenging problems both on the AR part and the position tracking part. 
When working on the AR near-eye display, I first tried to display information from the projector directly on a transparent screen. But the image was always blurred, because the screen sat closer to the eye than its near point of accommodation. I solved the problem by changing the basic idea to reflecting the light off the transparent screen instead. From this problem, I learned that when we cannot achieve the main goal, the basic idea itself may be wrong, and we may need to replace it with a totally different one.
Later on, I started working on the Unity scene. My program precisely followed the formula I had derived, but the model just kept drifting without making any sense. I once thought of changing the basic idea as I had before, but the right formula was right there; no one could change it. So I gave up changing the idea and kept trying to fix the bug for about a week. At last, I fixed it by adding an anti-gravity term, because the accelerometer data included the reaction to gravity, which in reality is balanced by the upward force of the human hand. From this problem, I learned that sometimes when we cannot achieve the main goal, it could be because we have not put enough effort into it.
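The fix described here, cancelling the gravity component before integrating, can be sketched as follows. This is an illustrative sketch, not the project's actual code: the rotation matrix R (sensor frame to world frame) would come from the gyroscope data in the real device, and here it is simply passed in:

```python
G = 9.81  # gravitational acceleration, m/s^2 (assumed constant)

def linear_acceleration(f_sensor, R):
    """Remove gravity from a raw accelerometer reading.

    f_sensor: the specific force measured by the accelerometer (sensor frame);
              for a stationary sensor lying flat this reads (0, 0, +9.81).
    R: 3x3 rotation matrix from the sensor frame to the world frame.
    Returns the motion-only acceleration in the world frame.
    """
    # Rotate the reading into the world frame.
    f_world = [sum(R[i][j] * f_sensor[j] for j in range(3)) for i in range(3)]
    # An accelerometer measures a - g; add the (downward) gravity vector back
    # so that a stationary sensor yields zero net acceleration.
    return [f_world[0], f_world[1], f_world[2] - G]
```

With this correction, a sensor at rest produces zero acceleration, so the double integration no longer accumulates the spurious drift described above.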
Now, reviewing the whole course of my project, I realize another idea for overcoming tough problems. As the saying goes, there is always a different way to solve a problem. When we find a problem hard to solve, we can change the strategy we are working with; changing paths can be a simple way to reach a target. But sometimes, when we believe our strategy is definitely correct, we need to stick with it until the end. To sum up, I learned from my project that when we are trapped somewhere, we can either choose a second path or walk to the end. Which way is correct? No one knows the definite answer until everything is over. Until then, we need to keep trying to find solutions and keep making progress.

5. If you were going to do this project again, are there any things you would do differently the next time?
The shorter the data refresh period is, the more sensitive the position tracking system will be, and the better the whole set will perform. If I could do this project again, I would try my best to get a better accelerometer and a better transmitting board, to achieve a higher data refresh rate for better position tracking.
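The effect of the refresh period on tracking quality can be illustrated with a small dead-reckoning experiment: integrating the same acceleration signal at two sample rates and comparing against the exact displacement. This is an illustrative sketch using simple forward-Euler integration, not a measurement of the real sensor:

```python
import math

def euler_double_integrate(accel, dt, t_end):
    """Dead-reckon displacement from a sampled acceleration function.

    accel: callable a(t) giving acceleration in m/s^2.
    dt: sample period in seconds; t_end: total duration in seconds.
    Forward Euler, starting from rest at the origin.
    """
    v = x = t = 0.0
    while t < t_end:
        x += v * dt          # position update with current velocity
        v += accel(t) * dt   # velocity update with sampled acceleration
        t += dt
    return x

# For a(t) = sin(t) starting from rest, the exact displacement is x(t) = t - sin(t).
true_x = 2.0 - math.sin(2.0)
coarse_err = abs(euler_double_integrate(math.sin, 0.1, 2.0) - true_x)
fine_err = abs(euler_double_integrate(math.sin, 0.01, 2.0) - true_x)
# The shorter sample period tracks the true displacement much more closely.
```

Because the error of each integration step scales with the sample period, halving the refresh period roughly halves the accumulated drift, which is why a faster accelerometer and link would directly improve the tracking.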

6. Did working on this project give you any ideas for other projects? 
We could add different functions to the product to apply the model to different jobs. For example, with an external robot controlled by human motion, people would be able to do jobs from far away: doctors could operate on patients even while on vacation, factory workers could produce things that are too large or too tiny for bare hands, and, thinking beyond the Earth, people could explore other planets without landing on them, given a short delay. In many more areas, the product could be applied in similar ways.

7. How did COVID-19 affect the completion of your project?
Since I worked on my project on my own at school, COVID-19 did not have a direct influence on the completion of my project.