

Our Team
We are a group of dedicated individuals who share a common goal: to collaborate and produce exceptional work. Our team is made up of individuals with diverse backgrounds, experiences, and perspectives, which allows us to approach challenges from multiple angles and generate innovative ideas.
We believe that teamwork is essential to achieving success, and we strive to create a supportive and inclusive environment where everyone feels valued and respected. Each member of our team brings unique strengths and skills, and we work together to leverage our collective expertise to produce outstanding results.





Jonathan E. Garcia
Computer Studies and Engineering
Institute of Computer Engineers of the Philippines – JRU Chapter
jonathan.garcia@my.jru.edu
Jeminteza L. Inopia
Computer Studies and Engineering
Institute of Computer Engineers of the Philippines – JRU Chapter
jeminteza.inopia@my.jru.edu


BACHELOR OF SCIENCE IN COMPUTER ENGINEERING
Jasmine Mae Q. Loyola
Computer Studies and Engineering
Institute of Computer Engineers of the Philippines – JRU Chapter
jasminemae.loyola@my.jru.edu

Dave Harley Bermas
Computer Studies and Engineering
Institute of Computer Engineers of the Philippines – JRU Chapter
daveharley.bermas@my.jru.edu
Development of Smart Robot Cane with Computer Vision Technology
This research aimed to develop a Smart Robot Cane that identifies objects and reads text using computer vision technology and reports the user's location with a GPS locator. An object is captured with a USB webcam and passed through image preprocessing before being sent to the YOLOv5 algorithm running on a Raspberry Pi 4, which produces the detection result for the captured image. Additionally, the system includes a GPS module that receives coordinates and passes them through a reverse geocoder, so the output of the GPS locator is a location or place name. For training and validation of the YOLOv5 model, 4,163 images of a smartphone, computer, book, electric fan, table, and text were split into 70% for training and 30% for validation. The resulting model achieved a true positive detection rate of 0.83 to 0.96 across the six objects used. This smart robotic cane, with its object recognition and text recognition capabilities, can help visually impaired people (VIPs) gain access to education: it is designed to help them find their way through obstacle avoidance, detect objects, and read text and educational materials. This gives a person who faces an uphill battle against peers without visual impairments a better chance at success both in school and in life.
Keywords—YOLOv5, Computer Vision, Reverse Geocoder, Image Processing, Ventilation, Machine Learning, EasyOCR.
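As a rough illustration of the detection pipeline described in the abstract, the sketch below captures a frame from the USB webcam, converts it for a custom-trained YOLOv5 model loaded through torch.hub, and prints the detected class names. The weights file name (best.pt) and the camera index are illustrative assumptions, not values taken from the study.

```python
import cv2
import torch

# Load a custom-trained YOLOv5 model via torch.hub (weights file name assumed).
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

cap = cv2.VideoCapture(0)  # USB webcam; device index 0 is an assumption

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Basic preprocessing: OpenCV delivers BGR frames, YOLOv5 expects RGB.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        results = model(rgb)
        # One row per detection: bounding box, confidence score, class name.
        for _, det in results.pandas().xyxy[0].iterrows():
            print(f"Detected {det['name']} ({det['confidence']:.2f})")
finally:
    cap.release()
```

In the actual prototype, the detection output would presumably be surfaced to the user (for example through audio) rather than printed to a console.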

1.4 SCOPE AND LIMITATIONS
This section focused on the scope, boundaries, and constraints of the study. This study included the following features:
● A smart robotic cane assistant system that would detect an object through image processing with the use of a camera.
● An ultrasonic sensor that can be used for obstacle avoidance (a minimal sensing sketch follows this list).
● A beep sound that would notify the user of a potential obstacle in their surroundings.
● The voice output for reading the text when the text reading mode is activated.
● A switch to power the system ON/OFF.
● Two (2) button options: one to activate text recognition and one to activate the GPS locator.
● A 2- to 3-inch barrier around the ultrasonic sensor so that the user does not shatter it.
● Can track the location of the user.
● An 18650 rechargeable battery for the power supply.
However, the study was limited to the following:
● The camera cannot rotate.
● The GPS module can only determine the user's city-level location and has no navigation functions.
● For the prototype implementation, the robot cane assistant can only identify six distinct objects and avoid obstacles.
● The robot cane assistant is built at a fixed height of 3 feet.
● The GPS works only if the user has a stable signal.
● Text recognition can read only English text and numbers, and room lighting may affect both object detection and text reading.
● The battery can last 8 to 10 hours.
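The obstacle-avoidance feature above (ultrasonic sensor plus warning beep) can be pictured with the minimal sketch below, assuming an HC-SR04-style ultrasonic sensor and an active buzzer on the Raspberry Pi's GPIO pins. The pin numbers and the 50 cm warning threshold are illustrative assumptions, not the study's wiring or settings.

```python
import time
import RPi.GPIO as GPIO

# Illustrative pin assignments (BCM numbering); the prototype's wiring may differ.
TRIG, ECHO, BUZZER = 23, 24, 18
WARN_DISTANCE_CM = 50  # assumed warning threshold, not taken from the study

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup(BUZZER, GPIO.OUT)

def distance_cm():
    """Trigger the ultrasonic sensor and convert the echo pulse to centimeters."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)              # 10 microsecond trigger pulse
    GPIO.output(TRIG, False)

    start = stop = time.time()
    while GPIO.input(ECHO) == 0:     # wait for the echo pulse to start
        start = time.time()
    while GPIO.input(ECHO) == 1:     # wait for the echo pulse to end
        stop = time.time()
    # Sound travels about 34300 cm/s; halve for the round trip.
    return (stop - start) * 34300 / 2

try:
    while True:
        if distance_cm() < WARN_DISTANCE_CM:
            GPIO.output(BUZZER, True)    # short beep to warn the user
            time.sleep(0.2)
            GPIO.output(BUZZER, False)
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```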

1.1 OBJECTIVES OF THE STUDY
● Develop a smart robotics cane to detect obstacles and notify the user.
● Design two buttons and one switch on the Smart Robotics Cane: one button to activate text recognition, one to activate the GPS locator, and the switch to power the system on and off.
● Develop an object recognition system with a camera integrated into the smart robotics cane to detect and recognize objects.
● Develop a system for recognizing and reading text (a minimal sketch of the text-reading and GPS modes follows this list).
● Test the accuracy and efficiency of the system, including its obstacle avoidance, based on the testing results.
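As a rough sketch of the two button-activated modes in the objectives above, the snippet below reads English text from a captured image with EasyOCR and speaks it aloud, then converts GPS coordinates into a city-level place name with an offline reverse geocoder. The image file name, the sample coordinates, and the use of pyttsx3 for voice output are assumptions for illustration, not the study's exact implementation.

```python
import easyocr
import pyttsx3
import reverse_geocoder as rg

engine = pyttsx3.init()  # text-to-speech engine (an assumed choice)

# --- Text recognition mode (button 1) ---
reader = easyocr.Reader(["en"])              # English only, per the study's scope
detections = reader.readtext("capture.jpg")  # image file name is assumed
text = " ".join(item[1] for item in detections)  # each item: (bbox, text, confidence)
engine.say(text if text else "No text detected")
engine.runAndWait()

# --- GPS locator mode (button 2) ---
lat, lon = 14.58, 121.03                     # sample coordinates, not from the study
place = rg.search((lat, lon))[0]             # nearest city-level match (offline lookup)
engine.say(f"You are near {place['name']}, {place['admin1']}")
engine.runAndWait()
```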

1.5 SIGNIFICANCE OF THE STUDY
Blind Community. The study would directly benefit the blind community, which includes people who are blind or losing their vision, parents of blind children, blind parents, seniors, educators, employers, and students. With the implementation of this study, it will help people with visual impairments recognize the text and objects in front of them.

General Public. Other direct beneficiaries of the study are people who cannot read text or identify certain specific objects. Schools and universities will also gain awareness and expand their ideas on how to adapt to the rapid advancement of technology. A system that is convenient and efficient for detecting objects and recognizing text will also help promote business.

Establishments. Essentially, it serves establishments such as warehouses, plants, and malls by alerting and notifying people in the event of a fire, thus shortening response time and reducing property damage caused by a fire. In addition, it hopes to provide a cheaper and more reliable fire detection system compared to what is currently available.

Future Researchers. For researchers who pursue this work, it can serve as a research topic for understanding how the system functions. By using this information as reference material, they will be able to build a more complex system that can help others.
DEMO VIDEO
