This system is intended for clinician use
only. It is not intended for self-diagnosis.
CONTACT US
Have questions about A-EYE or want to collaborate with us? We'd love to hear from you.
Whether you're a researcher, medical professional, or simply curious about our work, our team is here to
connect.
Reach out to us for inquiries, partnerships, or feedback, and let's work together toward advancing eye care
through innovation.
group4-aeye@gmail.com
ABOUT THE WEBSITE
ABOUT THE MODEL
A-EYE IS DEVELOPED FOR THE DEPLOYMENT OF OUR PROPOSED MODEL - A LOCALLY-AWARE
VISION TRANSFORMER FOR CATARACT SEVERITY CLASSIFICATION.
THIS SYSTEM IS DEVELOPED BY COMPUTER SCIENCE STUDENTS FROM THE POLYTECHNIC
UNIVERSITY OF THE PHILIPPINES - STA. MESA CAMPUS UNDER THE SUPERVISION OF MR. MELVIN ROXAS.
Cataracts remain a leading cause of visual impairment worldwide, with severity levels that critically inform treatment decisions. Traditional assessment relies on slit-lamp biomicroscopy and expert grading, which can be time-consuming and subject to inter-observer variability. To address these challenges, we propose a clinically assistive deep-learning system that leverages a Vision Transformer (ViT) architecture—enhanced with local self-attention—to automatically classify cataract severity into three discrete levels (mild, moderate, severe) using pupil-based imaging. By capturing both fine-grained local patterns and broader structural features, our locally-aware ViT model overcomes the data dependency of standard transformers and outperforms conventional CNN-based approaches in ophthalmic classification tasks. Designed explicitly to augment ophthalmologist expertise rather than replace it, this system delivers rapid, reproducible grading that can streamline clinical workflows and support early intervention.
ABOUT THE TOOL
A-Eye is our deployment of the locally-aware Vision Transformer system for automated cataract severity grading. Built around a streamlined pipeline—image preprocessing, patch embedding, local-global self-attention layers, and a three-class classification head—A-Eye processes pupil-based photographs to deliver real-time severity scores. Its user interface presents clear visual feedback and confidence metrics, enabling clinicians to review and validate each assessment. Lightweight enough for edge-device inference, A-Eye integrates seamlessly into routine ophthalmic examinations, offering a decision-support tool that empowers practitioners with fast, objective, and consistent cataract grading without supplanting their clinical judgment.
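The pipeline above (preprocessing, patch embedding, self-attention, three-class head) can be sketched in miniature. This is an illustrative outline only, assuming random placeholder weights, a single attention head, and toy dimensions; the deployed model's actual architecture, parameters, and preprocessing are not reproduced here.

```python
import numpy as np

# Hypothetical sketch of the described pipeline:
# patch embedding -> self-attention -> pooling -> 3-class severity head.
# All weights are random placeholders, not the trained model's parameters.
rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patch_embed(img, patch=16, dim=64):
    """Split a square grayscale image into patches; project each to `dim`."""
    h, w = img.shape
    p = img.reshape(h // patch, patch, w // patch, patch)
    p = p.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    W = rng.normal(scale=0.02, size=(patch * patch, dim))  # learned in practice
    return p @ W  # (num_patches, dim)

def self_attention(x):
    """Single-head self-attention over the patch tokens."""
    d = x.shape[-1]
    Wq, Wk, Wv = (rng.normal(scale=0.02, size=(d, d)) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d), axis=-1)
    return attn @ v

def classify(img, classes=("mild", "moderate", "severe")):
    tokens = self_attention(patch_embed(img))
    pooled = tokens.mean(axis=0)  # global average pool over patches
    W_head = rng.normal(scale=0.02, size=(pooled.size, len(classes)))
    probs = softmax(pooled @ W_head)  # per-class confidence scores
    return dict(zip(classes, probs))

# Toy 224x224 array standing in for a preprocessed pupil photograph.
scores = classify(rng.random((224, 224)))
print(scores)
```

The per-class probabilities from the softmax head correspond to the confidence metrics surfaced in the interface for clinician review.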
MODEL ARCHITECTURE
CNN: Extracts fine-grained, local features (edges, texture, opacity) around the pupil area.
ViT: Captures global spatial relationships and patterns that CNNs may overlook.
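The complementary roles of the two components can be illustrated as follows. This sketch uses placeholder weights and toy sizes (a Sobel-style kernel and a 16-token attention map chosen purely for illustration), not the production model: a convolution sees only a small local window, while self-attention lets every patch interact with every other patch.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution: each output sees only a local window."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# CNN side: a Sobel-style kernel responds to local edge/opacity changes.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
img = rng.random((32, 32))
local_features = conv2d_valid(img, sobel_x)  # each value sees a 3x3 window

# ViT side: attention weights span all tokens, so distant patches interact.
tokens = rng.normal(size=(16, 8))            # 16 patch tokens, 8-dim each
scores = tokens @ tokens.T / np.sqrt(8)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)      # rows are softmax distributions
global_features = attn @ tokens              # every row mixes all 16 patches

print(local_features.shape, global_features.shape)
```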