LUX MOTUS
SMART BULBS WITH MOTION SENSORS & AI SYSTEMS
VISION
Ensuring timely and accessible assistance for seniors is a critical challenge, often hindered by trust issues and financial barriers to securing adequate caregiving. This concern is particularly troubling: CDC research indicates that over 25% of older adults fall each year, yet less than half report these incidents to healthcare professionals [1]. Delayed professional intervention exacerbates injuries and increases treatment costs.
We aim to address this pressing issue by integrating advanced home technology. Our innovative solution, an AI-powered LED light bulb, detects unusual movements and provides immediate notifications for swift assistance. This project offers older adults and their families peace of mind by enabling early treatment support and potentially reducing medical expenses through prompt attention to injuries. We are dedicated to bridging the gap in care accessibility for seniors, leveraging modern technology to foster trust, affordability, and efficiency in their support systems.

FULL MODEL
SMART BULBS TO PREVENT ELDERLY INJURIES
FIGURE 1: OVERVIEW OF PROPOSED SYSTEM

APPROACH
The aging population underscores the growing risk of falls, posing a significant public health concern with physical, emotional, and financial consequences [2, 3]. Traditional home safety methods lack real-time monitoring, leaving older adults vulnerable to accidents. This highlights an urgent need for a comprehensive in-home safety system that surpasses conventional approaches to specifically address the danger of hip fractures. Scientific literature emphasizes the multifaceted challenges, including the burden on healthcare systems, the emotional and financial strain on families, and the diminished quality of life for affected individuals [4, 5].
Existing in-home safety solutions primarily rely on bulky wearable devices equipped with basic motion sensors. While these provide a level of monitoring, they lack the sophistication to differentiate normal activities from potential fall-inducing hazards. Their reactive nature and absence of personalized interventions, such as those based on individual gait patterns, further limit their effectiveness.
The proposed solution leverages the widespread adoption of smart home technology by integrating AI directly into a smart light bulb system to enable real-time gait analysis and prevent hip fractures. This innovation involves embedding advanced motion detection sensors and cameras with AI algorithms into the infrastructure of smart light bulbs. These enhancements enable intelligent detection, analysis, and response to movements that pose fall-related risks.
The primary goal of this solution is to address the reluctance of older adults to use bulky or uncomfortable hip fracture prevention devices. By utilizing a familiar and non-intrusive element of the home environment—the light bulb—this system provides a seamless and proactive safety solution. It becomes an integral part of daily life, enhancing safety while minimizing disruption and improving overall quality of life for seniors.
PROJECT & LOGISTICS ORGANIZATION
To develop an optimized gait analysis model, we aim to compare two distinct approaches to gait assessment: (1) a system utilizing 3D voxel reconstruction from silhouettes [6, 7], and (2) a system employing Mask R-CNN [8, 9]. Each approach is grounded in unique technical methodologies and computational processes, which we will analyze to achieve optimal gait analysis results.
1) System Using 3D Voxel Reconstruction from Silhouettes:
Silhouette extraction is a crucial component of this system, isolating the human body’s silhouette in each video frame to represent the individual’s outline during various walking phases. The 3D voxel reconstruction method builds upon this by back-projecting silhouettes from multiple calibrated webcam views and intersecting them in 3D space to construct a "voxel person." This three-dimensional model provides detailed spatial information about the individual’s body and movements during walking.
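The silhouette-intersection step can be sketched as a visual-hull carving routine. This is a minimal illustration, not the cited system's implementation: it assumes two orthographic views (front and side) and pre-extracted binary silhouette masks, whereas the real system back-projects through calibrated perspective webcams.

```python
import numpy as np

def carve_voxels(front_mask, side_mask):
    """Intersect two orthographic silhouettes into a "voxel person".

    front_mask: (Z, X) binary silhouette seen along the Y axis.
    side_mask:  (Z, Y) binary silhouette seen along the X axis.
    Returns an (X, Y, Z) boolean occupancy grid.
    """
    # A voxel survives only if its projection falls inside every silhouette.
    front = front_mask.T[:, None, :].astype(bool)  # (X, 1, Z)
    side = side_mask.T[None, :, :].astype(bool)    # (1, Y, Z)
    return front & side                            # broadcast to (X, Y, Z)

# Example: a 3x2x2 scene where the side view rules out half the voxels.
front = np.ones((2, 3), dtype=np.uint8)  # front view sees everything
side = np.zeros((2, 2), dtype=np.uint8)
side[:, 0] = 1                           # side view sees only y = 0
grid = carve_voxels(front, side)
```

With calibrated perspective cameras, the broadcast intersection is replaced by projecting each voxel center through each camera matrix and testing the resulting pixel against that view's silhouette; the carving principle is the same.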
FIGURE 2: GAIT ASSESSMENT SYSTEM LEVERAGING THE 3D VOXEL MODEL [10]

A 3D voxel model (short for "volumetric pixel") represents a three-dimensional object or space using a grid of volumetric pixels. Each voxel is analogous to a pixel in 2D images but extends into the third dimension, incorporating depth. This model divides the space into small, cube-shaped units (voxels) with specific spatial coordinates and properties. Key characteristics of a 3D voxel model include:
- Grid Structure: The space is organized in a grid, with voxels arranged along three axes (X, Y, and Z), defining the model’s spatial resolution.
- Coordinates: Each voxel is uniquely identified by its X, Y, and Z coordinates, indicating its position within the 3D space.
- Properties: Voxels can contain various attributes such as color, density, or material characteristics, depending on the application. These attributes contribute to a detailed representation of the 3D object.
- Volume Representation: Unlike 2D pixel-based representations, voxels enable volume modeling, making them ideal for depicting complex 3D shapes and structures.
The system extracts gait parameters—including walking speed, step time, and step length—from the 3D voxel reconstruction generated using two calibrated webcam views. Validation is conducted in both laboratory and senior housing environments, simulating daily activities. This approach facilitates continuous gait assessment in unstructured home settings, offering promising applications for fall risk assessment and ongoing gait monitoring.
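Once a voxel person exists per frame, the gait parameters named above fall out of its trajectory. The sketch below is a simplified illustration under stated assumptions: the per-frame ground-plane centroid of the voxel grid has already been computed, and heel-strike frame indices come from a separate event detector.

```python
import numpy as np

def gait_parameters(centroids, heel_strikes, fps):
    """Estimate walking speed, mean step time, and mean step length.

    centroids:    (N, 2) ground-plane centroid of the voxel person per frame.
    heel_strikes: frame indices of detected heel strikes.
    fps:          capture frame rate of the webcams.
    """
    duration = (len(centroids) - 1) / fps                    # seconds walked
    path = np.linalg.norm(np.diff(centroids, axis=0), axis=1).sum()
    speed = path / duration                                  # m/s
    step_time = np.diff(heel_strikes).mean() / fps           # s per step
    step_length = speed * step_time                          # m per step
    return speed, step_time, step_length

# Example: a straight 2 m walk over 21 frames at 10 fps, one step every 5 frames.
centroids = np.stack([np.linspace(0.0, 2.0, 21), np.zeros(21)], axis=1)
speed, step_time, step_length = gait_parameters(
    centroids, np.array([0, 5, 10, 15, 20]), fps=10
)
```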
2) System Using Mask R-CNN:
For this approach, the Mask R-CNN framework is employed. Much like a forensic investigator analyzing a photograph, Mask R-CNN identifies and delineates key marker features through a hierarchical process of recognition, localization, and masking, precisely defining individual components such as facial features and body regions.
In the context of video footage capturing human locomotion, Mask R-CNN tracks and identifies the subject, highlighting critical joints to create a virtual skeletal structure superimposed on the video subject. By analyzing spatial relationships between these skeletal elements and the camera, Mask R-CNN extracts valuable insights into movement dynamics, including walking speed and step length. Specifically, the algorithm identifies and delineates 17 key anatomical landmarks on the human body. Stereo vision techniques are then applied to extract distance data, translating it into a three-dimensional world coordinate system, ensuring accurate representation of anatomical features for gait analysis.
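The stereo-lifting step described above can be sketched with basic disparity geometry. This is a hedged illustration, not the cited pipeline: it assumes a rectified stereo pair, known focal length and baseline, and that the 17 COCO keypoints have already been detected in both images (e.g. by Mask R-CNN's keypoint head).

```python
import numpy as np

def triangulate_keypoints(kp_left, kp_right, f, baseline, cx, cy):
    """Lift 2D keypoints from a rectified stereo pair into 3D.

    kp_left, kp_right: (17, 2) pixel coordinates of the keypoints in the
                       left and right images.
    f:  focal length in pixels; baseline: camera separation in metres.
    cx, cy: principal point of the (shared) camera intrinsics.
    """
    disparity = kp_left[:, 0] - kp_right[:, 0]  # horizontal pixel shift
    z = f * baseline / disparity                # depth from disparity
    x = (kp_left[:, 0] - cx) * z / f            # back-project to camera frame
    y = (kp_left[:, 1] - cy) * z / f
    return np.stack([x, y, z], axis=1)          # (17, 3) coordinates in metres

# Example: one landmark 1 m away, 0.1 m right of the optical axis.
kp_left = np.tile([370.0, 320.0], (17, 1))
kp_right = np.tile([320.0, 320.0], (17, 1))
points = triangulate_keypoints(kp_left, kp_right,
                               f=500.0, baseline=0.1, cx=320.0, cy=320.0)
```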
The system computes gait features such as speed, step time, and step length, with validation performed through the detection of anatomical features in both laboratory and real-world settings. Key advantages include the integration of deep learning technology for precise anatomical feature recognition, enabling a comprehensive and detailed analysis of walking characteristics.
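As one concrete example of such a gait feature, step length can be read off the 3D landmarks directly: at each heel strike it is the distance between the two ankle landmarks. The sketch assumes COCO keypoint ordering (ankles at indices 15 and 16) and that heel-strike frames come from a separate event detector.

```python
import numpy as np

L_ANKLE, R_ANKLE = 15, 16  # COCO keypoint indices for the ankles

def step_lengths(landmarks_3d, heel_strike_frames):
    """Step length = ankle-to-ankle distance at each heel strike.

    landmarks_3d: (N, 17, 3) per-frame 3D landmarks in metres.
    """
    return [
        float(np.linalg.norm(landmarks_3d[t, L_ANKLE] - landmarks_3d[t, R_ANKLE]))
        for t in heel_strike_frames
    ]

# Example: a single heel strike with the ankles 0.6 m apart.
frames = np.zeros((3, 17, 3))
frames[1, R_ANKLE] = [0.6, 0.0, 0.0]
steps = step_lengths(frames, [1])
```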
FIGURE 3: GAIT ASSESSMENT SYSTEM USING MASK R-CNN AI MODEL [11, 12]

FIGURE 4: THE STRUCTURE OF THE MASK R-CNN MODEL [13]

COMPARISON & OPTIMIZATION GOALS
The 3D Voxel system emphasizes cost-effectiveness and privacy, making it particularly suitable for unstructured home environments. Conversely, the Mask R-CNN system leverages deep learning to achieve precise anatomical feature detection, offering comprehensive analysis in both laboratory and real-world settings. Selecting between these systems depends on specific requirements, such as cost, privacy, and application scenarios, to optimize gait analysis results.
With an estimated cost of approximately $1,000, the proposed system shows significant potential for widespread health monitoring deployment, particularly for elderly individuals. However, challenges remain, including the management of assistive devices and addressing issues related to long-term, unstructured home monitoring. Future efforts will focus on refining resident identification algorithms and resolving occlusion challenges in cluttered home environments.
To implement our AI-integrated motion sensor light bulbs within the program budget of $1,000, we have outlined the anticipated costs for essential components:
1. High-Performance GPU
- Purpose: Mask R-CNN training
- Estimated Cost: $500–$700
2. Smart Light Bulbs
- Purpose: Skeletal system integration
- Estimated Cost: $1–$5 per bulb
3. Camera
- Purpose: Gait detection
- Estimated Cost: $100–$400
4. App & Data Management
- Purpose: User system preferences and data handling
- Estimated Cost: $99/year
MILESTONES & PROJECT TIMETABLE
Week 1: System Implementation and Calibration
- Set up motion sensors, webcams, and GPU.
- Calibrate the system using the camera calibration toolbox to ensure accurate parameter studies.
Week 2: Implementation of Motion Detection Algorithms
- Test the precision and sensitivity of the motion detection system.
- Evaluate adaptability in various simulated environments.
- Refine algorithms based on initial test results.
- Optimize the system for improved adaptability.
Weeks 3–4: Integration with Motion Detection Sensors
- Incorporate data from existing motion detection sensors into smart light bulbs as part of the gait assessment system.
- Synchronize motion sensor data with the webcam-based gait analysis system to enhance gait pattern recognition.
Weeks 5–9: Implementation of Silhouette Extraction Algorithm
- Segment silhouettes to form the basis for 3D voxel reconstruction.
- Extract gait parameters, including walking speed, step time, and step length.
- Fine-tune silhouette extraction and 3D voxel reconstruction algorithms.
Weeks 10–14: Implementation of Mask R-CNN Model
- Train the Mask R-CNN model to identify 17 key anatomical landmarks on the human body.
- Extract distance data using stereo vision techniques and translate it into 3D coordinates.
- Optimize the model and fine-tune its performance.
Week 16: Comparison and Validation
- Compare the 3D Voxel and Mask R-CNN models.
- Validate the system using real-world settings.
Week 17: Collection of User Feedback and System Improvement
- Conduct in-home testing with volunteer elderly residents, capturing gait patterns in realistic scenarios using motion detection sensors.
- Use feedback to identify areas for improvement and refine the system further.
REFERENCES
1. Centers for Disease Control and Prevention, National Center for Injury Prevention
and Control (2023). Older Adult Fall Prevention.
2. Bergen G, Stevens MR, Burns ER. Falls and Fall Injuries Among Adults Aged ≥65
Years — United States, 2014. MMWR Morb Mortal Wkly Rep 2016;65:993–998.
DOI: http://dx.doi.org/10.15585/mmwr.mm6537a2
3. Momin, M. S., Sufian, A., Barman, D., Dutta, P., Dong, M. & Leo, M. (2022).
In-Home Older Adults’ Activity Pattern Monitoring Using Depth Sensors: A Review.
Sensors, 22(23), 9067. https://doi.org/10.3390/s22239067
4. Kim, I.K., Kim, CS. Patterns of Family Support and the Quality of Life of the Elderly.
Social Indicators Research 62, 437–454 (2003).
https://doi.org/10.1023/A:1022617822399
5. Stoltz, P., Udén, G. & Willman, A. (2004). Support for family carers who care for an
elderly person at home – a systematic literature review. Scandinavian Journal of
Caring Sciences, 18(2), 111–119. https://doi.org/10.1111/j.1471-6712.2004.00269.x
6. G. K. M. Cheung, T. Kanade, J.-Y. Bouguet and M. Holler, "A real-time system for
robust 3D voxel reconstruction of human motions," Proceedings IEEE Conference on
Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.PR00662), Hilton
Head, SC, USA, 2000, pp. 714-720 vol.2, doi: 10.1109/CVPR.2000.854944.
7. B. M. Smith, V. Chari, A. Agrawal, J. M. Rehg and R. Sever, "Towards Accurate 3D
Human Body Reconstruction from Silhouettes," 2019 International Conference on 3D
Vision (3DV), Quebec City, QC, Canada, 2019, pp. 279-288, doi:
10.1109/3DV.2019.00039.
8. M. Zhassuzak, A. Turegali, Y. Amirgaliyev and Z. Buribayev, "Gait Based Person
Recognition," 2021 IEEE International Conference on Smart Information Systems and Technologies (SIST), Nur-Sultan, Kazakhstan, 2021, pp. 1-5, doi:
10.1109/SIST50301.2021.9465886.
9. L. Gong, J. Li, M. Yu, M. Zhu, and R. Clifford, "A novel computer vision based gait
analysis technique for normal and Parkinson’s gaits classification," 2020 IEEE Intl
Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive
Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf
on Cyber Science and Technology Congress
(DASC/PiCom/CBDCom/CyberSciTech), Calgary, AB, Canada, 2020, pp. 209-215,
doi: 10.1109/DASC-PICom-CBDCom-CyberSciTech49142.2020.00045.
10. E. E. Stone and M. Skubic, "Silhouette classification using pixel and voxel features
for improved elder monitoring in dynamic environments," 2011 IEEE International
Conference on Pervasive Computing and Communications Workshops (PERCOM
Workshops), Seattle, WA, USA, 2011, pp. 655-661, doi:
10.1109/PERCOMW.2011.5766970.
11. Li, Y., Zhang, P., Zhang, Y. & Miyazaki, K. (2019). Gait Analysis Using Stereo
Camera in Daily Environment. 2019 41st Annual International Conference of the
IEEE Engineering in Medicine and Biology Society (EMBC), 00, 1471–1475.
https://doi.org/10.1109/embc.2019.8857494
12. Wang, F., Stone, E., Skubic, M., Keller, J. M., Abbott, C. & Rantz, M. (2013). Toward
a Passive Low-Cost In-Home Gait Assessment System for Older Adults. IEEE
Journal of Biomedical and Health Informatics, 17(2), 346–355.
https://doi.org/10.1109/jbhi.2012.2233745
13. He, Kaiming, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. "Mask R-CNN."
(2018). arXiv:1703.06870 [cs.CV]. https://arxiv.org/pdf/1703.06870