How FaceScan Works
| Scan | FaceScan |
|---|---|
| Duration | <60 s |
| Data Returned | Vital signs and indicators (e.g. blood pressure, heart rate, heart rate variability); health risks (e.g. cardiovascular disease, heart attack) |
| Processing | Cloud-based |
| Platforms | iOS, Android |
| Total submodule size | ~6.5 MB |
| Min. Requirements | iPhones: iOS 12.1 or above. Android: Android 8 or above, 64-bit, with OpenGL 3.1 support |
| Requirements to Operate | Internet connection, user input |
Requiring only a function call to launch, our FaceScan is designed to be triggered from anywhere within your app.
Below is the typical journey a user goes through to access the scan and then view their results:
We first conduct a 3-second calibration, locking onto key areas of the face and adjusting the image contrast for an accurate reading.
Once calibrated, a 30-second scan measures those key points on the face and securely sends the recorded measurements to the cloud.
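The two-phase journey above can be sketched as a simple sequence of events. This is an illustrative mock, not the real SDK API: the function name, event names, and the shortened durations (so the example runs instantly) are all assumptions.

```python
import time

def run_face_scan(calibration_s=0.003, scan_s=0.03):
    """Hypothetical sketch of the scan flow: a short calibration phase
    followed by the measurement phase. Durations are scaled down here;
    the real flow uses ~3 s of calibration and a 30 s scan."""
    events = []
    events.append("calibration_started")
    time.sleep(calibration_s)        # lock onto key facial regions, adjust contrast
    events.append("calibration_complete")
    events.append("scan_started")
    time.sleep(scan_s)               # record measurements frame by frame
    measurements = {"frames": 900}   # placeholder: ~30 s of frames at 30 fps
    events.append("scan_complete")
    return events, measurements

events, measurements = run_face_scan()
print(events)
```

The single entry-point shape mirrors how the scan is launched from anywhere in a host app with one function call.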
Behind The Scenes
We use advanced signal processing to read the minute changes in color of the sub-dermal layer of the skin. This works on any skin type and tone.
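To make the idea concrete, here is a minimal sketch of recovering a pulse rate from tiny periodic colour changes, assuming a 30 fps camera and a synthetic green-channel signal standing in for real face-region frames. This illustrates the general signal-processing principle only, not the SDK's actual algorithm.

```python
import numpy as np

FPS = 30          # assumed camera frame rate
DURATION_S = 30   # matches the 30-second scan described above
PULSE_HZ = 1.2    # synthetic pulse: 1.2 Hz = 72 bpm

# Simulate the per-frame mean green-channel intensity of the face region.
# The small sinusoid stands in for the sub-dermal colour change, plus noise.
t = np.arange(FPS * DURATION_S) / FPS
rng = np.random.default_rng(0)
signal = 0.5 + 0.01 * np.sin(2 * np.pi * PULSE_HZ * t) \
             + 0.002 * rng.standard_normal(t.size)

# Remove the DC component, then find the dominant frequency within a
# plausible heart-rate band (0.7-4.0 Hz, i.e. 42-240 bpm).
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1 / FPS)
band = (freqs >= 0.7) & (freqs <= 4.0)
peak_hz = freqs[band][np.argmax(spectrum[band])]
bpm = round(peak_hz * 60)
print(bpm)  # estimated heart rate in beats per minute
```

Because the method reads relative changes in reflected light rather than absolute skin colour, it works across skin types and tones, as noted above.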
After the scan completes, the recorded measurements are processed in the cloud into results.
Behind The Scenes
Using our cloud-based deep learning neural network, we process the recorded measurements and provide a result.
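The cloud round trip can be pictured as a measurement payload going up and a structured result coming back. Everything below is a hedged sketch: the schema name, field names, and the mocked inference step are illustrative assumptions, not the real endpoint or model.

```python
import json

def build_payload(measurements):
    """Serialise recorded measurements for upload (illustrative schema)."""
    return json.dumps({"schema": "facescan/v1", "measurements": measurements})

def mock_cloud_inference(payload):
    """Stand-in for the cloud-based deep learning model: parses the
    payload and returns a structured result with illustrative values."""
    data = json.loads(payload)
    n = len(data["measurements"])
    return {"heart_rate_bpm": 72, "frames_processed": n}

payload = build_payload([0.51, 0.49, 0.50])
result = mock_cloud_inference(payload)
print(result["frames_processed"])  # → 3
```

Keeping inference in the cloud is what lets the on-device submodule stay small (~6.5 MB) while the model evolves server-side.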