Frequently Asked Questions

Of course. Computer vision can be used for a variety of purposes, and our engineers can readily build our technology into other platforms. Give us a call to discuss your project.

Each test has a unique link you can distribute to users. You can also embed the video test directly into an HTML page using our provided embed code. It is as easy as sharing a YouTube video. Data can be merged between the two systems using unique panelist IDs.
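The cross-system merge mentioned above amounts to a join on the shared panelist ID. A minimal sketch follows; the field name `panelist_id` and the row shapes are assumptions for illustration, not EmotionReader's actual export format:

```python
def merge_by_panelist_id(emotion_rows, survey_rows):
    """Join two data exports on a shared unique panelist ID.

    Field names (e.g. ``panelist_id``) are hypothetical. Rows
    without a match in both systems are dropped (inner join).
    """
    surveys = {row["panelist_id"]: row for row in survey_rows}
    return [
        {**row, **surveys[row["panelist_id"]]}
        for row in emotion_rows
        if row["panelist_id"] in surveys
    ]
```

With one export of per-panelist emotion scores and one of survey answers, the result is a single combined row per panelist who appears in both.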

We provide structured tests with clear analytical frameworks, ensuring that there is very little ambiguity in test results.

We charge a small fee per video, plus any hard costs for things like paying panelists to view videos. See our pricing page for more info.

Absolutely. Data security is our highest priority. As an EmotionReader customer, you will benefit from technical infrastructure designed to meet the requirements of the most security-sensitive organizations. Our infrastructure has no single point of failure and is certified ISO 27001 compliant, and your data is replicated across different cloud regions for redundancy. Our system runs in AWS and makes use of the standard cloud security mechanisms: security groups, IAM roles, and IAM policies. Each environment is isolated in its own VPC, and developers have no direct access to any machine. To protect the API, the system uses a combination of tools such as JWT and reCAPTCHA.
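To illustrate the JWT piece of that stack (this is a generic sketch of how HS256-signed tokens work, not EmotionReader's actual implementation), a token can be issued and verified with nothing but the Python standard library:

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def _b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))


def sign_jwt_hs256(payload: dict, secret: str) -> str:
    """Issue a JWT signed with HMAC-SHA256 (the HS256 algorithm)."""
    header = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url_encode(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url_encode(sig)}"


def verify_jwt_hs256(token: str, secret: str):
    """Return the claims if the signature checks out and the token
    has not expired; otherwise return None."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(
        secret.encode(), f"{header_b64}.{payload_b64}".encode(), hashlib.sha256
    ).digest()
    # Constant-time comparison guards against timing attacks.
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        return None
    payload = json.loads(_b64url_decode(payload_b64))
    if payload.get("exp", float("inf")) < time.time():
        return None
    return payload
```

A server holding the secret can reject any request whose token was tampered with or has expired, without a database lookup. Production systems typically use a maintained library (e.g. PyJWT) rather than hand-rolled code.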

Using AI, computer vision, and machine learning, we created patent-pending algorithms that recognize facial structures and track tiny movements, allowing us to accurately infer emotional response.

Just ask. We’re happy to help. For larger projects, we can provide an analyst for an hourly fee.

Our algorithms are designed to infer emotion, age and gender, not identity. We do not do facial recognition, which tries to match a face to an identity. Facial coding can be done anonymously.

Anyone recording a video of themselves must first opt in and is given access to our full privacy policy. To sum it up simply, while there are exceptions, any videos recorded and uploaded to our server are analyzed by our computer algorithm, then deleted. This ensures we are not storing personally identifiable information (PII).

In addition to age and gender, our algorithms report on attention, joy, surprise, and dislike. We also have a summary emotional response metric called valence. Valence tracks overall negative or positive response rather than individual metrics. We will continue to add more emotions in the future.
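One way to read the valence metric described above: a signed summary that nets positive signals against negative ones over a viewing session. The sketch below uses assumed equal weights and assumed score names; it is an illustration only, not EmotionReader's actual (patent-pending) computation:

```python
def valence(frames):
    """Summarize per-frame emotion scores (each assumed in [0, 1])
    into one signed number: positive means net-positive response.

    Treating joy/surprise as positive and dislike as negative,
    with equal weights, is an assumption for illustration.
    """
    if not frames:
        return 0.0
    per_frame = [
        f.get("joy", 0.0) + f.get("surprise", 0.0) - f.get("dislike", 0.0)
        for f in frames
    ]
    return sum(per_frame) / len(per_frame)
```

A session whose frames score high on joy would yield a positive valence, while one dominated by dislike would come out negative.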

Our research team uses state-of-the-art deep learning, a machine learning technique inspired by the human brain, to understand emotions. The algorithms work across all ages and ethnicities, and in real-world lighting conditions, to achieve great results.

Free Demo

Are you interested in learning more?

Please don’t hesitate to get in touch with us to schedule a free demo or ask any questions!

Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.