AI-based Guard tool analyzes privacy policies

Written by Valdis Koks

A developer from Spain has released Guard, an AI-based program that reads and analyzes the privacy policies of various services and applications on users’ behalf.

Few users really read the privacy policy when creating an account with a service or application. As a rule, they tick the “I accept” box without reading the terms.

To make life easier for users, security researcher Javi Rameerez has created a tool that analyzes the privacy policies of popular applications and flags clauses that could pose a privacy risk.

“AI that reads privacy policies for you. Tinder shares your conversations and matches, Twitter sells your info and Instagram can’t even guarantee your data security. There’s much more. Come check it out on”, — Javi Rameerez, presenting Guard on Twitter.

According to Rameerez’s plan, Guard is meant to become an AI-based application available for download, but for now it is presented as a free website (a beta version of the app is available to participants in the testing program).

The site lets users analyze the privacy policies of popular services such as Twitter, Instagram, Tinder, WhatsApp, Netflix, Spotify, Reddit and Duolingo. The tool has not yet analyzed every existing application, but users can suggest new services for analysis (for example, Facebook).
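
The article does not describe how Guard performs this analysis internally, so the snippet below is only a minimal sketch of the general idea of flagging risky clauses in a policy text. The `RISK_PATTERNS` table, the `flag_risky_clauses` helper and the keyword patterns are hypothetical stand-ins for the NLP models Guard actually uses.

```python
import re

# Hypothetical risk patterns -- simple stand-ins for a real NLP model.
RISK_PATTERNS = {
    "shares data with third parties": r"\b(share|disclose)\b.*\bthird[- ]part(y|ies)\b",
    "sells personal information":     r"\bsell\b.*\b(personal (data|information)|your (data|info))\b",
    "no security guarantee":          r"\bcannot (guarantee|ensure)\b.*\bsecurity\b",
}

def flag_risky_clauses(policy_text: str) -> list[tuple[str, str]]:
    """Return (risk_label, sentence) pairs for sentences matching a risk pattern."""
    findings = []
    # Naive sentence split; a production system would use a proper NLP pipeline.
    for sentence in re.split(r"(?<=[.!?])\s+", policy_text):
        for label, pattern in RISK_PATTERNS.items():
            if re.search(pattern, sentence, flags=re.IGNORECASE):
                findings.append((label, sentence.strip()))
    return findings

if __name__ == "__main__":
    sample = ("We may share your conversations with third parties. "
              "We cannot guarantee the security of your data.")
    for label, sentence in flag_risky_clauses(sample):
        print(f"[{label}] {sentence}")
```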

Currently, the Guard website provides valuable information about the potential threats posed by each individual service, rates how dangerous they are, and reports on data-leak scandals in which the analyzed services have been involved.

Each analyzed application receives a score in the form of a percentage and a letter grade. For example, Twitter received only 15% and a “D”, Instagram 21% and a “D”, and YouTube 37% and a “C”. Notably, Telegram received as much as 105% and the highest grade, A+.

Javi Rameerez, the Madrid-based developer who created Guard’s software, is interested in AI systems dedicated to natural language processing (NLP). He’s also interested in AI ethics.

Rameerez describes Guard as an academic experiment — it’s actually his thesis in progress on AI and NLP. He says the aim is to teach machines how humans think about privacy. To do that, Guard needs input from lots and lots of humans.

The developer asks the public to help train the AI by taking a quiz on the website and resolving ethical dilemmas.

“Each data point helps the AI understand what’s acceptable and what’s not in regard to human privacy. The end goal is to teach machines so they can keep us safe in an increasingly dangerous internet”, — the website says.
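
Guard’s training code is not public, but the approach the quote describes, collecting human acceptability judgments on policy clauses and using them as labels for a supervised NLP model, can be illustrated with a minimal sketch. The example clauses, the 0/1 labels and the choice of a TF-IDF + logistic-regression pipeline (scikit-learn) are assumptions for illustration, not Guard’s actual model or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical crowdsourced judgments: each policy clause is labeled
# 1 ("privacy risk / not acceptable") or 0 ("acceptable") by quiz takers.
clauses = [
    "We share your conversations and matches with partners.",
    "We sell your personal information to advertisers.",
    "We cannot guarantee the security of your data.",
    "We store your data only as long as your account is active.",
    "You can delete your account and all associated data at any time.",
    "We never share your data with third parties.",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features + logistic regression: a minimal stand-in for the
# NLP system the article describes.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, labels)

new_clause = "We may disclose your messages to third-party advertisers."
risk = model.predict_proba([new_clause])[0][1]
print(f"Estimated privacy-risk probability: {risk:.2f}")
```

With more labeled clauses from the quiz, the same pipeline could rank whole policies by the share of clauses it judges risky, which is roughly the kind of per-service score the site presents.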

In Guard’s case, we have an example of an NLP system that stands to help us, both by alerting us to our apps’ problematic privacy policies and by giving us a chance to take part in developing an AI from the ground up.

About the author

Valdis Koks

Security engineer, reverse engineering and memory forensics
