AI-based Guard tool explores privacy policies

A developer from Spain has released Guard, an AI-based tool that reads and analyzes the privacy policies of various services and applications on users’ behalf.

Few users actually read privacy policies when creating an account with a service or application. As a rule, they tick the “I accept” box without reading the terms.

In order to make life easier for users, security researcher Javi Rameerez has created a tool designed to analyze the privacy policies of popular applications and find items that could pose a privacy risk.

“AI that reads privacy policies for you. Tinder shares your conversations and matches, Twitter sells your info and Instagram can’t even guarantee your data security. There’s much more. Come check it out,” Javi Rameerez wrote on Twitter, presenting Guard.

According to Rameerez’s plan, Guard is intended to be a downloadable AI-based application, but for now it is available as a free website (a beta version of the app is open to participants in the testing program).

The site lets users analyze the privacy policies of popular services such as Twitter, Instagram, Tinder, WhatsApp, Netflix, Spotify, Reddit, and Duolingo. The tool has not yet analyzed every existing application, but users can suggest new services for review (for example, Facebook).


Currently, the Guard website provides valuable information about the potential threats posed by each individual service, estimates their danger level, and reports on data-leak scandals in which the analyzed services were involved.

Each analyzed application receives a score in the form of a percentage and a letter grade. For example, Twitter received only 15% and a “D” rating, Instagram 21% and a “D”, and YouTube 37% and a “C”. Notably, Telegram received as much as 105% and the highest rating of “A+”.
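Guard does not publish the exact cutoffs it uses to turn a percentage into a letter grade. A minimal sketch of one plausible mapping, consistent with the scores reported above (15% and 21% falling in “D”, 37% in “C”, and anything above 100% earning “A+”), might look like this; the thresholds are assumptions for illustration, not Guard’s actual scheme.

```python
def letter_grade(score: float) -> str:
    """Map a privacy score (in percent) to a letter grade.

    The thresholds below are hypothetical; Guard does not
    publish its real cutoffs.
    """
    if score > 100:
        return "A+"
    if score >= 80:
        return "A"
    if score >= 60:
        return "B"
    if score >= 35:
        return "C"
    return "D"

# Scores reported in the article
print(letter_grade(15))   # Twitter
print(letter_grade(21))   # Instagram
print(letter_grade(37))   # YouTube
print(letter_grade(105))  # Telegram
```

Any monotonic set of thresholds that reproduces the published examples would serve equally well; the point is only that the percentage and the letter are two views of the same underlying score.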

Javi Rameerez, the Madrid-based developer who created Guard’s software, is interested in AI systems dedicated to natural language processing (NLP). He’s also interested in AI ethics.


Rameerez describes Guard as an academic experiment: it is actually his thesis in progress on AI and NLP. He says the aim is to teach machines how humans think about privacy. To do that, Guard needs input from lots and lots of humans.

The developer asks the audience to help train the AI by taking the quiz on the website and solving ethical dilemmas.

“Each data point helps the AI understand what’s acceptable and what’s not in regard to human privacy. The end goal is to teach machines so they can keep us safe in an increasingly dangerous internet,” the website says.

In Guard’s case we’ve got an example of an NLP system that stands to help us — both by alerting us to our apps’ problematic privacy policies and by giving us a chance to learn by developing an AI from the ground up.

About the Author

Valdis Kok

Security engineer; reverse engineering and memory forensics
