WhatsApp on Tuesday launched a dedicated resource hub called ‘Safety in India’ that highlights the safety measures and processes available on the platform. The hub emphasises the benefits of end-to-end encryption and details features including two-step verification and the ability to add a security layer that protects WhatsApp chats with fingerprint or facial recognition. The Meta-owned company has also listed India-specific processes it uses to help reduce abuse on WhatsApp. India is notably WhatsApp's biggest market, and moves such as this resource hub suggest the company is working to grow its user base while retaining the confidence of existing users.
Available as a standalone webpage, the ‘Safety in India’ resource hub lists topics around online safety, privacy, and security. It also includes material on ways WhatsApp users can safeguard themselves from cyber scams. The resource hub is aimed at building awareness about the various safety measures and built-in features that users can use on the platform, the company said.
WhatsApp has divided the information featured on the resource hub into different sub-sections. These include details about end-to-end encryption, the app's marquee feature, which is designed to protect a user's messages, photos, videos, voice messages, documents, status updates, and calls from third parties.
The hub also highlights product features such as two-step verification and the ability to lock WhatsApp with Touch ID or Face ID on iPhone and with a fingerprint lock on Android. Further, it covers forward limits and the additional limits on viral messages that WhatsApp first introduced in India to curb the circulation of fake news and misinformation.
WhatsApp's resource hub also highlights features such as account blocking and reporting, message-level reporting, and admin controls for groups. Additionally, it mentions recently introduced features including disappearing messages, View Once, and end-to-end encrypted backups.
Alongside detailing these user security and privacy-focused features, the resource hub outlines measures WhatsApp is required to take under India's IT rules to prevent abuse in the country. These include the appointment of a grievance officer and the release of monthly reports.
WhatsApp also claims that it has “zero tolerance” for child sexual abuse material (CSAM) and other sexual abuse material on the platform. It states that it reports violating content and accounts to the National Center for Missing and Exploited Children (NCMEC), which refers these CyberTips to law enforcement agencies globally and, in India, specifically to the National Crime Records Bureau (NCRB).
The resource hub also details the ways in which WhatsApp is addressing misinformation. The platform has, however, not yet succeeded in fully curbing fake news.
To help protect users against scams, spam, and impersonation, WhatsApp's hub offers various recommendations, including not sharing one-time passwords with unknown third parties and carefully scrutinising messages that ask for sensitive information, money, or other assistance.
The hub also includes a few clarifications about different WhatsApp-focussed incidents, including chat leaks and traceability. It concludes by noting that WhatsApp works closely with law enforcement agencies in the country to review their requests based on applicable laws and the app's policies.
“We hope this resource will equip users with the information they need to safeguard their privacy and navigate the internet safely,” said Abhijit Bose, Head of India, WhatsApp, in a prepared statement. “Over the years, we have made significant product changes to help enhance user security and privacy. Besides continuous product innovations, we have also consistently invested in state-of-the-art technology, artificial intelligence, data scientists, experts, and in processes, to support user safety.”
WhatsApp has a strong base of over 400 million users in India, so it makes sense for the company to have a dedicated hub detailing the measures and steps it takes to ensure user safety. Despite these efforts, however, WhatsApp continues to face criticism for not being able to entirely curb the spread of fake messages on the platform, which has in the past also been used to spread hatred.
Earlier this month, WhatsApp announced that it banned over two million accounts in the country in December. The company also noted that it received 528 grievance reports during the month.