Google recently announced a new detection model that removes bad Android apps from the Google Play Store. How does this system work and what type of bad Android apps is it stopping?
The introduction of app stores was a paradigm shift from the traditional model of installing and maintaining software on an endpoint. In the past, people had to know where they could download an application, how to install it and how to keep it updated. With the app store distribution model, users have a dependable place to download applications and receive automatic updates to their devices.
Rather than evaluating whether a website or CD-ROM was legitimate and safe to use, mobile device users have to assess if the software published in the app store is trustworthy. Unfortunately, many people believe that all the applications in an app store are vetted and secure and do not consider the possibility of harmful apps.
The digital distribution model, first introduced in 2008 by Apple as a way for developers to reach iPhone users, allowed the company to screen iOS apps, promote iPhone software and develop a shared revenue model. The Apple App Store is accessible via a mobile app preinstalled on the devices.
Unlike Apple, Google did not maintain strict control over the applications developed for an Android device using the Android software development kit -- the Android operating system, which is based on the Linux kernel, is largely open source technology that runs on proprietary devices.
Google initially used basic checks to ensure that the apps populating the Google Play Store, formerly the Android Market, were legitimate. However, malware authors and security researchers found ways around those checks to inject malware or to get test apps published in the store. As a result, Google has been plagued by privacy and security questions about tracking code in mobile apps.
Recently, Google made significant improvements to ferret out bad Android apps and their developers, according to a 2018 blog by Andrew Ahn, the product manager of Google Play. In 2017, the company removed 700,000 apps that violated its policies, a 70% increase over the previous year. It also identified and disabled the accounts of 100,000 bad developers, including repeat offenders and developer networks.
A new detection model is automating more checks before an app is published in the Google Play Store, and it enables Google to run apps in test environments to verify whether they exhibit malicious behaviors. According to Ahn's blog, the checks look for "impersonation, inappropriate content, or malware -- through new machine learning models and techniques" in the apps. Google also removed copycats that impersonated well-known apps, apps with inappropriate content and potentially harmful applications (PHAs).
The company says it uses machine learning to scan devices, apps and data for potential threats; according to Ahn, this new service has cut PHA installs by 50% year over year.
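Google has not published the internals of these models, but one of the checks Ahn names, impersonation, can be illustrated with a toy heuristic. The sketch below is purely hypothetical and is not Google's system: it flags a submitted app title as a possible copycat when its name is suspiciously similar to a well-known title (the `KNOWN_APPS` list and the 0.8 threshold are invented for illustration).

```python
# Illustrative sketch only -- NOT Google's detection model.
# Flags potential copycat app names by fuzzy-matching against
# a (hypothetical) list of well-known app titles.
from difflib import SequenceMatcher

KNOWN_APPS = ["WhatsApp", "Instagram", "Candy Crush Saga"]  # assumed example list


def impersonation_score(candidate: str, known: str) -> float:
    """Return a 0..1 similarity ratio between two app names."""
    return SequenceMatcher(None, candidate.lower(), known.lower()).ratio()


def flag_copycats(candidate: str, threshold: float = 0.8) -> list:
    """Return the well-known titles a candidate name closely imitates."""
    return [k for k in KNOWN_APPS
            if impersonation_score(candidate, k) >= threshold]


# A typosquatted name (lowercase L for capital I) is caught:
print(flag_copycats("lnstagram"))       # -> ['Instagram']
# A genuinely distinct name passes:
print(flag_copycats("Weather Widget"))  # -> []
```

A production system would combine many such signals (icons, descriptions, requested permissions, runtime behavior) as features in trained classifiers rather than relying on a single string-similarity rule.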
Google has also developed detection models to better identify malware authors and stop them from publishing bad Android apps in the first place.