6.6 Example: Apple Music Usage
On August 18, 2015, the consumer market research firm MusicWatch released a study about Apple Music, a new streaming music service launched by Apple, Inc. The service gave users streaming access to a large catalog of music for $9.99 per month, after a free trial period lasting 3 months. At the time, there was much speculation over how many users would ultimately continue to pay $9.99 per month once the free trial ended.
MusicWatch’s study claimed, among other things, that
"Among people who had tried Apple Music, 48 percent reported they are not currently using the service."
This would suggest that almost half of people who had signed up for the free trial period of Apple Music were not interested in using it further and would likely not pay for it once the trial ended. If it were true, it would be a blow to the newly launched service.
But how did MusicWatch arrive at its number? It claimed to have surveyed 5,000 people in its study. Shortly before the MusicWatch survey was released, Apple claimed that about 11 million people had signed up for its new Apple Music service (because the service had just launched, everyone who had signed up was still in the free trial period). Clearly, 5,000 people do not make up the entire population, so we have but a small sample of users.
What question was MusicWatch trying to answer? It seems that they wanted to know the percentage of all people who had signed up for Apple Music who were still using the service. Because it would have been enormously expensive to survey all 11 million people, they had to resort to a much smaller sample of 5,000. Can they make inferences about the entire population from a sample of 5,000?
Let’s consider the three ingredients for inference:
1. Population: We are interested in the behavior of the entire Apple Music user base, which, according to Apple, is approximately 11 million people.

2. Sampling process: It's not clear from the press release how the study was conducted and the data collected. It was likely a telephone survey, in which people were randomly selected to be called and asked about their use of the service. Do you think this process led to a sample of respondents that is representative of the entire population of Apple Music users?

3. Model for the population: Given the small size of the sample relative to the entire population, the individuals in the survey can likely be thought of as independent of each other. In other words, it's unlikely that one respondent in the survey could have influenced another respondent.
If the sample is representative and the individuals are independent, we could use 48% as an estimate of the percentage of the population that no longer uses the service. The press release from MusicWatch did not indicate any measure of uncertainty, so we don't know how reliable that number is.
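As a rough illustration of the uncertainty MusicWatch could have reported, here is a sketch of a normal-approximation 95% confidence interval for the 48% figure. This assumes a simple random sample of independent respondents, neither of which we can verify from the press release:

```python
import math

# MusicWatch's reported figures
p_hat = 0.48   # proportion reporting they no longer use the service
n = 5000       # reported sample size

# Standard error of a sample proportion under simple random sampling
se = math.sqrt(p_hat * (1 - p_hat) / n)

# Normal-approximation 95% confidence interval
lower = p_hat - 1.96 * se
upper = p_hat + 1.96 * se
print(f"SE = {se:.4f}, 95% CI = ({lower:.3f}, {upper:.3f})")
```

Under these assumptions the interval runs from roughly 46.6% to 49.4%, i.e. a margin of error of under 1.5 percentage points. Keep this width in mind for what follows.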
Interestingly, soon after the MusicWatch survey was released, Apple released a statement to the publication The Verge saying that 79% of users who had signed up were still using the service (i.e. only 21% had stopped using it, as opposed to the 48% reported by MusicWatch). The difference between Apple and MusicWatch is that Apple has easy access to the entire population of Apple Music users. If Apple wants to know what percentage of users is still using the service, it simply needs to count the number of active users and divide by the total number of people who signed up. There is no uncertainty about that particular number, because no sampling was needed to estimate it (I assume Apple did not use sampling to estimate the percentage).
If we believe that Apple and MusicWatch were measuring the same thing in their analyses (and it’s not clear that they were), then it would suggest that MusicWatch’s estimate of the population percentage (48%) was quite far off from the true value (21%). What would explain this large difference?
Random variation. It's true that MusicWatch's survey was a small sample relative to the full population, but the sample was still large, with 5,000 people. Furthermore, the analysis was fairly simple (just taking the proportion of users still using the service), so the uncertainty associated with that estimate is unlikely to be large.
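To see why random variation alone is an implausible explanation, here is a small simulation sketch. It treats Apple's 21% figure as the true population value and assumes independent respondents, then repeatedly draws samples of 5,000 to see how far the sample proportion can stray:

```python
import random

random.seed(1)

true_p = 0.21   # Apple's reported figure, treated here as the true value
n = 5000        # MusicWatch's reported sample size

# Simulate 1,000 independent surveys of n respondents each, recording
# the sample proportion who stopped using the service in each survey
estimates = []
for _ in range(1000):
    stopped = sum(1 for _ in range(n) if random.random() < true_p)
    estimates.append(stopped / n)

print(f"min = {min(estimates):.3f}, max = {max(estimates):.3f}")
```

Across 1,000 simulated surveys, the sample proportion stays within a couple of percentage points of 21%; a value anywhere near 48% essentially never occurs under these assumptions.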
Selection bias. Recall that it's not clear how MusicWatch sampled its respondents, but it's possible that the sampling process led them to capture a set of respondents who were less inclined to use Apple Music. Beyond this, we can't really say more without knowing the details of the survey process.
Measurement differences. One thing we don’t know is how either MusicWatch or Apple defined “still using the service”. You could imagine a variety of ways to determine whether a person was still using the service. You could ask “Have you used it in the last week?” or perhaps “Did you use it yesterday?” Responses to these questions would be quite different and would likely lead to different overall percentages of usage.
Respondents are not independent. It’s possible that the survey respondents are not independent of each other. This would primarily affect the uncertainty about the estimate, making it larger than we might expect if the respondents were all independent. However, since we do not know what MusicWatch’s uncertainty about their estimate was in the first place, it’s difficult to tell if dependence between respondents could play a role.
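One standard way to quantify how dependence inflates uncertainty is the design effect for clustered samples, DEFF = 1 + (m - 1)ρ, where m is the average cluster size and ρ is the intra-cluster correlation. MusicWatch reported neither quantity; the values below are made up purely for illustration:

```python
import math

p_hat = 0.48   # MusicWatch's reported proportion
n = 5000       # reported sample size

# Hypothetical survey design: respondents recruited in clusters
# (both numbers below are made-up values for illustration)
m = 4          # assumed average cluster size
rho = 0.2      # assumed intra-cluster correlation

# Design effect: variance inflation factor due to within-cluster dependence
deff = 1 + (m - 1) * rho

se_independent = math.sqrt(p_hat * (1 - p_hat) / n)  # SE assuming independence
se_clustered = se_independent * math.sqrt(deff)      # SE accounting for dependence

print(f"DEFF = {deff:.2f}")
print(f"SE assuming independence: {se_independent:.4f}")
print(f"SE with clustering:       {se_clustered:.4f}")
```

Even with this hypothetical inflation, the standard error stays under one percentage point, so dependence among respondents also seems unlikely to explain a 27-point gap on its own.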