Those conducting surveys are often told that larger samples are always preferable to smaller ones. This is not necessarily the case. Several factors, including the degree of variability in the population, the degree of accuracy desired, and the analysis to which the results will be subjected, should be considered when deciding on a sample size.
Degree of accuracy desired: Related to the subject of power analysis (which is beyond the scope of this site), this criterion requires the researcher to consider the acceptable margin of error and the confidence interval for the study. The online resource from Raosoft and Raven Analytics uses this principle.

Degree of variability (homogeneity/heterogeneity) in the population: As the degree of variability in the population increases, so should the size of the sample. The researcher's ability to take this into account depends on knowledge of the population parameters.

Number of different variables (subgroups) to be examined: As the number of subgroups to be examined increases, so should the size of the sample. For example, if a researcher wishes to examine differences between ethnicities for a given phenomenon, the sample must be large enough to allow valid comparison between each ethnic group.

Sampling ratio (sample size to population size): Generally speaking, the smaller the population, the larger the sampling ratio needed. For populations under 1,000, a minimum ratio of 30 percent (300 individuals) is advisable to ensure representativeness of the sample. For larger populations, such as a population of 10,000, a comparatively small minimum ratio of 10 percent (1,000 individuals) is required.

Response rate and oversampling: Are all the individuals in your sample likely to complete your questionnaire? If not, oversampling (sampling more individuals than would otherwise be necessary) may be required. The goal is to ensure that a given minimum raw count of respondents is met. While this is straightforward for a project using simple random sampling, it becomes increasingly complex as the number of variables to be examined grows, since the researcher must ensure that each critical subgroup attains the required number of responses.
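The oversampling arithmetic described above can be sketched in a few lines. The function name and example figures below are illustrative, not from the source:

```python
import math

def invites_needed(target_completes: int, response_rate: float) -> int:
    """How many individuals to sample so the expected number of
    completed questionnaires meets the target minimum raw count."""
    if not 0 < response_rate <= 1:
        raise ValueError("response_rate must be in (0, 1]")
    # Round up: inviting fewer would leave the expected count short.
    return math.ceil(target_completes / response_rate)

# e.g., 300 completes needed with an expected 25% response rate
print(invites_needed(300, 0.25))  # -> 1200
```

For a design with subgroups, the same calculation would be applied per subgroup using each subgroup's expected response rate.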
Statistical analysis desired: Specific minimum sample sizes are required for some statistical procedures, particularly those involving the investigation of multiple variables.

Other Online Resources
Sample Size Calculator (Raosoft)

Further Reading
Nardi, P.M. (2003). Doing survey research: A guide to quantitative methods. Boston, MA: Allyn and Bacon.

A practical question when designing a customer feedback survey or experiment is working out the required sample size. That is, what is the smallest number of data points the survey or experiment requires? There are three basic approaches: rules of thumb based on industry standards, working backwards from budget, and working backwards from confidence intervals. Each of these approaches is useful in some circumstances.

Working out sample size based on rules of thumb
Different industries have different rules of thumb when it comes to testing. These rules of thumb are not entirely made up; their logic relates to the confidence interval analyses described later in this article. Some examples of common rules of thumb are:
Working out sample size from costs
A second common approach is to identify the budget and work backwards, using the following formula:

Sample size = (Total budget - fixed costs) / cost per data point

This may sound crude, but the budget for a study reflects the appetite for risk of the organization that commissioned it, and, as discussed in the next section, this is at the heart of determining sample size.

Working out sample size from confidence intervals
One reason minimum sample size guidelines vary so much is that the true minimum sample size for any study depends on the signal-to-noise ratio of the data. If the data intrinsically contains a high level of noise, as in political polls and market research, then a large sample is required. In tightly controlled environments, such as those used in sensory studies, there is less noise, so smaller sample sizes are acceptable. When testing medical devices, the outcome is to see whether the device is problem free or not, rather than to estimate any specific rate, so an even smaller sample size is appropriate. One formal method for working out sample size is to have researchers specify the level of uncertainty they can accept, expressed as a confidence interval, and work out the sample size required to obtain it. This is the textbook solution to working out sample size, and there are many useful theoretical tools (e.g., power analysis). In practice, however, the approach only works when you have a good idea of the likely result and the likely uncertainty (i.e., sampling error), and this is often not the case outside the world of clinical trials.
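For estimating a proportion from a simple random sample, the confidence-interval approach reduces to a standard formula, n = z²p(1-p)/e², optionally with a finite population correction. The sketch below is one common way to compute it; the function name, defaults, and example numbers are illustrative, not from the source:

```python
import math

def sample_size(margin_of_error=0.05, confidence=0.95, p=0.5, population=None):
    """Sample size for estimating a proportion p within a given margin
    of error at a given confidence level (simple random sampling)."""
    # Two-sided z critical values for common confidence levels,
    # hard-coded to keep the sketch dependency-free.
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:
        # Finite population correction for small populations.
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(sample_size())                 # large population -> 385
print(sample_size(population=1000))  # N = 1,000 -> 278
```

Note that p = 0.5 is the conservative (worst-case) choice, since it maximizes p(1-p); the finite population correction is why small populations need a larger sampling ratio, as discussed earlier.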
How large a sample size is enough for experimental research?
Studies should involve sample sizes of at least 100 in each key group of interest. For example, if you are running an A/B test, you would typically want a minimum sample size of 200, with 100 in each group. An exception to this is when testing anything where the actual rate being tested is small.
How do you determine sample size in an experiment?In general, several factors must be known or estimated to calculate sample size: the effect size (usually the difference between two groups), the population standard deviation (for continuous data), the desired power of the experiment to detect the postulated effect, and the significance level.
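The factors listed above feed directly into a standard power calculation. As a rough sketch for a two-group comparison of means, using the normal approximation (the exact calculation uses the t distribution and gives slightly larger answers), with z-values hard-coded for common settings and all names illustrative:

```python
import math

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a difference in means
    of size delta, given population standard deviation sigma, two-sided
    significance level alpha, and desired power."""
    z_alpha = {0.05: 1.96, 0.01: 2.576}[alpha]    # two-sided critical value
    z_beta = {0.80: 0.8416, 0.90: 1.2816}[power]  # z for desired power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)

# Detect a 5-point difference when sigma = 10, alpha = 0.05, power = 0.80
print(n_per_group(delta=5, sigma=10))  # -> 63 per group
```

The formula makes the trade-offs explicit: halving the detectable effect size quadruples the required sample, while raising the desired power or lowering alpha increases it more modestly.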
Is 30 a good sample size for quantitative research?In most cases, we recommend 40 participants for quantitative studies.
Why is 30 considered a good sample size?A sample size of 30 often narrows the confidence interval around your estimate enough to support assertions about your findings. The higher your sample size, the more likely the sample will be representative of your population.