
An Introduction to t Tests | Definitions, Formula and Examples

Published on January 31, 2020 by Rebecca Bevans. Revised on June 22, 2023.

A t test is a statistical test that is used to compare the means of two groups. It is often used in hypothesis testing to determine whether a process or treatment actually has an effect on the population of interest, or whether two groups are different from one another.

  • The null hypothesis (H₀) is that the true difference between these group means is zero.
  • The alternate hypothesis (Hₐ) is that the true difference is different from zero.

Table of contents

  • When to use a t test
  • What type of t test should I use?
  • Performing a t test
  • Interpreting test results
  • Presenting the results of a t test
  • Other interesting articles
  • Frequently asked questions about t tests

A t test can only be used when comparing the means of two groups (a.k.a. pairwise comparison). If you want to compare more than two groups, or if you want to do multiple pairwise comparisons, use an ANOVA test or a post-hoc test.

The t test is a parametric test of difference, meaning that it makes the same assumptions about your data as other parametric tests. The t test assumes your data:

  • are independent
  • are (approximately) normally distributed
  • have a similar amount of variance within each group being compared (a.k.a. homogeneity of variance)

If your data do not fit these assumptions, you can try a nonparametric alternative to the t test, such as the Mann-Whitney U test for independent samples or the Wilcoxon signed-rank test for paired data.
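As an illustration (not from the original article; the data below are invented), SciPy provides both nonparametric alternatives:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.exponential(scale=10, size=30)  # skewed, clearly non-normal data
group_b = rng.exponential(scale=14, size=30)

# Independent samples that violate normality: Mann-Whitney U test
u_stat, p_independent = stats.mannwhitneyu(group_a, group_b)

# Paired, non-normal samples: Wilcoxon signed-rank test
before = rng.exponential(scale=10, size=30)
after = before + rng.exponential(scale=2, size=30)
w_stat, p_paired = stats.wilcoxon(before, after)
```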


When choosing a t test, you will need to consider two things: whether the groups being compared come from a single population or two different populations, and whether you want to test the difference in a specific direction.

What type of t test should I use?

One-sample, two-sample, or paired t test?

  • If the groups come from a single population (e.g., measuring before and after an experimental treatment), perform a paired t test . This is a within-subjects design .
  • If the groups come from two different populations (e.g., two different species, or people from two separate cities), perform a two-sample t test (a.k.a. independent t test ). This is a between-subjects design .
  • If there is one group being compared against a standard value (e.g., comparing the acidity of a liquid to a neutral pH of 7), perform a one-sample t test .
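The three choices above map onto three distinct test functions in most statistical software. A sketch in Python with SciPy (illustrative; the measurement values are invented):

```python
import numpy as np
from scipy import stats

before = np.array([63, 69, 76, 78, 80, 89])
after = np.array([77, 88, 90, 95, 96, 96])
other_group = np.array([81, 87, 77, 80, 76, 86])

# Paired t test: the same subjects measured twice (within-subjects design)
t_paired, p_paired = stats.ttest_rel(before, after)

# Two-sample (independent) t test: two separate groups (between-subjects design)
t_ind, p_ind = stats.ttest_ind(after, other_group)

# One-sample t test: one group compared against a fixed standard value
t_one, p_one = stats.ttest_1samp(before, popmean=70)
```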

One-tailed or two-tailed t test?

  • If you only care whether the two populations are different from one another, perform a two-tailed t test .
  • If you want to know whether one population mean is greater than or less than the other, perform a one-tailed t test.
For example, suppose you are comparing petal lengths between two flower species:

  • Your observations come from two separate populations (separate species), so you perform a two-sample t test.
  • You don’t care about the direction of the difference, only whether there is a difference, so you choose to use a two-tailed t test.

The t test estimates the true difference between two group means using the ratio of the difference in group means over the pooled standard error of both groups. You can calculate it manually using a formula, or use statistical analysis software.

T test formula

The formula for the two-sample t test (a.k.a. the Student’s t-test) is shown below.

\begin{equation*}t=\dfrac{\bar{x}_{1}-\bar{x}_{2}}{\sqrt{s^{2}\left(\dfrac{1}{n_{1}}+\dfrac{1}{n_{2}}\right)}}\end{equation*}

In this formula, t is the t value, x̄₁ and x̄₂ are the means of the two groups being compared, s² is the pooled variance of the two groups (so the denominator as a whole is the pooled standard error of the difference), and n₁ and n₂ are the number of observations in each group.

A larger t value shows that the difference between group means is greater than the pooled standard error, indicating a more significant difference between the groups.

You can compare your calculated t value against the values in a critical value chart (e.g., Student’s t table) to determine whether your t value is greater than what would be expected by chance. If so, you can reject the null hypothesis and conclude that the two groups are in fact different.
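To make the formula and the critical-value comparison concrete, here is a sketch (with invented numbers) that computes t by hand under the equal-variance assumption and cross-checks it against SciPy's built-in function:

```python
import math
from scipy import stats

x1 = [63, 69, 76, 78, 80, 89]
x2 = [81, 87, 77, 80, 76, 86]
n1, n2 = len(x1), len(x2)
m1 = sum(x1) / n1
m2 = sum(x2) / n2

# Pooled variance s^2 combining the two sample variances
s1sq = sum((v - m1) ** 2 for v in x1) / (n1 - 1)
s2sq = sum((v - m2) ** 2 for v in x2) / (n2 - 1)
s2 = ((n1 - 1) * s1sq + (n2 - 1) * s2sq) / (n1 + n2 - 2)

# t = (x̄1 - x̄2) / sqrt(s^2 (1/n1 + 1/n2))
t = (m1 - m2) / math.sqrt(s2 * (1 / n1 + 1 / n2))

# Two-tailed critical value at alpha = 0.05 with n1 + n2 - 2 degrees of freedom
crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
significant = abs(t) > crit

# Cross-check against the built-in Student's t test
t_check, p = stats.ttest_ind(x1, x2, equal_var=True)
```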

T test function in statistical software

Most statistical software (R, SPSS, etc.) includes a t test function. This built-in function will take your raw data and calculate the t value. It will then compare it to the critical value, and calculate a p -value . This way you can quickly see whether your groups are statistically different.

In your comparison of flower petal lengths, you decide to perform your t test using R. The code looks like this:
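The R call itself appears only as an image in the original article. An equivalent Welch two-sample t test (matching the default behavior of R's `t.test()`) can be sketched in Python; the petal measurements below are invented placeholders, not the article's data set:

```python
from scipy import stats

# Hypothetical petal lengths for two species (placeholder data)
species_a = [1.4, 1.5, 1.3, 1.6, 1.4, 1.5]
species_b = [4.7, 4.5, 4.9, 4.6, 4.8, 4.4]

# equal_var=False performs Welch's t test, as R's t.test() does by default
result = stats.ttest_ind(species_a, species_b, equal_var=False)
print(result.statistic, result.pvalue)
```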


If you perform the t test for your flower hypothesis in R, you will receive the following output:

T-test output in R

The output provides:

  • An explanation of what is being compared, called data in the output table.
  • The t value : -33.719. Note that it’s negative; this is fine! In most cases, we only care about the absolute value of the difference, or the distance from 0. It doesn’t matter which direction.
  • The degrees of freedom : 30.196. Degrees of freedom is related to your sample size, and shows how many ‘free’ data points are available in your test for making comparisons. The greater the degrees of freedom, the better your statistical test will work.
  • The p value : 2.2e-16 (i.e., 2.2 × 10⁻¹⁶, a vanishingly small number). This describes the probability that you would see a t value as large as this one by chance if there were truly no difference between the groups.
  • A statement of the alternative hypothesis ( H a ). In this test, the H a is that the difference is not 0.
  • The 95% confidence interval . This is the range of numbers within which the true difference in means will be 95% of the time. This can be changed from 95% if you want a larger or smaller interval, but 95% is very commonly used.
  • The mean petal length for each group.


Presenting the results of a t test

When reporting your t test results, the most important values to include are the t value , the p value , and the degrees of freedom for the test. These will communicate to your audience whether the difference between the two groups is statistically significant (a.k.a. that it is unlikely to have happened by chance).

You can also include the summary statistics for the groups being compared, namely the mean and standard deviation . In R, the code for calculating the mean and the standard deviation from the data looks like this:

flower.data %>% group_by(Species) %>% summarize(mean_length = mean(Petal.Length), sd_length = sd(Petal.Length))
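For readers working in Python instead of R, an equivalent of the dplyr summary above can be written with pandas (the frame and column names here mirror the R code; the values are placeholders):

```python
import pandas as pd

# Placeholder data mirroring the structure of flower.data in the R example
flower_data = pd.DataFrame({
    "Species": ["A"] * 3 + ["B"] * 3,
    "Petal.Length": [1.4, 1.5, 1.3, 4.7, 4.5, 4.9],
})

# group_by + summarize equivalent: named aggregation over each species
summary = (
    flower_data.groupby("Species")["Petal.Length"]
    .agg(mean_length="mean", sd_length="std")
)
print(summary)
```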

In our example, you would report the results like this: the difference in mean petal length between the two species was statistically significant (t(30.196) = −33.719, p < 0.001).

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis

Methodology

  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

A t-test is a statistical test that compares the means of two samples . It is used in hypothesis testing , with a null hypothesis that the difference in group means is zero and an alternate hypothesis that the difference in group means is different from zero.

A t-test measures the difference in group means divided by the pooled standard error of the two group means.

In this way, it calculates a number (the t-value) illustrating the magnitude of the difference between the two group means being compared, and estimates the likelihood that this difference exists purely by chance (p-value).

Your choice of t-test depends on whether you are studying one group or two groups, and whether you care about the direction of the difference in group means.

If you are studying one group, use a paired t-test to compare the group mean over time or after an intervention, or use a one-sample t-test to compare the group mean to a standard value. If you are studying two groups, use a two-sample t-test .

If you want to know only whether a difference exists, use a two-tailed test . If you want to know if one group mean is greater or less than the other, use a left-tailed or right-tailed one-tailed test .

A one-sample t-test is used to compare a single population to a standard value (for example, to determine whether the average lifespan of people in a specific town is different from the country average).

A paired t-test is used to compare a single population before and after some experimental intervention or at two different points in time (for example, measuring student performance on a test before and after being taught the material).

A t-test should not be used to measure differences among more than two groups, because the error structure for a t-test will underestimate the actual error when many groups are being compared.

If you want to compare the means of several groups at once, it’s best to use another statistical test such as ANOVA or a post-hoc test.

Cite this Scribbr article


Bevans, R. (2023, June 22). An Introduction to t Tests | Definitions, Formula and Examples. Scribbr. Retrieved August 14, 2024, from https://www.scribbr.com/statistics/t-test/


Microbe Notes

T-test: Definition, Formula, Types, Applications

The t-test is a test in statistics that is used for testing hypotheses regarding the mean of a small sample drawn from a population when the standard deviation of the population is not known.

  • The t-test is used to determine if there is a significant difference between the means of two groups.
  • The t-test is used for hypothesis testing to determine whether a process has an effect on both samples or if the groups are different from each other.
  • Basically, the t-test allows the comparison of the mean of two sets of data and the determination if the two sets are derived from the same population.
  • After the null and alternative hypotheses are established, t-test formulas are used to calculate values that are then compared with standard values.
  • Based on the comparison, the null hypothesis is either rejected or accepted.
  • The T-test is similar to other tests like the z-test and f-test except that t-test is usually performed in cases where the sample size is small (n≤30).


T-test Formula

T-tests can be performed manually using a formula or through some software.


One sample t-test (one-tailed t-test)

  • One sample t-test is a statistical test where the critical area of a distribution is one-sided so that the alternative hypothesis is accepted if the population parameter is either greater than or less than a certain value, but not both.
  • In the case where the t-score of the sample being tested falls into the critical area of a one-sided test, the alternative hypothesis is to be accepted instead of the null hypothesis.
  • A one-tailed test is used to determine if the population is either lower than or higher than some hypothesized value.
  • A one-tailed test is appropriate if the estimated value may depart from the reference value in one particular direction only, either left or right, but not both.

One Sample T-Test Formula
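The one-sample formula shown in the image above takes the standard form (reconstructed here, with x̄ the sample mean, µ the hypothesized value, s the sample standard deviation, and n the sample size):

```latex
t = \frac{\bar{x} - \mu}{s / \sqrt{n}}
```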

  • For this test, the null hypothesis states that there is no difference between the true mean and the assumed value, whereas the alternative hypothesis states that the assumed value is either greater than or less than the true mean, but not both.
  • For instance, if H₀: µ = µ₀ and Hₐ: µ < µ₀, such a test would be a one-sided test or, more precisely, a left-tailed test.
  • Under such conditions, there is one rejection region only, on the left tail of the distribution.
  • If we consider µ₀ = 100 and our sample mean deviates significantly from 100 in the downward direction, H₀ (the null hypothesis) is rejected. Otherwise, H₀ is accepted at the given level of significance.
  • Similarly, if in another case H₀: µ = µ₀ and Hₐ: µ > µ₀, this is also a one-tailed test (right-tailed), and the rejection region is present on the right tail of the curve.
  • In this case, when µ₀ = 100 and the sample mean deviates significantly from 100 in the upward direction, H₀ is rejected; otherwise, it is accepted.

Two sample t-test (two-tailed t-test)

  • Two sample t-test is a method in which the critical area of a distribution is two-sided and the test is performed to determine whether the population parameter of the sample is greater than or less than a specific range of values.
  • A two-tailed test rejects the null hypothesis in cases where the sample mean is significantly higher or lower than the assumed value of the mean of the population.
  • This type of test is appropriate when the null hypothesis is some assumed value, and the alternative hypothesis is set as the value not equal to the specified value of the null hypothesis.

Two Sample T-Test Formula

  • The two-tailed test is appropriate when we have H₀: µ = µ₀ and Hₐ: µ ≠ µ₀, which may mean µ > µ₀ or µ < µ₀.
  • Therefore, in a two-tailed test, there are two rejection regions, one in either direction, toward each tail of the curve.
  • Suppose we take µ₀ = 100; if our sample mean deviates significantly from 100 in either direction, the null hypothesis can be rejected. But if the sample mean does not deviate considerably from µ₀, the null hypothesis is accepted.

Independent t-test

  • An Independent t-test is a test used for judging the means of two independent groups to determine the statistical evidence to prove that the population means are significantly different.
  • Subjects in each sample are also assumed to come from different populations, that is, subjects in “Sample A” are assumed to come from “Population A” and subjects in “Sample B” are assumed to come from “Population B.”
  • The populations are assumed to differ only in the level of the independent variable.
  • Thus, any difference found between the sample means should also exist between population means, and any difference between the population means must be due to the difference in the levels of the independent variable.
  • Based on this information, a curve can be plotted to determine the effect of an independent variable on the dependent variable and vice versa.

T-test Applications

  • The T-test compares the mean of two samples, dependent or independent.
  • It can also be used to determine if the sample mean is different from the assumed mean.
  • T-test has an application in determining the confidence interval for a sample mean.
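As a sketch of that last application (with invented numbers), a confidence interval for a sample mean is built from the t distribution when the population standard deviation is unknown:

```python
import math
from scipy import stats

sample = [63, 69, 76, 78, 80, 89]
n = len(sample)
mean = sum(sample) / n
s = math.sqrt(sum((v - mean) ** 2 for v in sample) / (n - 1))

# 95% CI: mean ± t_crit * s / sqrt(n), with n - 1 degrees of freedom
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * s / math.sqrt(n)
ci = (mean - half_width, mean + half_width)
```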

References and Sources

  • R. Kothari (1990) Research Methodology. Vishwa Prakasan. India.


Korean J Anesthesiol. 2015 Dec; 68(6).

T test as a parametric statistic

Tae Kyun Kim

Department of Anesthesia and Pain Medicine, Pusan National University School of Medicine, Busan, Korea.

In statistical tests, the probability distribution of the statistic is important. When samples are drawn from a population N(µ, σ²) with a sample size of n, the distribution of the sample mean X̄ should be a normal distribution N(µ, σ²/n). Under the null hypothesis µ = µ₀, the distribution of the statistic z = (X̄ − µ₀)/(σ/√n) should be the standard normal distribution. When the variance of the population is not known, replacement with the sample variance s² is possible. In this case, the statistic (X̄ − µ₀)/(s/√n) follows a t distribution (n − 1 degrees of freedom). An independent-group t test can be carried out for a comparison of means between two independent groups, with a paired t test for paired data. As the t test is a parametric test, samples should meet certain preconditions, such as normality, equal variances, and independence.

Introduction

A t test is a type of statistical test that is used to compare the means of two groups. It is one of the most widely used statistical hypothesis tests in pain studies [ 1 ]. There are two types of statistical inference: parametric and nonparametric methods. Parametric methods refer to a statistical technique in which one defines the probability distribution of probability variables and makes inferences about the parameters of the distribution. In cases in which the probability distribution cannot be defined, nonparametric methods are employed. T tests are a type of parametric method; they can be used when the samples satisfy the conditions of normality, equal variance, and independence.

T tests can be divided into two types. There is the independent t test, which can be used when the two groups under comparison are independent of each other, and the paired t test, which can be used when the two groups under comparison are dependent on each other. T tests are usually used in cases where the experimental subjects are divided into two independent groups, with one group treated with A and the other group treated with B. Researchers can acquire two types of results for each group (i.e., prior to treatment and after the treatment): preA and postA, and preB and postB. An independent t test can be used for an intergroup comparison of postA and postB or for an intergroup comparison of changes in preA to postA (postA-preA) and changes in preB to postB (postB-preB) ( Table 1 ).

Treatment A                          Treatment B
ID    preA    postA    ΔA            ID    preB    postB    ΔB
1     63      77       14            11    81      101      20
2     69      88       19            12    87      103      16
3     76      90       14            13    77      107      30
4     78      95       17            14    80      114      34
5     80      96       16            15    76      116      40
6     89      96       7             16    86      116      30
7     90      102      12            17    98      116      18
8     92      104      12            18    87      120      33
9     103     110      7             19    105     120      15
10    112     115      3             20    69      127      58

ID: individual identification; preA, preB: before treatment A or B; postA, postB: after treatment A or B; ΔA, ΔB: difference between before and after treatment A or B.

On the other hand, paired t tests are used in different experimental environments. For example, the experimental subjects are not divided into two groups, and all of them are treated initially with A. The amount of change (postA-preA) is then measured for all subjects. After all of the effects of A disappear, the subjects are treated with B, and the amount of change (postB-preB) is measured for all of the subjects. A paired t test is used in such crossover test designs to compare the amount of change of A to that of B for the same subjects ( Table 2 ).

Treatment A                          Treatment B
ID    preA    postA    ΔA            ID    preB    postB    ΔB
1     63      77       14            1     73      103      30
2     69      88       19            2     74      104      30
3     76      90       14            3     76      107      31
4     78      95       17            4     84      108      24
5     80      96       16            5     84      110      26
6     89      96       7             6     86      110      24
7     90      102      12            7     92      113      21
8     92      104      12            8     95      114      19
9     103     110      7             9     103     118      15
10    112     115      3             10    115     120      5

(A wash-out period separates treatment A and treatment B for the same subjects.)

Statistics and Probability

Statistics is basically about probabilities. A statistical conclusion of a large or small difference between two groups is not based on an absolute standard but is rather an evaluation of the probability of an event. For example, a clinical test is performed to determine whether or not a patient has a certain disease. If the test results are either higher or lower than the standard, clinicians will determine that the patient has the disease, despite the fact that the patient may or may not actually have it. This conclusion rests on a statistical idea: because such test results are statistically rare in normal people, it is more valid to conclude that the patient has the disease than to declare the patient a rare exception among people without it.

The test results and the probability distribution of the results must be known in order for the results to be determined as statistically rare. The criteria for clinical indicators have been established based on data collected from an entire population or at least from a large number of people. Here, we examine a case in which a clinical indicator exhibits a normal distribution with a mean of µ and a variance of σ². If a patient's test result is x, is this statistically rare against the criteria (e.g., 5 or 1%)? Probability is represented as the surface area in a probability distribution, and the z score that represents either 5 or 1%, near the margins of the distribution, becomes the reference value. The test result x can be determined to be statistically rare compared to the reference probability if it lies in a more marginal area than the z score, that is, if the value of x is located in the marginal ends of the distribution ( Fig. 1 ).

(Figure 1; image file kjae-68-540-g001.jpg)

This is done to compare one individual's clinical indicator value. This however raises the question of how we would compare the mean of a sample group (consisting of more than one individual) against the population mean. Again, it is meaningless to compare each individual separately; we must compare the means of the two groups. Thus, do we make a statistical inference using only the distribution of the clinical indicators of the entire population and the mean of the sample? No. In order to infer a statistical possibility, we must know the indicator of interest and its probability distribution. In other words, we must know the mean of the sample and the distribution of the mean. We can then determine how far the sample mean varies from the population mean by knowing the sampling distribution of the means.

Sampling Distribution (Sample Mean Distribution)

The sample mean we obtain from a study is one of the means of all possible samples that could be drawn from the population. The sample mean in a study is acquired from a real experiment; how, then, could we know the distribution of the means of all possible samples, including the studied sample? Do we need to repeat the experiment over and over again? A simulation in which samples are drawn repeatedly from a population is shown in Fig. 2 . If samples of size n are drawn from a population with normal distribution N(µ, σ²), the sampling distribution is a normal distribution with a mean of µ and a variance of σ²/n. The sample size affects the shape of the sampling distribution: the distribution curve becomes a narrower bell curve with a smaller variance as n increases, because the variance of the sampling distribution is σ²/n. The formation of a sampling distribution is well explained in Lee et al. [ 2 ] in the form of a figure.

(Figure 2; image file kjae-68-540-g002.jpg)
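The repeated-sampling simulation described above can be sketched in Python (illustrative code, not the paper's): draw many samples of size n from N(µ, σ²) and check that the sample means have variance close to σ²/n.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n, n_samples = 150.0, 5.0, 6, 100_000

# Draw many samples of size n and take the mean of each
means = rng.normal(mu, sigma, size=(n_samples, n)).mean(axis=1)

# The sampling distribution should be approximately N(mu, sigma^2 / n)
print(means.mean())  # close to 150
print(means.var())   # close to 25/6
```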

T Distribution

Now that the sampling distribution of the means is known, we can locate the position of the mean of a specific sample against the distribution data. However, one problem remains. As we noted earlier, the sampling distribution exhibits a normal distribution with a variance of σ²/n, but in reality we do not know σ², the variance of the population. Therefore, we use the sample variance instead of the population variance to determine the sampling distribution of the mean. The sample variance is defined as follows:
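The definition referenced above is rendered as an image in the original; it is the usual unbiased sample variance:

```latex
s^{2} = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_{i} - \bar{x}\right)^{2}
```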

In such cases, in which the sample variance is used, the sampling distribution follows a t distribution that depends on the degrees of freedom of each sample (n − 1) rather than a normal distribution ( Fig. 3 ).

(Figure 3; image file kjae-68-540-g003.jpg)

Independent T test

A t test is also known as Student's t test. It is a statistical analysis technique that was developed by William Sealy Gosset in 1908 as a means to control the quality of dark beers. A t test used to test whether there is a difference between two independent sample means is no different from a t test used when there is only one sample (as mentioned earlier). However, even if there is no difference between the two population means, the difference between the two sample means will rarely be exactly zero; it will only be close to zero. Therefore, in such cases, an additional statistical test should be performed to verify whether the observed difference could be said to be equal to zero.

Let's extract two independent samples from a population that displays a normal distribution and compute the difference between the means of the two samples. The difference between the sample means will not always be zero, even if the samples are extracted from the same population, because the sampling process is randomized, which results in a sample with a variety of combinations of subjects. We extracted two samples of size 6 from a population N(150, 5²) and found the difference in the means. If this process is repeated 1,000 times, the sampling distribution exhibits the shape illustrated in Fig. 4 . When the distribution is displayed as a histogram with a density line, it is almost identical to the theoretical sampling distribution: N(0, 2 × 5²/6) ( Fig. 4 ).
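This experiment is easy to reproduce; a sketch in Python (illustrative code, not the paper's) draws pairs of samples of size 6 from N(150, 5²) and records the difference in means, whose theoretical distribution is N(0, 2 × 5²/6):

```python
import numpy as np

rng = np.random.default_rng(1)
# The paper repeats the process 1,000 times; more repetitions give a stabler check
mu, sigma, n, reps = 150.0, 5.0, 6, 100_000

a = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
b = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
diffs = a - b

# Theoretical distribution of the difference: N(0, 2 * sigma^2 / n)
print(diffs.mean())  # close to 0
print(diffs.var())   # close to 2 * 25 / 6
```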

(Figure 4; image file kjae-68-540-g004.jpg)

However, it is difficult to define the distribution of the difference in the two sample means because the variance of the population is unknown. If we use the variance of the sample instead, the distribution of the difference of the sample means would follow a t distribution. It should be noted, however, that the two samples display a normal distribution and have an equal variance because they were independently extracted from an identical population that has a normal distribution.

Under the assumption that the two samples display a normal distribution and have an equal variance, the t statistic is as follows:

The population mean difference (µ₁ − µ₂) was assumed to be 0; thus:

The population variance was unknown and so a pooled variance of the two samples was used:

However, if the population variance is not equal, the t statistic of the t test would be

and the degrees of freedom are calculated with the Welch-Satterthwaite equation.
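The t statistics referenced in this section appear as images in the original article. Reconstructed in standard notation (with the pooled variance s²ₚ, and µ₁ − µ₂ = 0 under the null hypothesis), the equal-variance form, the pooled variance, and the unequal-variance (Welch) form are:

```latex
t = \frac{\bar{X}_{1} - \bar{X}_{2}}{\sqrt{s_{p}^{2}\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)}},
\qquad
s_{p}^{2} = \frac{(n_{1}-1)s_{1}^{2} + (n_{2}-1)s_{2}^{2}}{n_{1}+n_{2}-2},
\qquad
t_{\mathrm{Welch}} = \frac{\bar{X}_{1} - \bar{X}_{2}}{\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}}
```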

It is apparent that if n₁ and n₂ are sufficiently large, the t statistic resembles a normal distribution ( Fig. 3 ).

A statistical test is performed to verify the position of the difference in the sample means in the sampling distribution of the mean ( Fig. 4 ). It is statistically very rare for the difference in two sample means to lie on the margins of the distribution. Therefore, if the difference does lie on the margins, it is statistically significant to conclude that the samples were extracted from two different populations, even if they were actually extracted from the same population.

Paired T test

Paired t tests can be categorized as a type of t test for a single sample because they test the difference between two paired results. If there is no difference between the two treatments, the difference in the results would be close to zero; hence, the hypothesized difference in the sample means used for a paired t test is 0.

Let's go back to the sampling distribution that was used in the independent t test discussed earlier. The variance of the difference between two independent sample means was represented as the sum of each variance. If the samples were not independent, the variance of the difference of two variables A and B, Var (A-B), can be shown as follows,
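The variance formula referenced above (an image in the original) is the standard identity for the difference of two correlated variables:

```latex
\operatorname{Var}(A-B) = \sigma_{1}^{2} + \sigma_{2}^{2} - 2\rho\,\sigma_{1}\sigma_{2}
```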

where σ₁² is the variance of variable A, σ₂² is the variance of variable B, and ρ is the correlation coefficient for the two variables. In an independent t test, the correlation coefficient is 0 because the two groups are independent. Thus, it is logical to show the variance of the difference between the two variables simply as the sum of the two variances. However, for paired variables, the correlation coefficient may not equal 0. Thus, the t statistic for two dependent samples must be different, meaning the following t statistic,

must be changed. First, the samples are paired; thus, n₁ = n₂ = n, and their variance can be represented as s₁² + s₂² − 2ρs₁s₂, considering the correlation coefficient. Therefore, the t statistic for a paired t test is as follows:
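The paired t statistic, shown as an image in the original, can be reconstructed as follows, where d̄ is the mean of the paired differences (equivalently, t = d̄/(s_d/√n) with s_d the standard deviation of the differences):

```latex
t = \frac{\bar{d}}{\sqrt{\dfrac{s_{1}^{2}+s_{2}^{2}-2\rho\,s_{1}s_{2}}{n}}}
```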

In this equation, the t statistic is increased if the correlation coefficient is greater than 0 because the denominator becomes smaller, which increases the statistical power of the paired t test compared to that of an independent t test. On the other hand, if the correlation coefficient is less than 0, the statistical power is decreased and becomes lower than that of an independent t test. It is important to note that if one misunderstands this characteristic and uses an independent t test when the correlation coefficient is less than 0, the generated results would be incorrect, as the process ignores the paired experimental design.

Assumptions

As previously explained, if samples are extracted from a population that displays a normal distribution but the population variance is unknown, we can use the sample variance to examine the sampling distribution of the mean, which will resemble a t distribution. Therefore, in order to reach a statistical conclusion about a sample mean with a t distribution, certain conditions must be satisfied: the two samples for comparison must be independently sampled from the same population, satisfying the conditions of normality, equal variance, and independence.

The Shapiro-Wilk test or the Kolmogorov-Smirnov test can be performed to verify the assumption of normality. If the condition of normality is not met, the Wilcoxon rank sum test (Mann-Whitney U test) for independent samples or the Wilcoxon signed rank test for paired samples can be used as a nonparametric alternative.
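That workflow can be sketched in Python with SciPy. The skewed (lognormal) data below are simulated so that normality is clearly violated; treating the two samples as paired in the last line is purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Heavily skewed samples, so normality should be rejected.
a = rng.lognormal(mean=0.0, sigma=1.0, size=50)
b = rng.lognormal(mean=0.5, sigma=1.0, size=50)

# Shapiro-Wilk test (H0: the sample comes from a normal distribution).
p_norm_a = stats.shapiro(a).pvalue
p_norm_b = stats.shapiro(b).pvalue

# Rank-based alternatives when normality fails:
p_independent = stats.mannwhitneyu(a, b).pvalue  # Wilcoxon rank sum / Mann-Whitney U
p_paired = stats.wilcoxon(a, b).pvalue           # Wilcoxon signed rank (equal n here)
```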

The condition of equal variance is verified using Levene's test or Bartlett's test. If the condition of equal variance is not met, a nonparametric test can be performed, or the following statistic, which approximately follows a t distribution, can be used:

t = (X̄1 − X̄2) / √(s1²/n1 + s2²/n2)

However, this statistic has a different number of degrees of freedom, calculated with the Welch-Satterthwaite equation [3, 4]:

df = (s1²/n1 + s2²/n2)² / [(s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1)]
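This procedure — Levene's test, then Welch's statistic when variances differ — can be sketched in Python with SciPy; the unequal-variance samples are simulated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(10.0, 1.0, size=30)   # small variance
b = rng.normal(12.0, 5.0, size=40)   # large variance

# Levene's test (H0: equal variances).
p_levene = stats.levene(a, b).pvalue

# Welch's t test: SciPy's equal_var=False uses the statistic above.
welch = stats.ttest_ind(a, b, equal_var=False)

# Welch-Satterthwaite degrees of freedom, computed by hand.
va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
```

The hand-computed df always lies between min(n1 − 1, n2 − 1) and n1 + n2 − 2, which is a quick sanity check on the formula.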

Owing to user-friendly statistical software programs, the rich pool of statistical information on the Internet, and expert advice from statisticians at every hospital, using and processing statistical data is no longer an intractable task. However, it remains the researcher's responsibility to design experiments that fulfill all of the conditions of the chosen statistical methods and to ensure that the statistical assumptions are appropriate. In particular, parametric statistical methods yield sound statistical conclusions only when their assumptions are fully met.

Some researchers regard these statistical assumptions as inconvenient and neglect them. Some statisticians even argue against the basic assumptions, contending on the basis of the central limit theorem that sampling distributions display a normal distribution regardless of whether the population distribution is normal, and that t tests therefore have sufficient statistical power even when the condition of normality is not satisfied [5]. Moreover, they contend that the condition of equal variance is not so strict, because even a ninefold difference in variance merely changes the α level from 0.05 to 0.06 [6]. However, the arguments regarding the condition of normality, and the limit to which the condition of equal variance may be violated, are still bones of contention. Researchers who unquestioningly accept these arguments and neglect the basic assumptions of a t test when submitting papers will face critical comments from editors, and it will be difficult to persuade the editors otherwise regardless of how solid the evidence in the paper is. Hence, researchers should test the basic statistical assumptions thoroughly and employ widely accepted methods so as to draw valid statistical conclusions.

The results of independent and paired t tests of the examples are illustrated in Tables 1 and 2. The tests were conducted using the SPSS Statistics Package (IBM® SPSS® Statistics 21, SPSS Inc., Chicago, IL, USA).

Independent t test (Table 1)


First, we examine normality by checking the results of the Kolmogorov-Smirnov or Shapiro-Wilk test in the second table. Because the P value is greater than 0.05, the samples satisfy the condition of normality. Next, we check the results of Levene's test for equality of variance. The P value is again greater than 0.05; hence, the condition of equal variance is also met. Finally, we read the significance probability on the "equal variance assumed" line. If the condition of equal variance were not met (i.e., if the P value for Levene's test were less than 0.05), we would instead reach a conclusion from the significance probability on the "equal variance not assumed" line, or perform a nonparametric test.

Paired t test (Table 2)


A paired t test is equivalent to a one-sample t test applied to the differences. Therefore, we test the normality of the difference in the amount of change between treatment A and treatment B (ΔA − ΔB). Normality is verified from the results of the Kolmogorov-Smirnov and Shapiro-Wilk tests, as shown in the second table. In conclusion, there is a significant difference between the two treatments (P < 0.001).
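The equivalence can be verified directly: running SciPy's paired test and a one-sample test on the differences (simulated data, used only to demonstrate the identity) gives identical statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
before = rng.normal(50.0, 10.0, size=20)
after = before + rng.normal(-3.0, 4.0, size=20)  # simulated treatment effect

paired = stats.ttest_rel(before, after)
one_sample = stats.ttest_1samp(before - after, 0.0)

print(np.isclose(paired.statistic, one_sample.statistic))   # True
print(np.isclose(paired.pvalue, one_sample.pvalue))         # True
```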

Application of t-test to analyze the small sample of Statistical Research

International Journal of Innovations in Engineering and Science (IJIES), ISSN: 2456-3463

In this paper we discuss the significance of the t-test for analyzing small samples, and survey several applications of the t-test in statistics and research. The paper introduces hypothesis testing, focusing on the paired t-test, and explains how the paired t-test is applied in statistical analyses.

Related Papers

Independent Postgraduate Studies Research Centered on Educational Administration with Research, Training and Development Emphasis.

Shane J Charbonnét, Ph.D.: HCD & UXDE

Preliminary Chapters One and Two: a t-test examines the difference(s) between variables within a set (Wrench, Thomas-Maddox, Richmond, and McCroskey, 2013; 2008). The two basic types are the independent samples t-test and the paired t-test. This paper interprets PSPP results: data set 1 via Analyze - Compare Means - Independent Samples t-Test, and data set 2 via Analyze - Compare Means - Paired Samples t-Test. It is a 3-page report identifying the significant difference between the means of males/females and time1/time2 on self-esteem. An independent samples t-test examines "one nominal variable with two categories (two independent groups) and their scores on one dependent interval/ratio variable" (Wrench, Thomas-Maddox, Richmond, and Wrench, 2013; 2008, p. 372). Data set 1 comprises 10 research participants, with gender as the independent (nominal) variable and self-esteem score as the dependent (interval/ratio) variable. Each person was categorized according to his or her observed self-esteem score after a 2-minute inspirational speech; the survey classes are male (coded 1) and female (coded 2). The central objective of the test is to determine whether group membership influences scores on the dependent variable. In the PSPP results, the Group Statistics table shows N = 5 for each of Var0001 and Var0002 (10 collectively), with Var0001 mean = 56.80 and Var0002 mean = 52.00. The standard deviation (square root of the variance) is 3.12 for Var0001 and 3.16 for Var0002.
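Using only the summary statistics quoted above (means 56.80 and 52.00, standard deviations 3.12 and 3.16, n = 5 per group), the independent t test can be reproduced in Python from summary statistics alone; the exact PSPP output may differ slightly because these inputs are rounded.

```python
from scipy import stats

# Summary statistics reported above (n = 5 per group).
res = stats.ttest_ind_from_stats(
    mean1=56.80, std1=3.12, nobs1=5,
    mean2=52.00, std2=3.16, nobs2=5,
)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
```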


Advances and Applications in Statistics

Mowafaq Al-kassab

JOURNAL OF ADVANCES IN MATHEMATICS

The t-statistic is the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error. The term "t-statistic" is abbreviated from "hypothesis test statistic". It was first derived as a posterior distribution in 1876 by Helmert and Lüroth. The purpose of this research is to study the t-test, especially the one-sample t-test, to determine whether sample data come from a given population. The grade point averages (GPA) of students in the second, third, and fourth grades of the Department of Mathematics Education, Tishk International University, are used. The one-sample t-test is applied to the GPA of the students for the second, third, and fourth grades respectively, in addition to the overall average scores for the three grades. The 95% confidence interval for the true population average is also constructed.
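The one-sample procedure described here is straightforward to sketch in Python. The GPA-like values and the hypothesized mean of 3.0 below are illustrative assumptions, not the paper's actual data.

```python
import numpy as np
from scipy import stats

# Hypothetical GPA sample (not the paper's data).
gpa = np.array([2.8, 3.1, 2.9, 3.4, 3.0, 2.7, 3.2, 3.3, 2.6, 3.0])
mu0 = 3.0   # hypothesized population mean (assumed for the example)

res = stats.ttest_1samp(gpa, mu0)

# 95% confidence interval for the true population mean.
m, se = gpa.mean(), stats.sem(gpa)
lo, hi = stats.t.interval(0.95, df=len(gpa) - 1, loc=m, scale=se)
```

Here the sample mean happens to equal 3.0 exactly, so the t statistic is 0 and the confidence interval is centered on the hypothesized value.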

K Y PUBLICATION

Ramnath Takiar

The present study was designed to evaluate the performance of the t-test as compared to the Z-test in testing significant or non-significant differences between two sample means. The data for the study came from generating four normal populations (A, B, C and D) and then drawing 30 samples each from those populations. Overall, the study covers 14,400 comparisons to test for significant differences and 18,240 comparisons for non-significant differences between means. It is surprising to note that at α = 5%, the t-test was able to pick up only 29.3% of the expected significant differences between the sample means of populations C and D, which is quite low. For populations A and B, the validity of the test was relatively better, at 49.6%. In view of the low validity observed at α = 5%, validity was further explored at higher α levels, namely 10%, 15% and 20%. With the rise in α, validity increased: for populations C and D, at α = 20% the validity of the t-test rose to 54.3%, and for populations A and B it rose to 76.1%. This suggests that for testing significant or non-significant differences between means, especially for small samples, the α level can be raised from 5% to 20% so that more valid mean comparisons by t-test can be obtained. In view of the Z-test performing better than the t-test in correctly picking up significant differences, and not lagging behind much in picking up…

jorge ponce

(HCM) Võ Thái Thu Hà

Peter Samuels

Paired Samples t-test statstutor worksheet. Available at: http://www.statstutor.ac.uk/resources/uploaded/pairedsamplesttest3.pdf.

Pacing and Clinical Electrophysiology

Marek Malik

Psychological Science

Rand Wilcox

Dr. Mikael Chuaungo

A t-test is a statistic that checks whether two means (averages) are reliably different from each other. Looking at the means may tell us there is a difference, but it does not tell us whether that difference is reliable. For example, suppose person A and person B each flip a coin 100 times: person A gets heads 52 times and person B gets heads 49 times. This does not tell us that person A reliably gets more heads than B, or that A would get more heads than B if both flipped their coins 100 times again. There is no real difference; it is just chance.
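The coin-flip intuition can be checked with a standard test of two proportions; a chi-square test on the 2×2 heads/tails table (a conventional approach, not one named in the excerpt) shows that 52 versus 49 heads is well within chance.

```python
import numpy as np
from scipy import stats

# Person A: 52 heads, 48 tails; person B: 49 heads, 51 tails.
table = np.array([[52, 48],
                  [49, 51]])

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"p = {p:.2f}")   # far above 0.05: no reliable difference
```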



(Z, T, F and Chi-Square Test) in Research Methodology


Mathematics Letters 2019; 5(3): 33-40 http://www.sciencepublishinggroup.com/j/ml doi: 10.11648/j.ml.20190503.11 ISSN: 2575-503X (Print); ISSN: 2575-5056 (Online)

The Derivation and Choice of Appropriate Test Statistic (Z, t, F and Chi-Square Test) in Research Methodology

Teshome Hailemeskel Abebe

Department of Economics, Ambo University, Ambo, Ethiopia Email address:

To cite this article: Teshome Hailemeskel Abebe. The Derivation and Choice of Appropriate Test Statistic (Z, t, F and Chi-Square Test) in Research Methodology. Mathematics Letters . Vol. 5, No. 3, 2019, pp. 33-40. doi: 10.11648/j.ml.20190503.11

Received : April 18, 2019; Accepted : July 16, 2019; Published : January 7, 2020

Abstract: The main objective of this paper is to choose an appropriate test statistic for research methodology. Specifically, this article explores the concept of the statistical hypothesis test, the derivation of the test statistic, and its role in research methodology. It also shows the basic formulation and testing of hypotheses using test statistics, since choosing an appropriate test statistic is one of the most important tools of research. To test a hypothesis, various statistical tests such as the Z-test, Student's t-test, F-test (as in ANOVA), and chi-square test are identified. In testing the mean of a population or comparing the means of two continuous populations, the Z-test and t-test are used, while the F-test is used for comparing more than two means and for equality of variance. The chi-square test is used for testing independence, goodness of fit, and the population variance of a single sample with categorical data. Therefore, choosing an appropriate test statistic gives valid results in hypothesis testing. Keywords: Test Statistic, Z-test, Student's t-test, F Test (Like ANOVA), Chi Square Test, Research Methodology

1. Introduction

The application of statistical tests in scientific research has increased dramatically in recent years in almost every science. Despite this, tests are often misapplied because an inappropriate test statistic is chosen for the problem at hand. The main objective of this study is to give direction on which test statistic is appropriate when conducting statistical hypothesis testing in research methodology.

Choosing the appropriate statistical test depends on the type of data (continuous or categorical): whether a t-test, Z-test, F-test, or chi-square test should be used depends on the nature of the data. For example, the two-sample independent t-test and Z-test are used if the two samples are independent, while a paired Z-test or t-test is used if the two samples are dependent. Therefore, testing hypotheses about the mean of a single population, comparing the means of two or more populations (as in one-way ANOVA), testing the variance of a single population using the chi-square test, and testing two or several variances using the F-test are the topics of this paper.

2. Derivation of the Z, t, F and Chi-Square Test Statistics

Theorem 1: if we draw independent random samples from a population, compute the mean, and repeat this process many times, then the sample mean is approximately normal. Since the conditions used to deduce the sampling distribution of a statistic include normality, the t, χ², and F distributions all depend on a normal "parent" population.

2.1. Chi-Square Distribution

The chi-square distribution is a theoretical distribution with wide application in statistical work. It is a continuous distribution, but the statistic is obtained in a discrete manner, based on discrete differences between observed and expected values. A chi-square random variable with K degrees of freedom is the sum of K independent squared standard normal random variables (mean 0, variance 1):

χ²_K = Z1² + Z2² + … + Z_K²

In particular, for a sample of size n from a normal population with variance σ², (n − 1)s²/σ² ~ χ²_{n−1}. Based on the central limit theorem, the limiting distribution of the chi-square distribution is normal as K → ∞. The chi-square test statistic is given by

χ² = Σ (O − E)² / E (1)

where O and E are the observed and expected frequencies.

2.2. The F Distribution

F_{k1,k2} is the ratio of two independent chi-squared random variables, each divided by its respective degrees of freedom:

F_{k1,k2} = (χ²_{k1}/k1) / (χ²_{k2}/k2) (2)

Since the χ² distributions depend on the normal distribution, the F distribution also depends on the normal distribution. The limiting distribution of F_{k1,∞} is χ²_{k1}/k1, because χ²_{k2}/k2 → 1 as k2 → ∞.

2.3. Student's t-Distribution

The t distribution is a probability distribution frequently used to evaluate hypotheses regarding the means of continuous variables. It is quite similar to the normal distribution, but its exact shape depends on the sample size: as the sample size increases, the t-distribution approaches the normal distribution. Unlike the standard normal distribution, whose standard deviation is 1, the standard deviation of the t distribution varies with the degrees of freedom. Writing

t = (x̄ − μ)/(s/√n) = Z / √(χ²_{n−1}/(n − 1)) ~ t_{n−1} (3)

shows that a t random variable is the ratio of a standard normal variable to the square root of an independent chi-squared variable divided by its degrees of freedom. A squared t random variable equals an F variable with 1 and k degrees of freedom: t_k² = F_{1,k}. As k → ∞, t_k → N(0, 1).

Summary of relationships: χ²_k is a sum of k independent squared standard normals, with a normal limiting distribution; F_{k1,k2} is a ratio of independent chi-squareds over their degrees of freedom, with F_{k1,∞} → χ²_{k1}/k1; and t_k = Z/√(χ²_k/k), with t_k → N(0, 1) and t_k² = F_{1,k}.

3. Test Statistic Approach of Hypothesis Testing

Hypotheses are predictions about the relationship among two or more variables or groups based on a theory or previous research [3]. Hence, hypothesis testing is the art of testing whether variation between two sample distributions can be explained through random chance or not. The three approaches to hypothesis testing are the test statistic, the p-value, and the confidence interval. In this paper, we focus only on the test statistic approach. A test statistic is a function of a random sample, and is therefore a random variable; when we compute the statistic for a given sample, we obtain an outcome of the test statistic. In order to perform a statistical test we should know the distribution of the test statistic under the null hypothesis. This distribution depends largely on the assumptions made in the model.

If the specification of the model includes the assumption of normality, then the appropriate statistical distribution is the normal distribution or one of the distributions associated with it, such as the chi-square, Student's t, or Snedecor's F.

3.1. Z-tests (Large Sample Case)

A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution [17]. In this paper, the Z-test is used for testing the significance of a single population mean and of the difference of two population means, when the sample is taken from a normal distribution with known variance or when the sample size is large enough to invoke the central limit theorem (n ≥ 30 is a good rule of thumb).

3.1.1. One-Sample Z-test for the Mean

A one-sample Z-test helps determine whether the population mean μ is equal to a hypothesized value μ0 when the sample is taken from a normal distribution with known variance. The sample mean is not constant; it varies with the sample drawn from the population. Therefore the sample mean is a random variable with its own distribution, called the sampling distribution.

Lemma 1. If x̄ is the random variable of the sample mean over all random samples of size n from a population with expected value μ and variance σ², then E(x̄) = μ and Var(x̄) = σ²/n.

Proof: each observation comes from the same population, so each has expected value μ and variance σ². Then E(x̄) = E((1/n)Σx_i) = (1/n)·nμ = μ, and Var(x̄) = Var((1/n)Σx_i) = (1/n²)·nσ² = σ²/n.

If X is a normal random variable with mean μ and variance σ², the standardized statistic (one-sample Z-test) has a standard normal distribution:

Z = (x̄ − μ0)/(σ/√n) ~ N(0, 1) (4)

where x̄ is the sample mean, μ0 is the hypothesized population mean under the null hypothesis, σ is the population standard deviation, and n is the sample size (σ/√n is the standard error). According to Joginder K. [6], if the population standard deviation is unknown but n ≥ 30, then by the central limit theorem the test statistic is

Z = (x̄ − μ0)/(s/√n) (5)

where s² = (1/(n − 1)) Σ (x_i − x̄)².

The first step is to state the null hypothesis H0, a statement about a parameter of the population(s), and the alternative hypothesis H1, a statement about that parameter opposite to the null hypothesis. For a two-tail test these are H0: μ = μ0 against H1: μ ≠ μ0.

3.1.2. Two-Sample Z-test (When Variances Are Unequal)

A two-sample Z-test investigates the significance of the difference between the means of two populations when the population variances are known and unequal, both distributions are normal, and the samples are drawn independently. Suppose we have independent random samples of sizes n and m from two normal populations having means μX and μY and known variances σX² and σY², and we want to test the null hypothesis H0: μX − μY = δ0, where δ0 is a given constant, against one of the alternatives (e.g., μX − μY ≠ δ0).

Lemma 2. If X1, …, Xn and Y1, …, Ym are independent normally distributed random variables with X ~ N(μX, σX²) and Y ~ N(μY, σY²), then x̄ − ȳ ~ N(μX − μY, σX²/n + σY²/m).

The appropriate test statistic for this test is

Z = (x̄ − ȳ − δ0) / √(σX²/n + σY²/m) ~ N(0, 1)

where n is the sample size of one group (X) with sample mean x̄ = (1/n)Σx_i, and m is the sample size of the other group (Y) with sample mean ȳ = (1/m)Σy_i. If we do not know the variances of both distributions, we may use

Z = (x̄ − ȳ − δ0) / √(sX²/n + sY²/m) (6)

provided n and m are large enough to invoke the central limit theorem.

3.2. t-tests (Small Sample Case)

When the population variances are unknown but assumed equal, we use a t-test for two population means. Given the null hypothesis of the independent sample t-test (two-tail test), H0: μX − μY = δ0, the test statistic is

t = (x̄ − ȳ − δ0) / (s_p √(1/n + 1/m)) ~ t_{n+m−2} (11)

where the pooled variance is

s_p² = [(n − 1)sX² + (m − 1)sY²] / (n + m − 2) (12)

with sX² = (1/(n − 1))Σ(x_i − x̄)² and sY² = (1/(m − 1))Σ(y_i − ȳ)². This statistic has a t-distribution with n + m − 2 degrees of freedom, since it is the ratio of a standard normal variable to the square root of an independent chi-squared variable divided by its degrees of freedom.

3.2.3. Independent Two-Sample t-test (Variances Unknown but Unequal)

However, if the variances are unknown and unequal, the pooled statistic is not appropriate; instead, the statistic t = (x̄ − ȳ − δ0)/√(sX²/n + sY²/m) is used, with approximate degrees of freedom given by the Welch-Satterthwaite equation.

3.2.4. Paired (Dependent) Sample t-test

Two samples are dependent (or consist of matched pairs) if the members of one sample can be used to determine the members of the other sample. A paired t-test is used to compare two population means when observations in one sample can be paired with observations in the other sample, under a Gaussian distribution [7, 16]. When two variables are paired, the difference d_i = x_i − y_i of these two variables is treated as if it were a single sample. This test is appropriate for pre-post treatment responses. The null hypothesis is that the true mean difference of the two variables is μ0, H0: μd = μ0, and the test statistic is

t = (d̄ − μ0) / (s_d/√n) ~ t_{n−1} (13)

where d̄ = (1/n)Σd_i is the mean of the pairwise differences under the null hypothesis and s_d is the sample standard deviation of the pairwise differences.

3.2.5. Effect Size

Both independent and dependent sample t-tests give the researcher an indication of whether the difference between two groups is statistically significant, but not the size (magnitude) of the effect. Effect size explains the degree to which the two variables are associated with one another: it indicates the relative magnitude of the difference between means, or the amount of total variance in the dependent variable that is predictable from knowledge of the levels of the independent variable.

i. Effect Size of an Independent Sample t-test

An effect size is a standardized measure of the magnitude of an observed effect. Pearson's correlation coefficient r is a common measure of effect size; from the researcher's point of view, a correlation coefficient of 0 means there is no effect, and a value of 1 means a perfect effect. It is calculated from the t statistic as

r = √(t² / (t² + df)) (14)

where t is the calculated value of the independent sample t-test and df its degrees of freedom; r = 0.1 refers to a small effect, r = 0.3 to a medium effect, and r = 0.5 to a large effect.

ii. Effect Size of a Dependent Sample t-test

The most commonly used method for calculating the effect size of a dependent sample t-test is eta squared [11]. The simplified formula is

η² = t² / (t² + n − 1) (15)

where t is the calculated value of the dependent sample t-test and n is the sample size. Eta squared lies between 0 and 1; values of 0.01, 0.06, and 0.14 indicate small, medium, and large effects, respectively.

3.3. Chi-Square Test Statistic

Karl Pearson in 1900 proposed the famous chi-squared test for comparing an observed frequency distribution to a theoretically assumed distribution, as a large-sample approximation with the chi-squared distribution [12]. According to Pearson, the chi-square test is a nonparametric test used for testing the hypothesis of no association between two or more groups, populations, or criteria (i.e., to check the independence of two variables) and for testing how well the observed distribution of data fits the expected distribution.

3.3.2. Chi-Square Test of Independence

The chi-square test of independence is a nonparametric statistical test used for deciding whether two categorical (nominal) variables are associated or independent [1, 5]. Let the two variables in the cross-classification be X and Y; then the null hypothesis H0: no association between X and Y is tested against the alternative H1: some association between X and Y. The chi-square statistic used to conduct this test is the same as in the goodness-of-fit test:

χ² = Σ_i Σ_j (O_ij − E_ij)² / E_ij ~ χ²_{(r−1)(c−1)} (17)

where O_ij is the observed frequency of the i-th row and j-th column, representing the number of respondents taking on each combination of values of the two variables; E_ij = (row i total × column j total)/n is the expected (theoretical) frequency; r is the number of rows and c the number of columns in the contingency table. The statistical decision is: if χ² > χ²_{α,(r−1)(c−1)}, reject the null hypothesis and conclude that there is a relationship between the variables; otherwise the null hypothesis cannot be rejected.

3.4. F-test (One-Way ANOVA)

Analysis of variance (ANOVA) is a statistical technique used for comparing the means of more than two groups (usually at least three), given the assumptions of normality, independence, and equality of the error variances. In one-way ANOVA, group means are compared by comparing the variability between groups with the variability within groups. This is done by computing an F-statistic: the mean sum of squares between groups divided by the mean sum of squares within groups [5, 8]. The one-way ANOVA model is

y_ij = μ + τ_i + ε_ij, ε_ij ~ N(0, σ²), i = 1, 2, …, k

where y_ij is the j-th observation at the i-th level, μ is a constant term, and τ_i is the effect of the i-th level. The null hypothesis H0: τ1 = τ2 = … = τk = 0 is tested against H1: τ_i ≠ 0 for some i. The total variation of the data is partitioned into variation within levels (groups) and variation between levels (groups).

4. Conclusion

In this paper, the derivation and choice of an appropriate test statistic were reviewed, since applying the right statistical test to the right data is the main role of statistical tools in empirical research in every science. The researcher applies the Z-test to test a population mean difference in the case of a large sample size, and the t-test for a small sample size. Moreover, the F-test is used for tests of several means and of variances, while the chi-square test is for testing a single variance, goodness of fit, and independence.

References

[1] Alena Košťálová (2013). Proceedings of the 10th International Conference "Reliability and Statistics in Transportation and Communication" (RelStat'10), 20-23 October 2010, Riga, Latvia, pp. 163-171. ISBN 978-9984-818-34-4. Transport and Telecommunication Institute, Riga, Latvia.

[2] Banda Gerald (2018). A Brief Review of Independent, Dependent and One Sample t-test. International Journal of Applied Mathematics and Theoretical Physics, 4(2), pp. 50-54. doi: 10.11648/j.ijamtp.20180402.13.

[3] David J. Pittenger (2001). Hypothesis Testing as a Moral Choice. Ethics & Behavior, 11(2), 151-162. doi: 10.1207/S15327019EB1102_3.

[4] Downward, L. B., Lukens, W. W. and Bridges, F. (2006). A Variation of the F-test for Determining Statistical Relevance of Particular Parameters in EXAFS Fits. 13th International Conference on X-ray Absorption Fine Structure, 129-131.

[5] J. P. Verma (2013). Data Analysis in Management with SPSS Software. Springer, India. doi: 10.1007/978-81-322-0786-3_7.

[6] Joginder Kaur (2013). Techniques Used in Hypothesis Testing in Research Methodology: A Review. International Journal of Science and Research (IJSR). ISSN (Online): 2319-7064.

[7] Kousar, J. B. and Azeez Ahmed (2015). The Importance of Statistical Tools in Research Work. International Journal of Scientific and Innovative Mathematical Research (IJSIMR), 3(12), pp. 50-58. ISSN 2347-307X (Print), ISSN 2347-3142 (Online).

[8] Liang, J. (2011). Testing the Mean for Business Data: Should One Use the Z-Test, T-Test, F-Test, the Chi-Square Test, or the P-Value Method? Journal of College Teaching and Learning, 3(7). doi: 10.19030/tlc.v3i7.1704.

[9] Ling, M. (2009). Compendium of Distributions: Beta, Binomial, Chi-Square, F, Gamma, Geometric, Poisson, Student's t, and Uniform. The Python Papers Source Codes, 1:4.

[10] McDonald, J. H. (2008). Handbook of Biological Statistics. Sparky House Publishing, Baltimore.

[11] Pallant, J. (2007). SPSS Survival Manual: A Step by Step Guide to Data Analysis Using SPSS for Windows (Version 15). Allen and Unwin, Sydney.

[12] Pearson, K. (1900). On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Philos. Mag., 50, 157-175. Reprinted in K. Pearson (1956), pp. 339-357.

[13] Philip E. Crewson (2014). Applied Statistics, First Edition. United States Department of Justice.

[14] Sorana, D. B., Lorentz, J., Adriana, F. S., Radu, E. S. and Doru, C. P. (2011). Pearson-Fisher Chi-Square Statistic Revisited. Information, 2, 528-545. doi: 10.3390/info2030528. ISSN 2078-2489.

[15] Sureiman Onchiri (2013). Conceptual model on application of chi-square test in education and social sciences. Educational Research and Reviews, 8(15), 1231-1241. doi: 10.5897/ERR11.0305. ISSN 1990-3839.

[16] Tae Kyun Kim (2015). T test as a parametric statistic. Korean Journal of Anesthesiology. pISSN 2005-6419, eISSN 2005-7563.

[17] Vinay Pandit (2015). A Study on Statistical Z Test to Analyse Behavioral Finance Using Psychological Theories. Journal of Research in Applied Mathematics, 2(1), pp. 01-07. ISSN (Online) 2394-0743, ISSN (Print) 2394-0735.


COMMENTS

  1. PDF T-TESTS: When to use a t-test

    8. Use a table of critical t-values (see the one at the back of this document). The critical t-value at the p = .05 significance level, for a two-tailed test, is 2.262. Our t-value (from the experiment) was 2.183. In order for this to be significant, it must be LARGER than the critical t-value derived from the table.
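    The decision rule described in this snippet can be sketched in a few lines of Python. The two numbers are the ones quoted above; the df = 9 in the comment is an inference from standard t tables (2.262 is the two-tailed .05 critical value at 9 degrees of freedom), not something the snippet states.

```python
# Decision rule for a two-tailed t test: the result is significant only
# when the observed t statistic exceeds the critical value from the table.
t_critical = 2.262  # tabled value: two-tailed, alpha = .05 (df = 9 in standard tables)
t_observed = 2.183  # value computed from the experiment

significant = abs(t_observed) > t_critical
print(significant)  # False: 2.183 < 2.262, so H0 is not rejected at p = .05
```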

  2. PDF Chapter 6 The t-test and Basic Inference Principles

    Chapter 6: The t-test and Basic Inference Principles. The t-test is used as an example of the basic principles of statistical inference. One of the simplest situations for which we might design an experiment is the case of a nominal two-level explanatory variable and a quantitative outcome variable.

  3. (PDF) THE t TEST: An Introduction

    Abstract and Figures. The t distribution is a probability distribution similar to the Normal distribution. It is commonly used to test hypotheses involving numerical data. This paper provides an ...

  4. PDF Hypothesis Testing with t Tests

    1. Write the symbol for the test statistic (e.g., z or t). 2. Write the degrees of freedom in parentheses. 3. Write an equal sign and then the value of the test statistic (2 decimal places). 4. Write a comma and then whether the p value associated with the test statistic was less than or greater than the cutoff p value of .05.
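    Those four reporting steps can be wrapped in a small helper function; the function name, data values, and the choice to print the cutoff as 0.05 rather than APA-style .05 are illustrative, not part of the source.

```python
def report_t(t_value, df, p_value, alpha=0.05):
    """Format a t-test result: symbol, df in parentheses, statistic to
    2 decimal places, and whether p was below or above the cutoff."""
    comparison = "<" if p_value < alpha else ">"
    return f"t({df}) = {t_value:.2f}, p {comparison} {alpha:.2f}"

print(report_t(2.183, 9, 0.057))  # t(9) = 2.18, p > 0.05
print(report_t(3.5, 20, 0.002))   # t(20) = 3.50, p < 0.05
```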

  5. PDF Single-Sample and Two-Sample t Tests

    Single-Sample and Two-Sample t Tests. Introduction: The simple t test has a long and venerable history in psychological research. Whenever researchers want to answer simple questions about one or two means for normally distributed variables (e.g., neuroticism, daily caloric intake, height, rainfall), a t test will often provide the answer to such questions.

  6. T Test

    A paired two-sample t-test can be used to capture the dependence of measurements between the two groups. These variations of the student's t-test use observed or collected data to calculate a test statistic, which can then be used to calculate a p-value. Often misinterpreted, the p-value is equal to the probability of collecting data that is at ...
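    A minimal sketch of how the paired test captures that dependence: the statistic is computed on the per-pair differences rather than on the raw groups. The data are invented, and only the t statistic is computed; obtaining the p-value additionally requires the t distribution (e.g., scipy.stats.ttest_rel performs both steps).

```python
import math
from statistics import mean, stdev

# Paired (dependent-samples) t statistic computed on per-subject differences.
before = [12.0, 15.1, 14.2, 11.8, 13.5]  # made-up first measurements
after = [13.4, 15.9, 14.0, 13.1, 14.6]   # made-up second measurements

diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # mean diff / SE of diffs
df = n - 1  # degrees of freedom for the paired test
print(round(t_stat, 2), df)
```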

  7. PDF An Overview of the Significance of the t-test

    Part IV is about reporting t-test results in both text and table formats and concludes with a guide to interpreting confidence intervals. Keywords: Educational research, Significance testing, Statistics, t-test. 1. Introduction: In 1908, William Sealy Gosset, an Englishman publishing under the pseudonym Student, developed the t-test and t distribution.

  8. PDF t Tests: One-Sample, Two-Independent-Sample, and Related-Samples Designs

    Method 1: df for the two-independent-sample t test = df1 + df2. Method 2: df = (n1 − 1) + (n2 − 1). Method 3: df = N − 2. As summarized in Table 9.4, we can add the degrees of freedom for each sample using the first two methods. In the third method,
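    The three methods above are algebraically identical, which a two-line check makes concrete (the sample sizes are hypothetical):

```python
# Two independent samples of hypothetical sizes n1 and n2.
n1, n2 = 12, 15
N = n1 + n2  # total number of observations

df_per_sample = (n1 - 1) + (n2 - 1)  # Methods 1-2: add the per-sample df
df_total = N - 2                     # Method 3: N minus the two estimated means
print(df_per_sample, df_total)       # 25 25
```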

  9. An Introduction to t Tests

    Revised on June 22, 2023. A t test is a statistical test that is used to compare the means of two groups. It is often used in hypothesis testing to determine whether a process or treatment actually has an effect on the population of interest, or whether two groups are different from one another. t test example.

  10. PDF The t-test

    The One-Sample t test is used to compare a sample mean to a specific value (e.g., a population parameter; a neutral point on a Likert-type scale; chance performance; etc.). Examples: 1. A study investigating whether stock brokers differ from the general population on
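    The one-sample statistic described here is simple enough to compute by hand; a sketch with invented data follows (only the statistic is shown, not the p-value):

```python
import math
from statistics import mean, stdev

# One-sample t statistic: compare the sample mean to a fixed value mu0
# (e.g., a neutral point on a Likert-type scale). Data are made up.
sample = [4.1, 3.8, 4.5, 4.0, 4.3, 3.9]
mu0 = 3.5  # the specific value the sample mean is tested against

n = len(sample)
t_stat = (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))
df = n - 1
print(round(t_stat, 2), df)  # roughly 5.64 with df = 5
```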

  11. PDF t-Test: Types and Application

    The t test is one type of inferential, parametric statistics. The t-test tells us how significant the differences between groups are; in other words, it lets us know if those differences (measured in means/averages) could have happened by chance. There are three main types of t-test. The Independent Samples t-test compares the means for two groups.
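    A sketch of the independent-samples version with a pooled variance (this assumes equal variances in the two groups; the data are invented):

```python
import math
from statistics import mean, variance

# Independent-samples t statistic with pooled variance.
group1 = [20.1, 21.3, 19.8, 22.0, 20.7]  # made-up scores, group 1
group2 = [22.4, 23.1, 21.9, 22.8, 23.5]  # made-up scores, group 2

n1, n2 = len(group1), len(group2)
# Pooled variance: df-weighted average of the two sample variances.
sp2 = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
t_stat = (mean(group1) - mean(group2)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2
print(round(t_stat, 2), df)
```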

  12. (PDF) Data Analysis and Application

    PDF | On Dec 2, 2016, Toi Clayton-Soh published Data Analysis and Application - t Tests | Find, read and cite all the research you need on ResearchGate

  13. T-test: Definition, Formula, Types, Applications

    The t-test is a test in statistics that is used for testing hypotheses regarding the mean of a small sample taken from a population when the standard deviation of the population is not known. The t-test is used to determine if there is a significant difference between the means of two groups. The t-test is used for hypothesis testing to determine ...

  14. T test as a parametric statistic

    Parametric methods refer to a statistical technique in which one defines the probability distribution of probability variables and makes inferences about the parameters of the distribution. In cases in which the probability distribution cannot be defined, nonparametric methods are employed. T tests are a type of parametric method; they can be ...

  15. PDF Module 6: t Tests

    The Applied Research Center, Module 6: t Tests. Module 6 Overview: Types of t Tests; One Sample t Test; Independent Samples t Test; Paired Samples t Test; Examples. ... independent samples t-test revealed that the average grades on Assignment 1 did not differ significantly from Class 1 (M = 21.18, SD = 1.49) to Class 2 (M = 21.90 ...

  16. PDF 12.1 RESEARCH SITUATIONS WHERE THE INDEPENDENT-SAMPLES t TEST IS USED

    ...eine is the dichotomous independent variable with values 1 and 2. Equations 12.1 and 12.2 do not explicitly name the dependent variable. Just as we used M to estimate μ for the one-sample t test, we now use (M1 − M2) to estimate (μ1 − μ2). What information from the sample data.

  17. (PDF) THE USE OF TWO-SAMPLE t-TEST IN THE REAL DATA

    The t-test is one of the most commonly used statistical methods. It was developed and accredited by William Gosset, Karl Pearson and R. Fisher in the early 20th century. The test was further developed ...

  18. PDF C H A P T E R RESEARCH METHODOLOGY

    3.1 RESEARCH DESIGN. The researcher chose a survey research design because it best served to answer the questions and the purposes of the study. Survey research is an approach in which a group of people or items is studied by collecting and analyzing data from only a few people or items considered to be representative of the entire group. In other ...

  19. PDF CHAPTER 4 COMPARING MEANS USING THE t-TEST

    Chapter 5: Comparing two means using the t-test. What is the t-test? The t-test was developed by a statistician, W. S. Gosset (1876-1937), who worked in a brewery in Dublin, Ireland. His pen name was 'Student', and hence the term 'Student's t-test', which was published in the scientific journal Biometrika in 1908. The t-test is a ...

  20. PDF The Derivation and Choice of Appropriate Test Statistic (Z, T, F and

    KEYWORDS: Test Statistic, Z-test, Student's t-test, F test (like ANOVA), Chi square test, research methodology. INTRODUCTION: The application of statistical tests in scientific research has increased dramatically in recent years in almost every science. Despite its applicability in every science, there were misunderstandings in the choosing of ...

  21. (PDF) Application of t-test to analyze the small ...

    Here we approach many applications of the t-test in statistics and research. This paper is aimed at introducing hypothesis testing, focusing on the paired t-test. It will explain how the paired t-test is applied to statistical analyses. Keywords: t-test, p-value, significance, hypothesis testing.

  22. (PDF) The Student's t-Test: A Brief Description

    In statistical methods, the t-test, also known as Student's t-test, is widely used to compare groups' means for a particular variable. The test replaces the z-test whenever the standard ...

  23. (Z, T, F and Chi-Square Test) in Research Methodology

    In testing the mean of a population or comparing the means from two continuous populations, the z-test and t-test were used, while the F test is used for comparing more than two means and equality of variance. The chi-square test was used for testing independence, goodness of fit and population variance of a single sample in categorical data.