## Explain the logic of statistical significance and level of significance. Define Type I and Type II errors

Statistics give us a way to measure and compare variables across different sets of data. The logic of statistical significance is used to judge whether the results of a study or experiment can be trusted: a significance test estimates the probability that an observed difference arose by chance rather than from the underlying factor being studied. For a result to be deemed statistically significant, it must meet a threshold known as the "level of significance" (Kumar & Chawla, 2017). The most commonly accepted levels of significance are 0.05 and 0.01; a result whose probability of occurring by chance falls below the chosen level is considered statistically significant (Kumar & Chawla, 2017).

A Type I error occurs when researchers reject the null hypothesis when in fact it should not have been rejected (Cumming et al., 2014). This error happens when researchers interpret random variation, caused by factors outside what they were studying, as a meaningful difference. A Type II error occurs when researchers fail to reject the null hypothesis even though it should have been rejected (Cumming et al., 2014). This error happens when a study fails to detect a true difference between two groups, for example because too little evidence was collected or because sampling bias prevented meaningful conclusions from being drawn about the cause-and-effect relationships under investigation (Cumming et al., 2014).
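The definitions above can be made concrete with a small simulation. The sketch below uses only Python's standard library; the choice of a two-sample z-test with known variance is my simplifying assumption, not something taken from the sources. It runs many experiments in which the null hypothesis is actually true (both groups come from the same distribution) and counts how often the null is wrongly rejected at the 0.05 level, which is precisely the Type I error rate:

```python
import random
import statistics
from statistics import NormalDist

random.seed(42)

def two_sample_z_pvalue(a, b, sigma=1.0):
    """Two-sided p-value for a two-sample z-test with known sigma (a simplifying assumption)."""
    se = sigma * (1 / len(a) + 1 / len(b)) ** 0.5
    z = (statistics.fmean(a) - statistics.fmean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

ALPHA = 0.05          # the chosen level of significance
trials = 2000
false_positives = 0
for _ in range(trials):
    # Both groups are drawn from the SAME distribution, so the null hypothesis is true.
    g1 = [random.gauss(0, 1) for _ in range(30)]
    g2 = [random.gauss(0, 1) for _ in range(30)]
    if two_sample_z_pvalue(g1, g2) < ALPHA:
        false_positives += 1  # rejecting a true null = Type I error

print(f"Observed Type I error rate: {false_positives / trials:.3f}")
```

Because the rejection threshold is 0.05, roughly 5% of these null-true experiments are flagged as significant purely by chance, which is exactly the trade-off that choosing a level of significance formalizes.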

The purpose of using levels of significance is twofold: first, they give researchers a stated degree of certainty about results so that decisions can be made with confidence; second, they help ensure accuracy by controlling false positives and false negatives, known as Type I and Type II errors respectively (Kumar & Chawla, 2017). By setting a threshold such as 0.05 or 0.01 for how improbable an observed effect must be under the null hypothesis before declaring it real, we control our chances of making both kinds of mistake: false positives, in which we interpret mere noise or randomness as a genuine effect, and false negatives, in which we miss something important despite having enough information available (Kumar & Chawla, 2017). This allows us to use statistics more effectively while minimizing the risk of incorrect interpretations, or of poor decisions based on flawed data analysis.
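The false-negative side of this trade-off can be simulated the same way. In the hedged sketch below (again plain standard-library Python, with the same assumed known-variance z-test), a real difference of half a standard deviation exists between the groups, and we count how often the test fails to detect it, comparing a small sample with a larger one:

```python
import random
import statistics
from statistics import NormalDist

random.seed(7)
ALPHA = 0.05

def two_sample_z_pvalue(a, b, sigma=1.0):
    """Two-sided p-value for a two-sample z-test with known sigma (a simplifying assumption)."""
    se = sigma * (1 / len(a) + 1 / len(b)) ** 0.5
    z = (statistics.fmean(a) - statistics.fmean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def type_ii_rate(n, effect=0.5, trials=1000):
    """Fraction of trials in which a real effect goes undetected (Type II errors)."""
    misses = 0
    for _ in range(trials):
        control = [random.gauss(0, 1) for _ in range(n)]
        treated = [random.gauss(effect, 1) for _ in range(n)]  # a true effect exists
        if two_sample_z_pvalue(control, treated) >= ALPHA:
            misses += 1  # failing to reject a false null = Type II error
    return misses / trials

small = type_ii_rate(n=10)
large = type_ii_rate(n=100)
print(f"Type II error rate with n=10:  {small:.2f}")
print(f"Type II error rate with n=100: {large:.2f}")
```

The small samples miss the true effect most of the time, while the larger samples rarely do, illustrating the link the text draws between sample size, statistical power, and Type II errors.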

In conclusion, statistical significance relies on levels such as 0.05 and 0.01 to determine whether observed differences reflect a meaningful effect rather than natural variation among the samples being studied. Type I errors occur when researchers misinterpret such random variation as a real effect, whereas Type II errors happen when, for example, sample sizes are too small and the resulting lack of statistical power causes true effects between the samples under investigation to be missed (Cumming et al., 2014; Kumar & Chawla, 2017).