Review of Statistical Concepts
The Mean
The mean is the average of a set of numbers. Calculating the mean is a basic but important skill; it is, for example, how a student computes a grade point average, and it is needed in many other situations and applications. In general, the mean of a set of numbers is found by adding the numbers together and dividing the sum by the total number of entries. That is, for a set of numbers {X1, X2, X3, …, Xj}, the mean is the sum of all X values divided by j (Anastasi & Urbina, 1997).
Worked Example: Calculate the mean of the following set of numbers:
25, 28, 31, 35, 43, 48
There are six numbers in this data set. Accordingly, add all of the numbers together and divide the total by six (6) to solve for the mean.
Mean = (25 + 28 + 31 + 35 + 43 + 48)/6
Mean = 210/6
Mean = 35
Overall, the formula for calculating the mean of j numbers is:
Mean = (X1 + X2 + X3 + … + Xj) / j
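As a quick check, the worked example above can be reproduced in a few lines of Python (a minimal sketch; the variable names are illustrative only):

```python
# Minimal sketch: mean of the six numbers from the worked example above.
scores = [25, 28, 31, 35, 43, 48]

# Mean = (sum of all entries) / (number of entries)
mean = sum(scores) / len(scores)

print(mean)  # 35.0
```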
The Correlation
Correlation, expressed numerically as a correlation coefficient, is critical in statistics for measuring how strong the relationship is between two variables (Anastasi & Urbina, 1997). Pearson's correlation coefficient, for example, is the measure commonly used with linear regression.
Worked Example:
Subject   Age (X)   Glucose Level (Y)   XY      X²      Y²
1         43        99                  4257    1849    9801
2         21        65                  1365    441     4225
3         25        79                  1975    625     6241
4         42        75                  3150    1764    5625
5         57        87                  4959    3249    7569
6         59        81                  4779    3481    6561
Σ         247       486                 20485   11409   40022
Use the following formula to work out the correlation coefficient:
r = [nΣXY − (ΣX)(ΣY)] / √{[nΣX² − (ΣX)²][nΣY² − (ΣY)²]}
With n = 6, the numerator is 6(20485) − (247)(486) = 122910 − 120042 = 2868, and the denominator is √{[6(11409) − 247²][6(40022) − 486²]} = √(7445 × 3936) ≈ 5413.27.
The solution is: r = 2868 / 5413.27 ≈ 0.529809
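The same result can be reproduced with a short Python sketch (purely illustrative; the ages and glucose values are the six subjects from the table above):

```python
import math

# Data from the worked example: age (X) and glucose level (Y) for six subjects.
x = [43, 21, 25, 42, 57, 59]
y = [99, 65, 79, 75, 87, 81]
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(a * b for a, b in zip(x, y))
sum_x2 = sum(a * a for a in x)
sum_y2 = sum(b * b for b in y)

# Pearson's r = [nΣXY − (ΣX)(ΣY)] / sqrt([nΣX² − (ΣX)²][nΣY² − (ΣY)²])
numerator = n * sum_xy - sum_x * sum_y
denominator = math.sqrt((n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
r = numerator / denominator

print(round(r, 6))  # ≈ 0.529809
```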
The Variance
Variance refers to the average of the squared differences from the mean.
Formula: σ² = Σ(Xi − μ)² / N, where μ is the population mean and N is the number of values.
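As an illustration, the following sketch computes the population variance of the same six numbers used in the mean example (a minimal sketch, not part of the readings):

```python
# Minimal sketch: population variance of the numbers from the mean example.
data = [25, 28, 31, 35, 43, 48]

mean = sum(data) / len(data)                      # 35.0
squared_diffs = [(x - mean) ** 2 for x in data]   # squared deviations from the mean
variance = sum(squared_diffs) / len(data)         # average of the squared deviations

print(variance)  # ≈ 66.33
```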
The Standard Deviation
Standard deviation (σ) is a measure of how spread out the numbers in a data set are.
The standard deviation is the square root of the variance.
The population standard deviation: σ = √[Σ(Xi − μ)² / N]
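Continuing with the same illustrative data, the population standard deviation is simply the square root of the variance computed above (again a minimal sketch):

```python
import math

# Minimal sketch: population standard deviation as the square root of the variance.
data = [25, 28, 31, 35, 43, 48]

mean = sum(data) / len(data)
variance = sum((x - mean) ** 2 for x in data) / len(data)
std_dev = math.sqrt(variance)

print(round(std_dev, 2))  # ≈ 8.14
```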
Week 3 discussion
Correlation
Correlation is a useful statistical concept that, when applied, shows how two variables are related, apart from the other factors that separate them (Cohen, 1990). The relationship between two variables can emerge from associations that do not make literal sense on their own but that change the interpretations used to define an object or subject. A classic example is the relationship between height and weight: a taller person is generally expected to be heavier than a shorter person. This simple case is a straightforward application of correlation, with height directly related to weight. Even when the connection between two correlated variables cannot be explained logically, correlation provides grounds for evaluating how the two variables vary together (Cohen, 1990). By tabulating variables, such as in thematic applications, one can identify correlations that were not suspected before the analysis. Correlations also differ in magnitude, so a capable analyst is tasked with identifying the dominant correlation that best helps in understanding the data in question. At the same time, correlation can be misleading relative to how positively it is viewed across the subjects analyzed, and correlation itself plays no part in the actual diversification of data. A confusing aspect of correlation appears especially with funds that end up moving with a portfolio, while supposedly uncorrelated methods have the distinct disadvantage of remaining too dependent on the underlying markets (Laureate Education, 2010). A small illustration of the height and weight example is sketched below.
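The sketch below uses entirely hypothetical height and weight values (made up for illustration only, not drawn from any of the readings) to show the kind of strong but imperfect positive correlation described above:

```python
import math

# Purely hypothetical height (cm) and weight (kg) values, used only to
# illustrate a positive but imperfect correlation between the two variables.
heights = [150, 160, 165, 170, 180, 190]
weights = [52, 60, 63, 70, 78, 88]

n = len(heights)
sum_x, sum_y = sum(heights), sum(weights)
sum_xy = sum(h * w for h, w in zip(heights, weights))
sum_x2 = sum(h * h for h in heights)
sum_y2 = sum(w * w for w in weights)

# Pearson's correlation coefficient for the hypothetical sample.
r = (n * sum_xy - sum_x * sum_y) / math.sqrt(
    (n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2)
)

print(round(r, 3))  # ≈ 0.997: taller people in this made-up sample tend to be heavier
```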
References
Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice Hall. Chapter 3, "Norms and the Meaning of Test Scores" (pp. 49–54).
Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45(12), 1304–1312. Retrieved from the Walden Library database.
Laureate Education, Inc. (Executive Producer). (2010). Normal curve interactive. Baltimore, MD: Author.
Laureate Education, Inc. (Executive Producer). (2010). Two tailed curve. Baltimore, MD: Author.