Robust percentile bootstrap test with modified one-step M-estimator (MOM): An alternative modern statistical analysis
Main Author: | Nurul Hanis, Harun |
---|---|
Format: | Thesis |
Language: | eng |
Published: | 2015 |
Subjects: | QA299.6-433 Analysis |
Online Access: | https://etd.uum.edu.my/5324/1/s812430.pdf https://etd.uum.edu.my/5324/2/s812430_abstract.pdf |
id |
my-uum-etd.5324 |
record_format |
uketd_dc |
institution |
Universiti Utara Malaysia |
collection |
UUM ETD |
language |
eng |
advisor |
Md Yusof, Zahayu |
topic |
QA299.6-433 Analysis |
spellingShingle |
QA299.6-433 Analysis Nurul Hanis, Harun Robust percentile bootstrap test with modified one-step M-estimator (MOM): An alternative modern statistical analysis |
description |
Normality and homoscedasticity are two key assumptions that must be satisfied when classical parametric tests are used to compare groups. Violating these assumptions can invalidate the results, yet in practice they are rarely met. To overcome this problem, this study proposed modifying the Parametric Bootstrap test by replacing the usual mean with a highly robust location measure, the modified one-step M-estimator (MOM), which is an asymmetric trimmed mean. This substitution makes the Parametric Bootstrap test more robust for comparing groups. In this study, the trimming criteria for MOM employed two highly robust scale estimators, MADn and Tn. A simulation study was conducted to investigate the performance of the proposed method in terms of Type I error rates. To highlight the strengths and weaknesses of the method, five variables were manipulated to create conditions common in real-life situations: number of groups, balanced and unbalanced sample sizes, type of distribution, variance heterogeneity, and the nature of the pairing of sample sizes with group variances. The performance of the proposed method was then compared with the most frequently used parametric and nonparametric tests for two groups (the independent-samples t-test and the Mann-Whitney test, respectively) and for more than two independent groups (ANOVA and the Kruskal-Wallis test, respectively). The findings indicated that, for two groups, the robust Parametric Bootstrap test performed reasonably well under heterogeneous variances with normal or skewed distributions, while for more than two groups it maintained good Type I error control under heterogeneous variances and skewed distributions. Compared with the parametric and nonparametric methods, the proposed test outperformed its counterparts under non-normal distributions and heterogeneous variances. The performance of each procedure was also demonstrated on real data. In general, the Type I error performance of the proposed test is convincing even when the assumptions of normality and homoscedasticity are violated. |
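To make the mechanics described in the abstract more concrete, below is a minimal Python sketch of the MOM location estimator with an MADn-based trimming rule, together with a percentile-bootstrap comparison of two independent groups. It illustrates the general idea rather than the thesis's implementation: the cutoff k = 2.24, MADn = MAD/0.6745, the 2,000 bootstrap replications, and the restriction to the MADn criterion (the Tn criterion and the extension to more than two groups are omitted) are assumptions made here for brevity.

```python
import numpy as np

def mom(x, k=2.24):
    """Modified one-step M-estimator (MOM) of location.

    A point is dropped when its distance from the median exceeds
    k * MADn, where MADn = MAD / 0.6745. Because each tail can lose a
    different number of points, the resulting trimming is asymmetric.
    k = 2.24 is a common cutoff in the robust-statistics literature
    (an assumption here, not necessarily the thesis's value).
    """
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    madn = np.median(np.abs(x - med)) / 0.6745
    if madn == 0:                      # all points identical: nothing to trim
        return med
    keep = np.abs(x - med) <= k * madn
    return x[keep].mean()

def percentile_bootstrap_mom(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap test for the difference of MOMs of two groups.

    Resamples each group with replacement, builds the bootstrap
    distribution of MOM(x*) - MOM(y*), and rejects H0 (equal "typical"
    scores) when the central (1 - alpha) percentile interval excludes 0.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)
        yb = rng.choice(y, size=y.size, replace=True)
        diffs[b] = mom(xb) - mom(yb)
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    reject = not (lo <= 0.0 <= hi)
    return {"diff": mom(x) - mom(y), "ci": (lo, hi), "reject_H0": reject}

# Example: heavy-tailed groups with unequal variances and unequal sizes
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    g1 = rng.standard_t(df=3, size=30)          # heavy-tailed group
    g2 = rng.standard_t(df=3, size=40) * 3 + 1  # unequal variance, shifted
    print(percentile_bootstrap_mom(g1, g2))
```

Because MOM may trim a different number of points from each tail, the bootstrap distribution of the MOM difference need not be symmetric, which is why the percentile interval is read off directly rather than relying on a normal approximation.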
format |
Thesis |
qualification_name |
masters |
qualification_level |
Master's degree |
author |
Nurul Hanis, Harun |
author_facet |
Nurul Hanis, Harun |
author_sort |
Nurul Hanis, Harun |
title |
Robust percentile bootstrap test with modified one-step M-estimator (MOM): An alternative modern statistical analysis |
title_short |
Robust percentile bootstrap test with modified one-step M-estimator (MOM): An alternative modern statistical analysis |
title_full |
Robust percentile bootstrap test with modified one-step M-estimator (MOM): An alternative modern statistical analysis |
title_fullStr |
Robust percentile bootstrap test with modified one-step M-estimator (MOM): An alternative modern statistical analysis |
title_full_unstemmed |
Robust percentile bootstrap test with modified one-step M-estimator (MOM): An alternative modern statistical analysis |
title_sort |
robust percentile bootstrap test with modified one-step m-estimator (mom): an alternative modern statistical analysis |
granting_institution |
Universiti Utara Malaysia |
granting_department |
Awang Had Salleh Graduate School of Arts & Sciences |
publishDate |
2015 |
url |
https://etd.uum.edu.my/5324/1/s812430.pdf https://etd.uum.edu.my/5324/2/s812430_abstract.pdf |
_version_ |
1747827908719673344 |