Music emotion classification based on vocal and instrumental sound features using artificial neural network / Nurlaila Rosli
Classifying the emotion in a song remains a challenge across several areas of research. Most existing work in music emotion classification (MEC) relies on features such as audio, lyrics and social tags, or on a combination of two or more of these. Only a few MEC studies have...
| Main Author: | Rosli, Nurlaila |
|---|---|
| Format: | Thesis |
| Language: | English |
| Published: | 2013 |
| Subjects: | Electronic data processing. Information technology. Knowledge economy. Including artificial intelligence and knowledge management; Vocal music; Instrumental music |
| Online Access: | https://ir.uitm.edu.my/id/eprint/64250/1/64250.pdf |
| id | my-uitm-ir.64250 |
|---|---|
| record_format | uketd_dc |
| institution | Universiti Teknologi MARA |
| collection | UiTM Institutional Repository |
| language | English |
| advisor | Mokhsin @ Misron, Mudiana |
| topic | Electronic data processing. Information technology. Knowledge economy. Including artificial intelligence and knowledge management; Vocal music; Instrumental music |
| description | Classifying the emotion in a song remains a challenge across several areas of research. Most existing work in music emotion classification (MEC) relies on features such as audio, lyrics and social tags, or on a combination of two or more of these. Only a few MEC studies have exploited timbre features from the vocal part of a song. This research therefore classifies emotion in music by extracting timbre features from both vocal and instrumental sound clips. Three timbre features, namely spectral centroid, spectral roll-off and zero-crossing, are extracted based on their ability to distinguish sad audio from happy audio. The final system uses all of the musical timbre features extracted from the vocal and instrumental parts of a song to classify the emotion of selected Malay popular music. An Artificial Neural Network (ANN) is used for training and testing. The percentage of correctly classified emotions in Malay popular songs is projected to be higher when both vocal and instrumental sound features are applied to the ANN classifier. The findings of this research will collectively improve MEC based on the manipulation of vocal and instrumental sound timbre features, and contribute to the literature on music information retrieval, affective computing and psychology. It is suggested, however, that future work incorporate other features, such as rhythm and spectrum, alongside timbre, and consider additional emotions such as anger, calmness and sorrow. (A brief illustrative sketch of this feature-extraction and classification pipeline appears after the record.) |
| format | Thesis |
| qualification_level | Master's degree |
| author | Rosli, Nurlaila |
| title | Music emotion classification based on vocal and instrumental sound features using artificial neural network / Nurlaila Rosli |
| granting_institution | Universiti Teknologi MARA (UiTM) |
| granting_department | Faculty of Computer and Mathematical Sciences |
| publishDate | 2013 |
| url | https://ir.uitm.edu.my/id/eprint/64250/1/64250.pdf |
| _version_ | 1783735425956839424 |
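
The abstract describes a simple pipeline: extract three timbre features (spectral centroid, spectral roll-off and zero-crossing rate) from vocal and instrumental clips, then train an ANN to label clips as happy or sad. The sketch below is illustrative only; the record does not disclose the thesis's implementation, so the use of librosa for feature extraction, scikit-learn's MLPClassifier as the ANN, and all file names, labels and parameters are assumptions made for this example.

```python
# Minimal illustrative sketch, not the thesis's actual implementation.
# Assumes librosa for audio feature extraction and scikit-learn's
# MLPClassifier as a stand-in artificial neural network.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def timbre_features(path):
    """Extract the three timbre features named in the abstract
    (spectral centroid, spectral roll-off, zero-crossing rate),
    averaged over the clip."""
    y, sr = librosa.load(path, sr=None, mono=True)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr).mean()
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    return [centroid, rolloff, zcr]

# Hypothetical training clips and emotion labels, one per vocal or
# instrumental clip.
clips = ["vocal_01.wav", "instr_01.wav", "vocal_02.wav", "instr_02.wav"]
labels = ["happy", "happy", "sad", "sad"]

X = np.array([timbre_features(p) for p in clips])

# Scale the features (centroid/roll-off are in Hz, ZCR is a ratio),
# then train a small feed-forward ANN.
ann = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
ann.fit(X, labels)

# Classify a new, hypothetical clip.
print(ann.predict(np.array([timbre_features("new_clip.wav")])))
```

In the setting described by the abstract, the feature vectors from a song's vocal clip and its instrumental clip would both contribute to the decision (for example by concatenating them into one input vector); exactly how the thesis combines them is not specified in this record.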