Visit this demo page to listen to some audio samples.

This repository contains an implementation of a Persian Tacotron model in PyTorch, with a dataset preprocessor for the Common Voice dataset. For generating better-quality audio, the acoustic features (mel-spectrograms) are fed to a WaveRNN model. I've included the WaveRNN model in the code only for inference purposes (no trainer included).

Encoder: CNN layers with batch norm and a bi-directional LSTM on top.
Decoder: 2 LSTMs for the recurrent part and a post-net on top.
Attention type: GMM v2 with k=25.

The source code in this repository is highly inspired by, and partially copied (and modified) from, the following repositories:

The model is trained on audio files from one of the speakers in Common Voice Persian, which can be downloaded from the link below:

Unfortunately, only a few speakers in the dataset have enough utterances for training a Tacotron model, and most of the audio files are low quality and noisy. I found the audio files from one of the speakers more appropriate for training; that speaker's id is hard-coded in the commonvoice_fa preprocessor.

If you want to make any changes to how the model is trained, including using the F1CE loss function or different hyperparameters, change the related files, which in this case are hyperparameteres.py and f1ce_loss.py. Furthermore, feature extraction is not embedded in the main model: you need to use the methods in the feature_extraction.py file to append the features to the end of each sample. Preprocessing is embedded in that file.
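The encoder described above (CNN layers with batch norm and a bi-directional LSTM on top) can be sketched in PyTorch roughly as follows. This is a minimal illustration, not the repository's actual code; the layer count, kernel size, and embedding dimension are assumptions.

```python
import torch
import torch.nn as nn

class TacotronEncoder(nn.Module):
    """Sketch of the encoder: CNN layers with batch norm,
    then a bi-directional LSTM. Sizes are illustrative only."""
    def __init__(self, num_chars=80, emb_dim=256, num_convs=3):
        super().__init__()
        self.embedding = nn.Embedding(num_chars, emb_dim)
        convs = []
        for _ in range(num_convs):
            convs += [
                nn.Conv1d(emb_dim, emb_dim, kernel_size=5, padding=2),
                nn.BatchNorm1d(emb_dim),
                nn.ReLU(),
            ]
        self.convs = nn.Sequential(*convs)
        # bi-directional LSTM; hidden size halved so outputs stay emb_dim wide
        self.lstm = nn.LSTM(emb_dim, emb_dim // 2, batch_first=True,
                            bidirectional=True)

    def forward(self, char_ids):               # (batch, time)
        x = self.embedding(char_ids)           # (batch, time, emb_dim)
        x = self.convs(x.transpose(1, 2))      # conv over the time axis
        outputs, _ = self.lstm(x.transpose(1, 2))
        return outputs                         # (batch, time, emb_dim)

enc = TacotronEncoder()
out = enc(torch.randint(0, 80, (2, 17)))
print(out.shape)  # torch.Size([2, 17, 256])
```

The decoder's two LSTMs and post-net, and the GMM v2 attention with k=25 mixture components, would sit on top of these encoder outputs.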
Persian Emotion Detection using ParsBERT and Imbalanced Data Handling Approaches

Abstract
Emotion recognition is one of the machine learning applications that can be done using text, speech, or image data gathered from social media spaces. Detecting emotion can help us in different fields, including opinion mining. With the spread of social media, platforms like Twitter have become data sources, and the informal language used on these platforms makes the emotion detection task difficult. EmoPars and ArmanEmo are two new human-labeled emotion datasets for the Persian language. These datasets, especially EmoPars, suffer from a severe imbalance in the number of samples per class. In this paper, we evaluate EmoPars and compare it with ArmanEmo. Throughout this analysis, we use data augmentation techniques, data re-sampling, and class weights with Transformer-based Pretrained Language Models (PLMs) to handle the imbalance problem of these datasets. Moreover, feature selection is used to enhance the models' performance by emphasizing specific features of the text. In addition, we provide a new policy for selecting data from EmoPars, which keeps only the high-confidence samples; as a result, the model does not see samples that lack a specific emotion during training. Our model reaches a macro-averaged F1-score of 0.81 and 0.76 on ArmanEmo and EmoPars, respectively, which are new state-of-the-art results on these benchmarks.

|_ augmentation: notebook used for data augmentation
|_ augmented datasets: datasets with augmented samples
|_ dataset modifier: notebook used to create datasets using thresholds or removing uncertain samples
|_ main dataset: includes EmoPars and ArmanEmo datasets
|_ modified datasets: result of dataset modifier notebook
|_ models: files to create binary classifiers
|_ data: dictionary used to detect misspelled words
|_ multilabel: files to train multilabel classifier
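One of the imbalance-handling techniques mentioned above is class weighting. A minimal sketch of one common scheme, inverse-frequency weighting, is shown below; the paper's exact weighting is not given here, and the toy labels are invented for illustration.

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights for an imbalanced dataset:
    rare classes get larger weights in the loss function.
    This is one common scheme, not necessarily the paper's exact one."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

# toy labels with a 4:1 imbalance between two emotion classes
labels = ["anger"] * 80 + ["joy"] * 20
w = class_weights(labels)
print(w)  # {'anger': 0.625, 'joy': 2.5}
```

Weights like these are typically passed to the loss function (e.g. the `weight` argument of PyTorch's `nn.CrossEntropyLoss`) so that errors on minority classes are penalized more heavily.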