Please use this identifier to cite or link to this item: http://ricaxcan.uaz.edu.mx/jspui/handle/20.500.11845/1458
Full metadata record
DC Field | Value | Language
dc.contributor | 299983 | es_ES
dc.contributor | 49237 | es_ES
dc.contributor | 268446 | -
dc.contributor.other | https://orcid.org/0000-0002-9498-6602 | -
dc.contributor.other | 0000-0002-9498-6602 | -
dc.coverage.spatial | Global | es_ES
dc.creator | Galván Tejada, Carlos Eric | -
dc.creator | Galván Tejada, Jorge | -
dc.creator | Celaya Padilla, José María | -
dc.creator | Delgado Contreras, Juan Rubén | -
dc.creator | Magallanes Quintanar, Rafael | -
dc.creator | Martínez Fierro, Margarita de la Luz | -
dc.creator | Garza Veloz, Idalia | -
dc.creator | López Hernández, Yamilé | -
dc.creator | Gamboa Rosales, Hamurabi | -
dc.date.accessioned | 2020-03-25T02:52:46Z | -
dc.date.available | 2020-03-25T02:52:46Z | -
dc.date.issued | 2016-11-23 | -
dc.identifier | info:eu-repo/semantics/publishedVersion | es_ES
dc.identifier.issn | 1875-905X | es_ES
dc.identifier.uri | http://ricaxcan.uaz.edu.mx/jspui/handle/20.500.11845/1458 | -
dc.description.abstract | This work presents a human activity recognition (HAR) model based on audio features. The use of sound as an information source for HAR models is challenging because sound-wave analysis generates very large amounts of data. However, feature selection techniques can reduce the amount of data required to represent an audio signal sample. Among the audio features analyzed were Mel-frequency cepstral coefficients (MFCC). Although MFCC are commonly used in voice and instrument recognition, their utility within HAR models had yet to be confirmed, and this work validates their usefulness. Additionally, statistical features were extracted from the audio samples to generate the proposed HAR model. The amount of information needed to build a HAR model directly impacts the model's accuracy. This problem was also tackled in the present work; our results indicate that the proposed HAR model recognizes a human activity with an accuracy of 85%. This means that minimal computational cost is needed, allowing portable devices to identify human activities using audio as an information source. | es_ES
dc.language.iso | eng | es_ES
dc.publisher | Hindawi | es_ES
dc.relation | http://dx.doi.org/10.1155/2016/1784101 | es_ES
dc.relation.uri | generalPublic | es_ES
dc.rightsAtribución-NoComercial-CompartirIgual 3.0 Estados Unidos de América*
dc.rightsAtribución-NoComercial-CompartirIgual 3.0 Estados Unidos de América*
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/3.0/us/ | *
dc.source | Hindawi Vol. 2016, pp. 1-10 | es_ES
dc.subject.classification | INGENIERIA Y TECNOLOGIA [7] | es_ES
dc.subject.other | Analysis of Audio | es_ES
dc.subject.other | Neural Networks | es_ES
dc.subject.other | Activity Recognition Model Using Genetic Algorithms | es_ES
dc.title | An Analysis of Audio Features to Develop a Human Activity Recognition Model Using Genetic Algorithms, Random Forests, and Neural Networks | es_ES
dc.type | info:eu-repo/semantics/article | es_ES
Appears in Collections: Documentos Académicos -- Doc. en Ing. y Tec. Aplicada

Files in This Item:
File | Description | Size | Format
An Analysis of Audio Features.pdf | | 2.51 MB | Adobe PDF


This item is licensed under a Creative Commons License.
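The abstract above mentions extracting statistical features from audio samples to keep the model's input small. As an illustration only, here is a minimal sketch of what such per-window statistical descriptors might look like; the descriptor set, window length, and sample rate below are assumptions for demonstration, not the feature set used in the paper:

```python
import numpy as np

def statistical_features(window: np.ndarray) -> np.ndarray:
    """Compute simple statistical descriptors of one audio window.

    Illustrative stand-ins for the 'statistical features' the abstract
    mentions; the paper's exact feature list is not reproduced here.
    """
    zero_crossings = np.abs(np.diff(np.sign(window))) > 0
    return np.array([
        window.mean(),           # mean amplitude (DC offset)
        window.std(),            # amplitude spread
        np.abs(window).max(),    # peak amplitude
        np.mean(window ** 2),    # mean power
        zero_crossings.mean(),   # zero-crossing rate
    ])

# Example: one second of synthetic "audio" at an assumed 8 kHz rate.
rng = np.random.default_rng(0)
signal = rng.standard_normal(8000).astype(np.float32)
feats = statistical_features(signal)
print(feats.shape)  # (5,)
```

A window of thousands of raw samples collapses to a handful of numbers per descriptor set, which is the kind of dimensionality reduction that makes the model cheap enough for the portable devices the abstract targets.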