Please use this identifier to cite or link to this item: http://ricaxcan.uaz.edu.mx/jspui/handle/20.500.11845/1714
Full metadata record
DC Field                    Value                                                       Language
dc.contributor              31249                                                       es_ES
dc.contributor.other        0000-0002-7337-8974                                         es_ES
dc.contributor.other        https://orcid.org/0000-0002-7337-8974                       -
dc.contributor.other        https://orcid.org/0000-0002-8060-6170                       -
dc.coverage.spatial         Global                                                      es_ES
dc.creator                  Becerra, Aldonso                                            -
dc.creator                  De la Rosa Vargas, José Ismael                              -
dc.creator                  González Ramírez, Efrén                                     -
dc.creator                  Pedroza, David                                              -
dc.creator                  Escalante, N. Iracemi                                       -
dc.date.accessioned         2020-04-16T19:13:20Z                                        -
dc.date.available           2020-04-16T19:13:20Z                                        -
dc.date.issued              2018-10                                                     -
dc.identifier               info:eu-repo/semantics/publishedVersion                     es_ES
dc.identifier.issn          1380-7501                                                   es_ES
dc.identifier.issn          1573-7721                                                   es_ES
dc.identifier.uri           http://ricaxcan.uaz.edu.mx/jspui/handle/20.500.11845/1714  -
dc.identifier.uri           https://doi.org/10.48779/mz95-hr57                          -
dc.description.abstract     The aim of this paper is to exhibit two new variations of the frame-level cost function for training a deep neural network in order to achieve better word error rates in speech recognition. Optimization methods and the loss functions they minimize are fundamental aspects of neural network training, so their improvement is one of the salient objectives of researchers, and this paper addresses part of that problem. The first proposed framework is based on the concept of extropy, the complementary dual function of an uncertainty measure. The conventional cross-entropy function can be mapped to a non-uniform loss function based on its corresponding extropy, emphasizing the frames whose assignment to specific senones is ambiguous. The second proposal fuses the presented mapped cross-entropy function with the idea of boosted cross-entropy, which emphasizes those frames with low target posterior probability.  es_ES
dc.language.iso             eng                                                         es_ES
dc.publisher                Springer                                                    es_ES
dc.relation                 https://doi.org/10.1007/s11042-018-5917-5                   es_ES
dc.relation.uri             generalPublic                                               es_ES
dc.rights                   Attribution-NonCommercial-NoDerivs 3.0 United States        *
dc.rights.uri               http://creativecommons.org/licenses/by-nc-nd/3.0/us/        *
dc.source                   Multimedia Tools and Applications, Vol. 77, No. 20, pp. 27231-27267  es_ES
dc.subject.classification   ENGINEERING AND TECHNOLOGY [7]                              es_ES
dc.subject.other            Speech recognition                                          es_ES
dc.subject.other            Neural networks                                             es_ES
dc.subject.other            Deep learning                                               es_ES
dc.subject.other            Cross-entropy                                               es_ES
dc.subject.other            Extropy                                                     es_ES
dc.subject.other            Frame-level loss function                                   es_ES
dc.title                    Training deep neural networks with non-uniform frame-level cost function for automatic speech recognition  es_ES
dc.type                     info:eu-repo/semantics/article                              es_ES
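
As a reading aid for the abstract above, the following is a minimal sketch of the quantities it refers to, not the authors' exact formulation: the standard frame-level cross-entropy used to train DNN acoustic models, the extropy of a discrete distribution (the complementary dual of Shannon entropy), and a generic non-uniform, frame-weighted variant. The per-frame weight w_t, and its dependence on the posterior's extropy or on the target posterior, are illustrative assumptions rather than the paper's definitions.

% Frame-level cross-entropy over frames t and senones s, with one-hot
% targets y_t(s) and DNN posteriors p_t(s):
\begin{align}
  \mathcal{L}_{\mathrm{CE}} &= -\sum_{t}\sum_{s} y_t(s)\,\log p_t(s) \\
  % Extropy of a discrete distribution p = (p_1, \dots, p_N), the
  % complementary dual of Shannon entropy:
  J(p) &= -\sum_{i=1}^{N} \bigl(1 - p_i\bigr)\,\log\bigl(1 - p_i\bigr) \\
  % Generic non-uniform frame-level loss: each frame is reweighted by a
  % factor w_t, assumed here to grow with the ambiguity (extropy) of the
  % frame posterior and/or to be larger for frames with a low target
  % posterior, as in boosted cross-entropy:
  \mathcal{L}_{\mathrm{NU}} &= -\sum_{t} w_t \sum_{s} y_t(s)\,\log p_t(s)
\end{align}

Setting w_t = 1 recovers the standard cross-entropy; any concrete choice of w_t shown here is hypothetical, and the extropy-based mapping and its fusion with boosted cross-entropy are defined in the paper itself.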
Appears in Collections: Documentos Académicos -- M. en Ciencias del Proc. de la Info.

Files in This Item:
File                                    Description                   Size       Format
26_Becerra_DelaRosa MTAP P1 2018.pdf    Becerra_DelaRosa MTAP 2018    495,86 kB  Adobe PDF


This item is licensed under a Creative Commons License.