DSpace Repository

Multi-view stacking for activity recognition with sound and accelerometer data

Show simple item record

dc.contributor 299983 es_ES
dc.contributor.other 0000-0002-7635-4687 es_ES
dc.contributor.other https://orcid.org/0000-0002-7635-4687
dc.coverage.spatial Global es_ES
dc.creator García Ceja, Enrique
dc.creator Galván Tejada, Carlos Eric
dc.creator Brena, Ramón
dc.date.accessioned 2020-05-21T18:44:27Z
dc.date.available 2020-05-21T18:44:27Z
dc.date.issued 2018-03-10
dc.identifier info:eu-repo/semantics/publishedVersion es_ES
dc.identifier.issn 1566-2535 es_ES
dc.identifier.uri http://ricaxcan.uaz.edu.mx/jspui/handle/20.500.11845/1928
dc.identifier.uri https://doi.org/10.48779/f0cw-ft20
dc.description.abstract Many Ambient Intelligence (AmI) systems rely on automatic human activity recognition to obtain crucial context information, so that they can provide personalized services based on the user's current state. Activity recognition provides core functionality to many types of systems, including Ambient Assisted Living, fitness trackers, behavior monitoring, and security. The advent of wearable devices, along with their diverse set of embedded sensors, opens new opportunities for ubiquitous context sensing. Recently, wearable devices such as smartphones and smartwatches have been used for activity recognition and monitoring. Most previous works use inertial sensors (accelerometers, gyroscopes) for activity recognition and combine them with an aggregation approach, i.e., they extract features from each sensor and aggregate them to build the final classification model. This is not optimal, since each sensor data source has its own statistical properties. In this work, we propose the use of a multi-view stacking method to fuse data from heterogeneous types of sensors for activity recognition. Specifically, we used sound and accelerometer data collected with a smartphone and a wrist-band while home task activities were performed. The proposed method is based on multi-view learning and stacked generalization, and consists of training a model for each sensor view and combining them with stacking. Our experimental results showed that the multi-view stacking method outperformed the aggregation approach in terms of accuracy, recall, and specificity. es_ES
dc.language.iso eng es_ES
dc.publisher Elsevier es_ES
dc.relation https://www.sciencedirect.com/science/article/abs/pii/S1566253516301932 es_ES
dc.relation.uri generalPublic es_ES
dc.source Information Fusion Vol. 40, pp. 45-56 es_ES
dc.subject.classification ENGINEERING AND TECHNOLOGY [7] es_ES
dc.subject.other Activity Recognition es_ES
dc.subject.other Multi-view learning es_ES
dc.subject.other Stacked generalization es_ES
dc.subject.other Accelerometer es_ES
dc.subject.other Sound es_ES
dc.title Multi-view stacking for activity recognition with sound and accelerometer data es_ES
dc.type info:eu-repo/semantics/article es_ES
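
The abstract above describes multi-view stacking: a first-level model is trained on each sensor view (sound, accelerometer), and a second-level meta-learner combines their predictions via stacked generalization. The sketch below illustrates the general idea with scikit-learn; the feature matrices, labels, model choices, and split are illustrative assumptions, not the paper's actual data or configuration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split

# Hypothetical stand-ins for the per-view feature matrices and activity labels.
rng = np.random.default_rng(0)
n = 600
X_sound = rng.normal(size=(n, 20))   # assumed sound features
X_accel = rng.normal(size=(n, 12))   # assumed accelerometer features
y = rng.integers(0, 4, size=n)       # assumed activity labels (4 classes)

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# First level: one base classifier per sensor view.
views = [X_sound, X_accel]
base_models = [RandomForestClassifier(random_state=0) for _ in views]

# Build meta-features from out-of-fold class probabilities (avoids leaking
# training labels into the meta-learner), then concatenate across views.
meta_train = np.hstack([
    cross_val_predict(m, X[idx_train], y[idx_train], cv=5, method="predict_proba")
    for m, X in zip(base_models, views)
])

# Second level: the meta-learner is trained on the stacked probabilities.
meta_model = LogisticRegression(max_iter=1000).fit(meta_train, y[idx_train])

# At test time, refit each view model on the full training split and feed
# its class probabilities to the meta-learner.
meta_test = np.hstack([
    m.fit(X[idx_train], y[idx_train]).predict_proba(X[idx_test])
    for m, X in zip(base_models, views)
])
print("stacked accuracy:", meta_model.score(meta_test, y[idx_test]))

With synthetic random data the accuracy is near chance; the point of the sketch is the structure: per-view base learners, out-of-fold meta-features, and a meta-learner, in contrast to the aggregation approach of concatenating all features into a single model.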
