AMERICANAE

Record data
Learning visual representations with optimum-path forest and its applications to Barrett’s esophagus and adenocarcinoma diagnosis
Resource identifiers
http://dx.doi.org/10.1007/s00521-018-03982-0
Neural Computing and Applications.
0941-0643
http://hdl.handle.net/11449/190024
10.1007/s00521-018-03982-0
2-s2.0-85059772077
Provenance
(LA Referencia)

Record

Title:
Learning visual representations with optimum-path forest and its applications to Barrett’s esophagus and adenocarcinoma diagnosis
Subject:
Adenocarcinoma
Barrett’s esophagus
Image processing
Machine learning
Optimum-path forest
Description:
Considering the increase in the number of Barrett’s esophagus (BE) cases over the last decade, and the expected continued rise, methods that provide an early diagnosis of dysplasia in BE-diagnosed patients offer a high probability of cancer remission. The limitations of traditional methods for BE detection and management encourage the development of computer-aided tools to assist with this problem. In this work, we introduce the unsupervised Optimum-Path Forest (OPF) classifier for learning visual dictionaries in the context of Barrett’s esophagus (BE) and automatic adenocarcinoma diagnosis. The proposed approach was validated on two datasets (MICCAI 2015 and Augsburg) using three feature extractors (SIFT, SURF, and A-KAZE, the last not previously applied in the BE context) and five supervised classifiers: two variants of the OPF, Support Vector Machines with Radial Basis Function and Linear kernels, and a Bayesian classifier. On the MICCAI 2015 dataset, the best results were obtained using unsupervised OPF for dictionary generation, supervised OPF for classification, and the SURF feature extractor, with accuracy close to 78% for distinguishing BE patients from adenocarcinoma patients. On the Augsburg dataset, the most accurate results were likewise obtained with both OPF classifiers, but with A-KAZE as the feature extractor, reaching accuracy close to 73%. The combination of feature extraction and bag-of-visual-words techniques outperformed results recently reported in the literature, and we highlight new advances in this research area. To the best of our knowledge, this is the first work to address computer-aided BE identification using bag-of-visual-words and OPF classifiers, the application of an unsupervised technique to BE feature computation being its main contribution. We also propose a new BE and adenocarcinoma description based on A-KAZE features, not previously applied in the literature. (An illustrative code sketch of this bag-of-visual-words pipeline follows the record below.)
Source:
reponame:Repositório Institucional da UNESP
instname:Universidade Estadual Paulista (UNESP)
instacron:UNESP
Language:
English
Relation:
Neural Computing and Applications
Author/Producer:
de Souza, Luis A.
Afonso, Luis C. S.
Ebigbo, Alanna
Probst, Andreas
Messmann, Helmut
Mendel, Robert
Hook, Christian
Palm, Christoph
Papa, João P.
Other contributors/producers:
Universidade Estadual Paulista (UNESP)
Rights:
info:eu-repo/semantics/openAccess
Date:
2019-10-06T16:59:47Z
2019-01-01
Resource type:
info:eu-repo/semantics/article
info:eu-repo/semantics/publishedVersion
About:
2021-10-23T19:28:01Z
http://www.openarchives.org/OAI/2.0/oai_dc/
Repositório Institucional da UNESP - Universidade Estadual Paulista (UNESP)
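The description above outlines a bag-of-visual-words pipeline: local descriptors (SIFT, SURF, or A-KAZE) are extracted from the endoscopic images, clustered into a visual dictionary, and each image is then encoded as a histogram of visual words before supervised classification. The Python sketch below only illustrates that general pipeline and is not the authors' implementation: k-means stands in for the paper's unsupervised OPF during dictionary learning, an RBF-kernel SVM stands in for the supervised OPF, and the file paths, labels, and dictionary size are hypothetical.

import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def local_descriptors(image_paths):
    """Extract SIFT descriptors per image (A-KAZE via cv2.AKAZE_create() is a drop-in alternative)."""
    sift = cv2.SIFT_create()
    per_image = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        per_image.append(desc if desc is not None else np.empty((0, 128), np.float32))
    return per_image

def bovw_histograms(per_image_desc, codebook):
    """Quantize each image's descriptors against the codebook and return one normalized histogram per image."""
    k = codebook.n_clusters
    feats = []
    for desc in per_image_desc:
        hist = np.zeros(k)
        if len(desc):
            words = codebook.predict(desc.astype(np.float64))
            hist = np.bincount(words, minlength=k).astype(float) / len(words)
        feats.append(hist)
    return np.vstack(feats)

# Hypothetical training data: endoscopic image paths, labels 0 = Barrett's esophagus, 1 = adenocarcinoma.
train_paths = ["be_001.png", "be_002.png", "ac_001.png", "ac_002.png"]
train_labels = [0, 0, 1, 1]

train_desc = local_descriptors(train_paths)
# Dictionary learning: k-means here; the paper uses unsupervised OPF for this step.
codebook = KMeans(n_clusters=64, n_init=10, random_state=0).fit(np.vstack(train_desc).astype(np.float64))
X_train = bovw_histograms(train_desc, codebook)
# Classification over the word histograms: RBF-kernel SVM here; the paper's best results use supervised OPF.
classifier = SVC(kernel="rbf").fit(X_train, train_labels)

The same two steps (dictionary generation, then classification over the word histograms) are what the abstract varies across SIFT, SURF, and A-KAZE and across the five classifiers; the OPF classifiers themselves are not provided by OpenCV or scikit-learn.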

oai_dc

Download XML

<?xml version="1.0" encoding="UTF-8" ?>
<oai_dc:dc schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:title>Learning visual representations with optimum-path forest and its applications to Barrett’s esophagus and adenocarcinoma diagnosis</dc:title>
  <dc:creator>de Souza, Luis A.</dc:creator>
  <dc:creator>Afonso, Luis C. S.</dc:creator>
  <dc:creator>Ebigbo, Alanna</dc:creator>
  <dc:creator>Probst, Andreas</dc:creator>
  <dc:creator>Messmann, Helmut</dc:creator>
  <dc:creator>Mendel, Robert</dc:creator>
  <dc:creator>Hook, Christian</dc:creator>
  <dc:creator>Palm, Christoph</dc:creator>
  <dc:creator>Papa, João P.</dc:creator>
  <dc:contributor>Universidade Estadual Paulista (UNESP)</dc:contributor>
  <dc:subject>Adenocarcinoma</dc:subject>
  <dc:subject>Barrett’s esophagus</dc:subject>
  <dc:subject>Image processing</dc:subject>
  <dc:subject>Machine learning</dc:subject>
  <dc:subject>Optimum-path forest</dc:subject>
  <dc:description>Considering the increase in the number of the Barrett’s esophagus (BE) in the last decade, and its expected continuous increase, methods that can provide an early diagnosis of dysplasia in BE-diagnosed patients may provide a high probability of cancer remission. The limitations related to traditional methods of BE detection and management encourage the creation of computer-aided tools to assist in this problem. In this work, we introduce the unsupervised Optimum-Path Forest (OPF) classifier for learning visual dictionaries in the context of Barrett’s esophagus (BE) and automatic adenocarcinoma diagnosis. The proposed approach was validated in two datasets (MICCAI 2015 and Augsburg) using three different feature extractors (SIFT, SURF, and the not yet applied to the BE context A-KAZE), as well as five supervised classifiers, including two variants of the OPF, Support Vector Machines with Radial Basis Function and Linear kernels, and a Bayesian classifier. Concerning MICCAI 2015 dataset, the best results were obtained using unsupervised OPF for dictionary generation using supervised OPF for classification purposes and using SURF feature extractor with accuracy nearly to 78 % for distinguishing BE patients from adenocarcinoma ones. Regarding the Augsburg dataset, the most accurate results were also obtained using both OPF classifiers but with A-KAZE as the feature extractor with accuracy close to 73 %. The combination of feature extraction and bag-of-visual-words techniques showed results that outperformed others obtained recently in the literature, as well as we highlight new advances in the related research area. Reinforcing the significance of this work, to the best of our knowledge, this is the first one that aimed at addressing computer-aided BE identification using bag-of-visual-words and OPF classifiers, being the application of unsupervised technique in the BE feature calculation the major contribution of this work. It is also proposed a new BE and adenocarcinoma description using the A-KAZE features, not yet applied in the literature.</dc:description>
  <dc:date>2019-10-06T16:59:47Z</dc:date>
  <dc:date>2019-10-06T16:59:47Z</dc:date>
  <dc:date>2019-01-01</dc:date>
  <dc:type>info:eu-repo/semantics/article</dc:type>
  <dc:type>info:eu-repo/semantics/publishedVersion</dc:type>
  <dc:identifier>http://dx.doi.org/10.1007/s00521-018-03982-0</dc:identifier>
  <dc:identifier>Neural Computing and Applications.</dc:identifier>
  <dc:identifier>0941-0643</dc:identifier>
  <dc:identifier>http://hdl.handle.net/11449/190024</dc:identifier>
  <dc:identifier>10.1007/s00521-018-03982-0</dc:identifier>
  <dc:identifier>2-s2.0-85059772077</dc:identifier>
  <dc:language>eng</dc:language>
  <dc:relation>Neural Computing and Applications</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:source>reponame:Repositório Institucional da UNESP</dc:source>
  <dc:source>instname:Universidade Estadual Paulista (UNESP)</dc:source>
  <dc:source>instacron:UNESP</dc:source>
  <about>
    <provenance>
      <originDescription altered="" harvestDate="">
        <baseURL />
        <identifier />
        <datestamp>2021-10-23T19:28:01Z</datestamp>
        <metadataNamespace>http://www.openarchives.org/OAI/2.0/oai_dc/</metadataNamespace>
        <repositoryID />
        <repositoryName>Repositório Institucional da UNESP - Universidade Estadual Paulista (UNESP)</repositoryName>
      </originDescription>
    </provenance>
  </about>
</oai_dc:dc>
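The oai_dc serialization above is plain Dublin Core, so it can be read with any XML library. A minimal Python sketch, assuming the record has been saved from the "Download XML" link to a local file (the name record_oai_dc.xml is hypothetical) and that the saved file carries the standard xmlns declarations for the oai_dc and dc prefixes, which the viewer above omits:

import xml.etree.ElementTree as ET

# Hypothetical local copy of the oai_dc record downloaded from this page.
root = ET.parse("record_oai_dc.xml").getroot()

# Group repeated Dublin Core elements (creator, subject, identifier, ...) by their local tag name.
record = {}
for child in root:
    tag = child.tag.rsplit("}", 1)[-1]      # "{http://purl.org/dc/elements/1.1/}creator" -> "creator"
    record.setdefault(tag, []).append((child.text or "").strip())

print(record["title"][0])                   # the article title
print(record["creator"])                    # the nine authors listed above
print(record["identifier"])                 # DOI link, journal, ISSN, handle, DOI, Scopus ID

The nested about/provenance block (harvest datestamp, metadata namespace, repository name) is only picked up as an empty "about" entry by this loop; reading its fields would need a deeper traversal.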

xoai

Download XML

<?xml version="1.0" encoding="UTF-8" ?>
<metadata schemaLocation="http://www.lyncode.com/xoai http://www.lyncode.com/xsd/xoai.xsd">
  <element name="dc">
    <element name="title">
      <element name="none">
        <field name="value">Learning visual representations with optimum-path forest and its applications to Barrett’s esophagus and adenocarcinoma diagnosis</field>
      </element>
    </element>
    <element name="subject">
      <element name="por">
        <field name="value">Adenocarcinoma</field>
        <field name="value">Barrett’s esophagus</field>
        <field name="value">Image processing</field>
        <field name="value">Machine learning</field>
        <field name="value">Optimum-path forest</field>
      </element>
    </element>
    <element name="description">
      <element name="none">
        <field name="value">Considering the increase in the number of the Barrett’s esophagus (BE) in the last decade, and its expected continuous increase, methods that can provide an early diagnosis of dysplasia in BE-diagnosed patients may provide a high probability of cancer remission. The limitations related to traditional methods of BE detection and management encourage the creation of computer-aided tools to assist in this problem. In this work, we introduce the unsupervised Optimum-Path Forest (OPF) classifier for learning visual dictionaries in the context of Barrett’s esophagus (BE) and automatic adenocarcinoma diagnosis. The proposed approach was validated in two datasets (MICCAI 2015 and Augsburg) using three different feature extractors (SIFT, SURF, and the not yet applied to the BE context A-KAZE), as well as five supervised classifiers, including two variants of the OPF, Support Vector Machines with Radial Basis Function and Linear kernels, and a Bayesian classifier. Concerning MICCAI 2015 dataset, the best results were obtained using unsupervised OPF for dictionary generation using supervised OPF for classification purposes and using SURF feature extractor with accuracy nearly to 78 % for distinguishing BE patients from adenocarcinoma ones. Regarding the Augsburg dataset, the most accurate results were also obtained using both OPF classifiers but with A-KAZE as the feature extractor with accuracy close to 73 %. The combination of feature extraction and bag-of-visual-words techniques showed results that outperformed others obtained recently in the literature, as well as we highlight new advances in the related research area. Reinforcing the significance of this work, to the best of our knowledge, this is the first one that aimed at addressing computer-aided BE identification using bag-of-visual-words and OPF classifiers, being the application of unsupervised technique in the BE feature calculation the major contribution of this work. It is also proposed a new BE and adenocarcinoma description using the A-KAZE features, not yet applied in the literature.</field>
      </element>
    </element>
    <element name="contributor">
      <element name="none">
        <field name="value">Universidade Estadual Paulista (UNESP)</field>
      </element>
    </element>
    <element name="date">
      <element name="none">
        <field name="value">2019-10-06T16:59:47Z</field>
        <field name="value">2019-10-06T16:59:47Z</field>
        <field name="value">2019-01-01</field>
      </element>
    </element>
    <element name="type">
      <element name="driver">
        <field name="value">info:eu-repo/semantics/article</field>
      </element>
      <element name="status">
        <field name="value">info:eu-repo/semantics/publishedVersion</field>
      </element>
    </element>
    <element name="identifier">
      <element name="uri">
        <field name="value">http://dx.doi.org/10.1007/s00521-018-03982-0</field>
        <field name="value">Neural Computing and Applications.</field>
        <field name="value">0941-0643</field>
        <field name="value">http://hdl.handle.net/11449/190024</field>
        <field name="value">10.1007/s00521-018-03982-0</field>
        <field name="value">2-s2.0-85059772077</field>
      </element>
    </element>
    <element name="language">
      <element name="iso">
        <field name="value">eng</field>
      </element>
    </element>
    <element name="relation">
      <element name="none">
        <field name="value">Neural Computing and Applications</field>
      </element>
    </element>
    <element name="rights">
      <element name="driver">
        <field name="value">info:eu-repo/semantics/openAccess</field>
      </element>
    </element>
    <element name="source">
      <element name="none">
        <field name="value">reponame:Repositório Institucional da UNESP</field>
        <field name="value">instname:Universidade Estadual Paulista (UNESP)</field>
        <field name="value">instacron:UNESP</field>
      </element>
    </element>
    <element name="creator">
      <element name="author">
        <field name="value">de Souza, Luis A.</field>
        <field name="value">Afonso, Luis C. S.</field>
        <field name="value">Ebigbo, Alanna</field>
        <field name="value">Probst, Andreas</field>
        <field name="value">Messmann, Helmut</field>
        <field name="value">Mendel, Robert</field>
        <field name="value">Hook, Christian</field>
        <field name="value">Palm, Christoph</field>
        <field name="value">Papa, João P.</field>
      </element>
    </element>
  </element>
  <element name="bundles" />
  <element name="others">
    <field name="handle" />
    <field name="lastModifyDate">2021-10-23T19:28:01Z</field>
  </element>
  <element name="repository">
    <field name="repositoryType">Repositório Institucional</field>
    <field name="repositoryURL" />
    <field name="institutionType">PUB</field>
  </element>
</metadata>
