Use this identifier to cite or link to this item: http://repositorio.unb.br/handle/10482/52392
Files in this item:
página em branco.pdf (8.42 kB, Adobe PDF)
Title: Heuristic once learning for image & text duality information processing
Author(s): Weigang, Li
Martins, Luiz
Ferreira, Nikson
Miranda, Christian
Althoff, Lucas
Pessoa, Walner
Farias, Mylène
Jacobi, Ricardo
Rincon, Mauricio
ORCID: https://orcid.org/0000-0003-1826-1850
https://orcid.org/0000-0003-0089-3905
Author affiliation: University of Brasilia, Department of Computer Science (all authors)
Subject: Heuristics
Convolutional Neural Networks (CNNs)
Computer vision
Deep learning
Image
Date of publication: Dec. 2022
Publisher: IEEE
Reference: WEIGANG, Li et al. Heuristic once learning for image & text duality information processing. In: 2022 IEEE Smartworld, Ubiquitous Intelligence & Computing, Scalable Computing & Communications, Digital Twin, Privacy Computing, Metaverse, Autonomous & Trusted Vehicle (SmartWorld/UIC/ScalCom/DigitalTwin/PriComp/Meta), Haikou, p. 1353-1359, 2022. DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00195. Available at: https://ieeexplore.ieee.org/document/10189581. Accessed: 6 Aug. 2025.
Abstract: Few-shot learning is an important mechanism for minimizing the need to label large amounts of data while taking advantage of transfer learning. To identify image/text inputs with the duality property, this research proposes a “Heuristic once learning (HOL)” mechanism to investigate multi-modal input processing similar to human-like behavior. First, we create an image/text data set of big Latin letters composed of small letters, and another data set composed of Arabic, Chinese and Roman numerals. Secondly, we use Convolutional Neural Networks (CNNs) to pre-train on the letter data set and extract structural features. Thirdly, using the acquired knowledge, a Self-Organizing Map (SOM) and Contrastive Language-Image Pretraining (CLIP) are tested separately using zero-shot learning. Siamese Networks and a Vision Transformer (ViT) are also tested using one-shot learning by knowledge transfer to identify the features of unknown characters. The results show both the potential of and the challenges in realizing HOL, and represent a useful step toward the development of general agents.
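The one-shot pipeline described in the abstract can be illustrated with a minimal sketch: a query glyph is embedded and assigned the label of the nearest support example by cosine similarity. This is an assumption-laden toy, not the paper's implementation: the `embed` function here just flattens and normalizes pixels, standing in for a pretrained CNN/ViT backbone, and the 3x3 "glyphs" are hypothetical stand-ins for the letter data set.

```python
import numpy as np

def embed(img: np.ndarray) -> np.ndarray:
    # Stand-in for a pretrained encoder (e.g. a CNN or ViT backbone);
    # here we simply flatten the pixels and L2-normalize.
    v = img.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def one_shot_classify(query: np.ndarray, support: dict) -> str:
    """Assign the query the label of the most cosine-similar support example."""
    q = embed(query)
    sims = {label: float(q @ embed(ex)) for label, ex in support.items()}
    return max(sims, key=sims.get)

# Toy 3x3 "glyphs": one support example per class (one-shot setting)
bar_v = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]])  # vertical bar
bar_h = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]])  # horizontal bar
support = {"I": bar_v, "-": bar_h}

query = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 1]])  # noisy vertical bar
print(one_shot_classify(query, support))  # "I"
```

With a real encoder, `embed` would be replaced by the frozen pretrained network's feature extractor, which is the sense in which knowledge is transferred to the unseen characters.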
Academic unit: Instituto de Ciências Exatas (IE)
Departamento de Ciência da Computação (IE CIC)
Graduate program: Programa de Pós-Graduação em Informática
License: Copyright © 2022, IEEE. Source: https://s100.copyright.com/AppDispatchServlet?publisherName=ieee&publication=proceedings&title=Heuristic+Once+Learning+for+Image+%26amp%3B+Text+Duality+Information+Processing&isbn=979-8-3503-4655-8&publicationDate=December+2022&author=Li+Weigang&ContentID=10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00195&orderBeanReset=true&startPage=1353&endPage=1359&proceedingName=2022+IEEE+Smartworld%2C+Ubiquitous+Intelligence+%26+Computing%2C+Scalable+Computing+%26+Communications%2C+Digital+Twin%2C+Privacy+Computing%2C+Metaverse%2C+Autonomous+%26+Trusted+Vehicles+%28SmartWorld%2FUIC%2FScalCom%2FDigitalTwin%2FPriComp%2FMeta%29. Accessed: 6 Aug. 2025.
DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00195
Publisher's version: https://ieeexplore.ieee.org/document/10189581/figures#figures
Appears in collections: Artigos publicados em periódicos e afins




Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.