Margie and Bill Klesse College of Engineering and Integrated Design
Permanent URI for this community: https://hdl.handle.net/20.500.12588/828
Browsing Margie and Bill Klesse College of Engineering and Integrated Design by Department "Computer Science"
Item: Can Hierarchical Transformers Learn Facial Geometry? (2023-01-13)
Young, Paul; Ebadi, Nima; Das, Arun; Bethany, Mazal; Desai, Kevin; Najafirad, Peyman
Human faces are a core part of our identity and expression, and thus, understanding facial geometry is key to capturing this information. Automated systems that seek to make use of this information must model facial features in a way that makes them accessible. Hierarchical, multi-level architectures can capture the different resolutions of representation involved. In this work, we propose a hierarchical transformer architecture as a means of capturing a robust representation of facial geometry. We further demonstrate the versatility of our approach by using this transformer as a backbone to support three facial representation problems: face anti-spoofing, facial expression representation, and deepfake detection. The combination of effective fine-grained details and global attention representations makes this architecture an excellent candidate for these problems. We conduct numerous experiments, first showcasing the ability of our approach to address common issues in facial modeling (pose, occlusions, and background variation) and to capture facial symmetry, then demonstrating its effectiveness on three supplemental tasks.

Item: Context Modulation Enables Multi-tasking and Resource Efficiency in Liquid State Machines (Association for Computing Machinery, 2023-08-28)
Helfer, Peter; Teeter, Corinne; Hill, Aaron; Vineyard, Craig M.; Aimone, James B.; Kudithipudi, Dhireesha
Memory storage and retrieval are context-sensitive in both humans and animals; memories are more accurately retrieved in the context where they were acquired, and similar stimuli can elicit different responses in different contexts.
Researchers have suggested that such effects may be underpinned by mechanisms that modulate the dynamics of neural circuits in a context-dependent fashion. Based on this idea, we design a mechanism for context-dependent modulation of a liquid state machine, a recurrent spiking artificial neural network. We find that context modulation enables a single network to multitask and requires fewer neurons than when several smaller networks are used to perform the tasks individually.

Item: Experiences in Delivering Online CS Teacher Professional Development (Association for Computing Machinery, 2024-03-07)
Wilde, Jina; Beltran, Emiliano; Zawatski, Michael J.; Fernandez, Amanda S.; Prasad, Priya V.; Yuen, Timothy T.
This paper describes our team's experience in designing and delivering Computer Science for San Antonio (CS4SA), an online teacher professional development (PD) program aimed at empowering educators with computer science (CS) knowledge to increase Latinx participation in CS and STEM education within a large, urban, predominantly Latinx school district in South Texas. This paper highlights the successes, challenges, and lessons learned while facilitating two cohorts of the CS PD program on online platforms during the COVID-19 pandemic. As a result of this program, participants recognized the importance of integrating CS into their classrooms and becoming advocates for the discipline at the high school level.
Additionally, teachers, investigators, and other personnel learned important lessons for enhancing the program's impact through collaboration with district administrators and refinement of the online learning experience.

Item: NEO: Neuron State Dependent Mechanisms for Efficient Continual Learning (Association for Computing Machinery, 2023-04-12)
Daram, Anurag; Kudithipudi, Dhireesha
Continual learning (sequential learning of tasks) is challenging for deep neural networks, mainly because of catastrophic forgetting: the tendency for accuracy on previously trained tasks to drop when new tasks are learned. Although several biologically inspired techniques have been proposed for mitigating catastrophic forgetting, they typically require additional memory and/or computational overhead. Here, we propose a novel regularization approach that combines neuronal activation-based importance measurement with neuron state-dependent learning mechanisms to alleviate catastrophic forgetting in both task-aware and task-agnostic scenarios. We introduce a neuronal state-dependent mechanism driven by neuronal activity traces and selective learning rules, with storage requirements for regularization parameters that grow more slowly with network size than schemes that calculate per-weight importance, whose storage grows quadratically. The proposed model, NEO, achieves performance comparable to other state-of-the-art regularization-based approaches to catastrophic forgetting while operating with reduced memory overhead.
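The storage argument in the NEO abstract above can be illustrated with a minimal sketch: a per-neuron importance vector (storage that grows with the number of neurons) gates weight updates, in contrast to per-weight importance schemes whose storage grows with the number of weights. This is not the authors' implementation; the network sizes, the activity-trace definition, and the `regularized_step` gating rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny one-layer network: n_hid neurons, n_hid * n_in weights.
n_in, n_hid = 8, 16
W = rng.normal(0, 0.1, (n_hid, n_in))

def forward(X, W):
    """ReLU hidden-layer activations."""
    return np.maximum(0.0, X @ W.T)

# 1. Accumulate a per-neuron activity trace on data from the first task.
#    Storage is O(neurons), not O(weights) as in per-weight importance schemes.
X_a = rng.normal(0, 1, (200, n_in))
trace = forward(X_a, W).mean(axis=0)       # shape (n_hid,)
importance = trace / (trace.max() + 1e-8)  # normalized to [0, 1]

# 2. When learning the next task, scale each neuron's incoming weight
#    updates by (1 - importance): highly active neurons change least,
#    which protects what they learned on the earlier task.
def regularized_step(W, grad, lr=0.1):
    gate = (1.0 - importance)[:, None]  # per-neuron learning gate
    return W - lr * gate * grad
```

With this gating, the most important neuron's weights barely move under a gradient step, while unimportant neurons remain fully plastic for the new task.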
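The context-modulation idea in the liquid state machine abstract above can likewise be sketched in a few lines. This is a simplified rate-based (non-spiking) reservoir, not the spiking network the paper describes, and the particular modulation scheme — a fixed random binary mask of active neurons per context — is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # reservoir neurons

# Fixed random recurrent reservoir, scaled for stable echo-state dynamics.
W = 0.9 * rng.normal(0, 1, (N, N)) / np.sqrt(N)
W_in = rng.normal(0, 1, N)

# Hypothetical context signals: each context enables a different
# random subset of reservoir neurons.
contexts = {c: (rng.random(N) < 0.5).astype(float) for c in (0, 1)}

def run_reservoir(u, context):
    """Drive the reservoir with scalar input sequence u under a context mask."""
    mask = contexts[context]
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t) * mask  # context gates activity
        states.append(x.copy())
    return np.array(states)
```

The same input sequence then yields different state trajectories in different contexts, so separate linear readouts trained on these states can perform different tasks with a single shared reservoir — the multitasking effect the abstract reports.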