{"title":"The other kind of perceptual learning","authors":"J. Fiser","doi":"10.1556/LP.1.2009.1.6","DOIUrl":"https://doi.org/10.1556/LP.1.2009.1.6","url":null,"abstract":"Abstract In the present review we discuss an extension of classical perceptual learning called the observational learning paradigm. We propose that studying the process by which humans develop internal representations of their environment requires modifications of the original perceptual learning paradigm, modifications which lead to observational learning. We relate observational learning to other types of learning, mention some recent developments that enabled its emergence, and summarize the main empirical and modeling findings obtained in observational learning studies. We conclude by suggesting that observational learning studies have the potential to provide a unified framework merging human statistical learning, chunk learning and rule learning.","PeriodicalId":88573,"journal":{"name":"Learning & perception","volume":"1 1","pages":"69-87"},"PeriodicalIF":0.0,"publicationDate":"2009-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67139113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual learning as a tool for boosting working memory among individuals with reading and learning disability","authors":"K. Banai, M. Ahissar","doi":"10.1556/LP.1.2009.1.9","DOIUrl":"https://doi.org/10.1556/LP.1.2009.1.9","url":null,"abstract":"The majority of individuals with dyslexia and additional learning difficulties (D-LDs) also perform poorly on many simple auditory discrimination tasks. We trained a group of D-LD teenagers on a series of auditory tasks and assessed their pattern of auditory improvement as well as their generalization to reading-related tasks. We found that the performance of most D-LD participants quickly improved and reached the level of the general age-matched population. Moreover, their pattern of learning specificity (e.g. no transfer from frequency to duration discrimination) was also similar to that previously observed in the general population. When assessed with a battery of verbal tasks on which they initially performed poorly, a pattern of specific transfer was observed. Performance on verbal memory tasks improved to peer level, whereas performance on reading and non-verbal cognitive tasks did not. These findings suggest that D-LDs’ mechanisms of long-term learning are adequate. Moreover, perceptual learning can be used as a tool for improving general working memory skills, whose underlying mechanisms seem to be shared by simple tones and complex speech sounds.","PeriodicalId":88573,"journal":{"name":"Learning & perception","volume":"1 1","pages":"115-134"},"PeriodicalIF":0.0,"publicationDate":"2009-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67139178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How the mind constitutes itself through perceptual learning","authors":"M. Herzog, M. Esfeld","doi":"10.1556/LP.1.2009.1.11","DOIUrl":"https://doi.org/10.1556/LP.1.2009.1.11","url":null,"abstract":"Most theories of perception assume a rigid relationship between objects of the physical world and the corresponding mental representations. We show by a priori reasoning that this assumption is not fulfilled. We claim instead that all object-representation correspondences have to be learned. However, we cannot learn to perceive all objects that there are in the world. We arrive at these conclusions by a combinatory analysis of a fictive stimulus world and the way to cope with its complexity, which is perceptual learning. We show that successful perceptual learning requires changes in the representational states of the brain that are not derived directly from the constitution of the physical world. The mind constitutes itself through perceptual learning.","PeriodicalId":88573,"journal":{"name":"Learning & perception","volume":"1 1","pages":"147-154"},"PeriodicalIF":0.0,"publicationDate":"2009-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67139008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MODELING PERCEPTUAL LEARNING: WHY MICE DO NOT PLAY BACKGAMMON","authors":"Elisa M. Tartaglia, K. Aberg, M. Herzog","doi":"10.1556/LP.1.2009.1.12","DOIUrl":"https://doi.org/10.1556/LP.1.2009.1.12","url":null,"abstract":"Perceptual learning is often considered one of the simplest and most basic forms of learning in general. Accordingly, it is usually modeled with simple and basic neural networks, which show good results in grasping the empirical data. Simple meets simple. Complex forms of perception and learning are, then, thought to rely on these simple networks. Here, we will argue that this simplicity is in fact the Achilles heel of models of perceptual learning. We propose, instead, that perceptual learning of simple stimuli cannot be modeled with simple networks. We will review some of the empirical results leading to this conclusion.","PeriodicalId":88573,"journal":{"name":"Learning & perception","volume":"1 1","pages":"155-163"},"PeriodicalIF":0.0,"publicationDate":"2009-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67139056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SELECTIVENESS OF THE EXPOSURE-BASED PERCEPTUAL LEARNING: WHAT TO LEARN AND WHAT NOT TO LEARN.","authors":"Hoon Choi, Takeo Watanabe","doi":"10.1556/LP.1.2009.1.7","DOIUrl":"10.1556/LP.1.2009.1.7","url":null,"abstract":"<p><p>How does the brain determine what to learn and what not to learn? Previous studies showed that a feature or stimulus on which subjects performed a task was learned, while features or stimuli that were irrelevant to the task were not. This led some researchers to conclude that attention to a stimulus was necessary for the stimulus to be learned. This view was challenged by the discovery of task-irrelevant perceptual learning, in which learning occurred by mere exposure to an unattended and subthreshold stimulus. However, this exposure-based learning does not imply that all presented stimuli are learned. Rather, recent studies showed that this learning is highly selective, as the following new findings demonstrate: learning of an unattended stimulus occurred only (1) when the unattended stimulus was temporally associated with the processing of an attended target, (2) when the unattended stimulus was presented synchronously with reinforcers, such as internal or external rewards, and (3) when the unattended stimulus had subliminal properties. These selectivities suggest some degree of similarity between task-relevant and task-irrelevant perceptual learning, which has motivated a unified model in which both task-relevant and task-irrelevant learning arise from similar or identical mechanisms.</p>","PeriodicalId":88573,"journal":{"name":"Learning & perception","volume":"1 1","pages":"89-98"},"PeriodicalIF":0.0,"publicationDate":"2009-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2865692/pdf/nihms-199037.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"28975635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual learning for flexible decisions in the human brain","authors":"Z. Kourtzi","doi":"10.1556/LP.1.2009.1.8","DOIUrl":"https://doi.org/10.1556/LP.1.2009.1.8","url":null,"abstract":"Abstract In our everyday interactions we encounter a plethora of novel experiences in different contexts that require prompt decisions for successful actions and social interactions. Despite the seeming ease with which we perform these interactions, extracting the key information from the highly complex input of the natural world and deciding how to interpret it is a computationally demanding task for the visual system. Accumulating evidence suggests that the brain solves this problem by combining sensory information and previous knowledge about the environment. Here, we review the neural mechanisms that mediate experience-based plasticity and shape perceptual decisions. We propose that learning plays an important role in the adaptive optimization of visual functions that translate sensory experiences to decisions by shaping neural representations across cortical circuits in the primate brain.","PeriodicalId":88573,"journal":{"name":"Learning & perception","volume":"1 1","pages":"99-114"},"PeriodicalIF":0.0,"publicationDate":"2009-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67139126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual learning of pop-out and the primary visual cortex","authors":"L. Zhaoping","doi":"10.1556/LP.1.2009.1.10","DOIUrl":"https://doi.org/10.1556/LP.1.2009.1.10","url":null,"abstract":"Abstract I propose that perceptual learning of tasks to detect targets among uniform background items involves changing intra-cortical interactions in the primary visual cortex (V1). This is the case for tasks that rely mainly on bottom-up saliency to guide attention quickly to the task-relevant locations, and rely less on top-down knowledge of the stimuli or on other strategies. In particular, suppression between V1 neurons responding to background, rather than target, visual items is predicted to increase over the course of such learning. Various other predictions are derived from this proposal, based on the theory that V1 creates a bottom-up saliency map to guide attention. Different tasks depend to different degrees on attention driven by bottom-up saliency; this leads to differences among findings from various studies of perceptual learning of pop-out or detection tasks.","PeriodicalId":88573,"journal":{"name":"Learning & perception","volume":"1 1","pages":"135-146"},"PeriodicalIF":0.0,"publicationDate":"2009-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67138994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}