Color Research and Application: Latest Articles

MAMSN: Multi-Attention Interaction and Multi-Scale Fusion Network for Spectral Reconstruction From RGB Images
IF 1.2 · CAS Tier 3 · Engineering & Technology
Color Research and Application · Pub Date: 2025-02-28 · DOI: 10.1002/col.22979
Suyu Wang, Lihao Xu
{"title":"MAMSN: Multi-Attention Interaction and Multi-Scale Fusion Network for Spectral Reconstruction From RGB Images","authors":"Suyu Wang,&nbsp;Lihao Xu","doi":"10.1002/col.22979","DOIUrl":"https://doi.org/10.1002/col.22979","url":null,"abstract":"<div>\u0000 \u0000 <p>In the present era, hyperspectral images have become a pervasive tool in a multitude of fields. In order to provide a feasible alternative for scenarios where hyperspectral imaging equipment is not accessible, numerous researchers have endeavored to reconstruct hyperspectral information from limited spectral measurements, leading to the development of spectral reconstruction (SR) algorithms that primarily focus on the visible spectrum. In light of the remarkable advancements achieved in many computer vision tasks through the application of deep learning, an increasing number of SR works aim to leverage deeper and wider convolutional neural networks (CNNs) to learn intricate mappings of SR. However, the majority of deep learning methods tend to neglect the design of initial up-sampling when constructing networks. While some methods introduce innovative attention mechanisms, their transferability is limited, impeding further improvement in SR accuracy. To address these issues, we propose a multi-attention interaction and multi-scale fusion network (MAMSN) for SR. It employs a shunt-confluence multi-branch architecture to learn multi-scale information in images. Furthermore, we have devised a separable enhanced up-sampling (SEU) module, situated at the network head, which processes spatial and channel information separately to produce more refined initial up-sampling results. To fully extract features at different scales for visible-spectrum spectral reconstruction, we introduce an adaptive enhanced channel attention (AECA) mechanism and a joint complementary multi-head self-attention (JCMS) mechanism, which are combined into a more powerful feature extraction module, the dual residual double attention block (DRDAB), through a dual residual structure. The experimental results show that the proposed MAMSN network outperforms other SR methods in overall performance, particularly in quantitative metrics and perceptual quality.</p>\u0000 </div>","PeriodicalId":10459,"journal":{"name":"Color Research and Application","volume":"50 4","pages":"388-402"},"PeriodicalIF":1.2,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144281415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
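The SEU module's core idea, handling channel (spectral) and spatial information separately during the initial up-sampling, can be illustrated with a minimal PyTorch sketch. This is a sketch of the general pattern only, not the paper's implementation: the class name, the 31-band output, and the layer choices are illustrative assumptions.

```python
# Minimal sketch (not the paper's SEU module): initial up-sampling for
# spectral reconstruction that treats channel and spatial processing
# separately. Band count and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class SeparableUpsampling(nn.Module):
    def __init__(self, in_ch: int = 3, out_ch: int = 31):
        super().__init__()
        # Channel branch: 1x1 conv lifts 3 RGB channels to 31 spectral bands.
        self.channel = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        # Spatial branch: depthwise 3x3 conv refines each band independently.
        self.spatial = nn.Conv2d(out_ch, out_ch, kernel_size=3,
                                 padding=1, groups=out_ch)
        self.act = nn.GELU()

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        x = self.act(self.channel(rgb))   # per-pixel spectral lifting
        return x + self.spatial(x)        # residual spatial refinement

hsi = SeparableUpsampling()(torch.rand(1, 3, 64, 64))
print(hsi.shape)  # torch.Size([1, 31, 64, 64])
```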
Towards a Model of Color Reproduction Difference
IF 1.2 · CAS Tier 3 · Engineering & Technology
Color Research and Application · Pub Date: 2025-02-06 · DOI: 10.1002/col.22969
Gregory High, Peter Nussbaum, Phil Green
{"title":"Towards a Model of Color Reproduction Difference","authors":"Gregory High,&nbsp;Peter Nussbaum,&nbsp;Phil Green","doi":"10.1002/col.22969","DOIUrl":"https://doi.org/10.1002/col.22969","url":null,"abstract":"<p>It is difficult to predict the visual difference between cross-media color reproductions. Typically, visual difference occurs due to the limitations of each output medium's color gamut, the difference in substrate colors, and the gamut mapping operations used to transform the source material. However, for pictorial images the magnitude of the resulting visual difference is also somewhat content dependent. Previously, we created an interval scale of overall visual difference (<span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mi>Δ</mi>\u0000 <mi>V</mi>\u0000 </mrow>\u0000 </semantics></math>) by comparing gamut mapped images side-by-side on a variety of simulated output media. In this paper we use the preexisting visual difference data, together with the known source images, as well as information relating to the output gamuts, to create a model of color reproduction difference which is both output-gamut and source-image dependent. The model generalizes well for a range of images, and therefore performs better than mean <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mi>Δ</mi>\u0000 <msub>\u0000 <mi>E</mi>\u0000 <mn>00</mn>\u0000 </msub>\u0000 </mrow>\u0000 </semantics></math> as a predictor of visual difference. In addition, the inclusion of coefficients derived directly from the source images provides insight into the main drivers of the visual difference.</p>","PeriodicalId":10459,"journal":{"name":"Color Research and Application","volume":"50 4","pages":"372-387"},"PeriodicalIF":1.2,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/col.22969","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144281375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
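The baseline this model is compared against, mean ΔE00 between a source image and its reproduction, is straightforward to compute. A minimal sketch using the colour-science package (its delta_E API is assumed here), with random Lab arrays standing in for a real image pair:

```python
# Baseline the paper improves on: mean CIEDE2000 over corresponding pixels
# of a source image and its gamut-mapped reproduction (both in CIELAB).
# Uses the colour-science package; random Lab values stand in for images.
import numpy as np
import colour

rng = np.random.default_rng(0)
source_lab = rng.uniform([0, -40, -40], [100, 40, 40], size=(64, 64, 3))
reproduction_lab = source_lab + rng.normal(0, 2.0, size=source_lab.shape)

de00 = colour.delta_E(source_lab, reproduction_lab, method="CIE 2000")
print(f"mean dE00 = {de00.mean():.2f}")  # single-number predictor of visual difference
```

The paper's point is that this single mean discards gamut and image-content information, which is exactly what its ΔV model adds back in.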
Unified Color Harmony Model
IF 1.2 · CAS Tier 3 · Engineering & Technology
Color Research and Application · Pub Date: 2025-02-02 · DOI: 10.1002/col.22977
Long Xu, Dongyuan Liu, Su Jin Park, Sangwon Lee
{"title":"Unified Color Harmony Model","authors":"Long Xu,&nbsp;Dongyuan Liu,&nbsp;Su Jin Park,&nbsp;Sangwon Lee","doi":"10.1002/col.22977","DOIUrl":"https://doi.org/10.1002/col.22977","url":null,"abstract":"<div>\u0000 \u0000 <p>Color harmony is an aesthetic sensation evoked by the balanced and coherent arrangement of the colors of visual elements. While traditional methods define harmonious subspaces from geometric relationships or numerical formulas, we employ a data-driven approach to create a unified model for evaluating and generating color combinations of arbitrary sizes. By treating color sequences as linguistic sentences, we construct a color combinations generator using SeqGAN, a generative model capable of learning discrete data through reinforcement learning. The resulting model produces color combinations as much preferred as those by the best models of each size and excels at penalizing color combinations from random sampling. The distribution of the generated colors has more diverse hues than the input data, in contrast to the NLP-based model that predominantly predicts achromatic colors due to exposure bias. The flexible structure of our model allows for simple extension to additional conditions such as group preference or emotional keywords.</p>\u0000 </div>","PeriodicalId":10459,"journal":{"name":"Color Research and Application","volume":"50 4","pages":"346-371"},"PeriodicalIF":1.2,"publicationDate":"2025-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144281459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
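The "color sequences as linguistic sentences" framing presupposes a discrete color vocabulary that a sequence model such as SeqGAN can emit token by token. A minimal sketch of one plausible tokenization; the 512-word vocabulary is an assumption for illustration, not the paper's:

```python
# Core idea of treating color combinations like sentences: quantize each
# color to a token from a discrete vocabulary so a sequence model (SeqGAN
# in the paper) can generate or score palettes of arbitrary length.
import numpy as np

BINS = 8  # per RGB channel -> 8**3 = 512 color "words"

def palette_to_tokens(palette_rgb: np.ndarray) -> list:
    """Map an (N, 3) float palette in [0, 1] to a list of token ids."""
    q = np.clip((palette_rgb * BINS).astype(int), 0, BINS - 1)
    return (q[:, 0] * BINS * BINS + q[:, 1] * BINS + q[:, 2]).tolist()

def tokens_to_palette(tokens: list) -> np.ndarray:
    """Invert the mapping, returning bin-center RGB colors."""
    t = np.asarray(tokens)
    q = np.stack([t // (BINS * BINS), (t // BINS) % BINS, t % BINS], axis=1)
    return (q + 0.5) / BINS

tokens = palette_to_tokens(np.array([[0.9, 0.2, 0.1], [0.1, 0.3, 0.8]]))
print(tokens, tokens_to_palette(tokens).round(2))
```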
Applying Color Appearance Model CAM16-UCS in Image Processing Under HDR Viewing Conditions
IF 1.2 · CAS Tier 3 · Engineering & Technology
Color Research and Application · Pub Date: 2025-01-28 · DOI: 10.1002/col.22972
Xinye Shi, Ming Ronnier Luo, Yuechen Zhu
{"title":"Applying Color Appearance Model CAM16-UCS in Image Processing Under HDR Viewing Conditions","authors":"Xinye Shi,&nbsp;Ming Ronnier Luo,&nbsp;Yuechen Zhu","doi":"10.1002/col.22972","DOIUrl":"https://doi.org/10.1002/col.22972","url":null,"abstract":"<div>\u0000 \u0000 <p>Achieving successful cross-media color reproduction is very important in image processing. The purpose of this study is to accumulate high dynamic range data to verify and modify the CAM16-UCS model. There are two experiments in this study. The first experiment was aimed to collect corresponding data of color patches between colors on a display and the real scene viewed under high dynamic range viewing conditions. The results were used to refine CAM16-UCS model. Six illumination levels (i.e., 15, 100, 1000, 3160, 10 000, and 32 000 lx) and 13 test color samples were used in the experiment. Ten observers adjusted the color patches on the display to match the color samples of the real scene. The visual results showed a clear trend, an increase in the illumination level raised vividness perception (both increase in lightness and colorfulness). However, CAM16-UCS did not give accurate prediction to the visual results, especially in the lightness direction. The model was then refined to achieve satisfactory performance and to truthfully reflect the visual phenomena. However, the effect of the modified model could not achieve successful color image reproduction, especially under low illumination conditions. Experiment 2 was conducted by adjusting the overall lightness and colorfulness of the image. The results were used to extend the model for image reproduction. Also, an independent experiment verified that the image generated by the new model matched the real environment well, indicating that the model can perform well in scene restoration.</p>\u0000 </div>","PeriodicalId":10459,"journal":{"name":"Color Research and Application","volume":"50 4","pages":"335-345"},"PeriodicalIF":1.2,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144281416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
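For context, the forward CAM16-UCS pathway the study verifies and refines maps XYZ under given viewing conditions to CAM16 appearance correlates, then to uniform-space coordinates. A sketch using the colour-science package; the API names and the viewing-condition values are assumptions for illustration, not the paper's refined model:

```python
# Forward CAM16 -> CAM16-UCS pathway (colour-science API assumed).
# Stimulus, white point, and adapting field values are illustrative.
import numpy as np
import colour

XYZ = np.array([19.01, 20.00, 21.78])      # test stimulus, domain [0, 100]
XYZ_w = np.array([95.05, 100.00, 108.88])  # D65-like white point
L_A = 318.31                                # adapting luminance (cd/m^2)
Y_b = 20.0                                  # background relative luminance

spec = colour.XYZ_to_CAM16(XYZ, XYZ_w, L_A, Y_b)   # J, C, h, M, s, Q, ...
JMh = np.array([spec.J, spec.M, spec.h])
ucs = colour.JMh_CAM16_to_CAM16UCS(JMh)            # (J', a', b') coordinates
print(spec.J, spec.M, ucs)
```

The study's refinement targets exactly the lightness (J) and colorfulness (M) correlates above, which drift from visual results as HDR illumination increases.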
Metamer Mismatching Predicts Color Difference Ellipsoids
IF 1.2 · CAS Tier 3 · Engineering & Technology
Color Research and Application · Pub Date: 2025-01-24 · DOI: 10.1002/col.22976
Emitis Roshan, Brian Funt
{"title":"Metamer Mismatching Predicts Color Difference Ellipsoids","authors":"Emitis Roshan,&nbsp;Brian Funt","doi":"10.1002/col.22976","DOIUrl":"https://doi.org/10.1002/col.22976","url":null,"abstract":"<p>It is well known that color-discrimination thresholds vary throughout color space, as is easily observed from the familiar MacAdam ellipses plotted in chromaticity space. But why is this the case? Existing formulations of uniform color spaces (e.g., CIELAB, CIECAM02, CAM16-UCS) and their associated color-difference DE metrics are all models, not theories, based on fits to psychophysical data. While they are of great practical value, they provide no theoretical understanding as to why color discrimination varies as it does. In contrast, the hypothesis advanced and tested here is that the degree of color variability created by metamer mismatching is the primary (although not exclusive) factor underlying the variation in color-discrimination thresholds throughout color space. Not only is it interesting to understand the likely cause of the variation, but knowing the cause may foster the development of more accurate color difference metrics.</p>","PeriodicalId":10459,"journal":{"name":"Color Research and Application","volume":"50 4","pages":"327-334"},"PeriodicalIF":1.2,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/col.22976","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144281458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
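Metamer mismatching itself is easy to demonstrate with linear algebra: perturbing a reflectance inside the null space of the sensor-times-illuminant matrix leaves responses under one illuminant unchanged while changing them under another. A sketch with synthetic smooth curves (not real cone or illuminant data):

```python
# Illustration of metamer mismatching with synthetic data: two reflectances
# with identical sensor responses under illuminant 1 can produce different
# responses under illuminant 2. All curves here are random smooth vectors,
# purely for demonstration; the perturbed reflectance may not be physical.
import numpy as np

rng = np.random.default_rng(1)
n = 31  # wavelength samples, e.g., 400-700 nm in 10 nm steps

def smooth(size):
    x = rng.random((size, n))
    k = np.ones(7) / 7.0
    return np.array([np.convolve(r, k, mode="same") for r in x])

S = smooth(3)                 # 3 x n "cone" sensitivities
E1, E2 = smooth(2)            # two illuminant spectra
r1 = smooth(1)[0]             # base reflectance

A1 = S * E1                   # sensor matrix under illuminant 1 (3 x n)
# Perturb r1 inside the null space of A1: response under E1 is unchanged.
_, _, Vt = np.linalg.svd(A1)
null_dir = Vt[-1]             # direction with A1 @ null_dir ~ 0
r2 = r1 + 0.5 * null_dir

print(np.allclose(A1 @ r1, A1 @ r2))               # True: metamers under E1
print(np.linalg.norm((S * E2) @ (r1 - r2)))        # > 0: mismatch under E2
```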
Categorical color perception shown in a cross-lingual comparison of visual search
IF 1.2 · CAS Tier 3 · Engineering & Technology
Color Research and Application · Pub Date: 2025-01-10 · DOI: 10.1002/col.22964
Elley Wakui, Dimitris Mylonas, Serge Caparos, Jules Davidoff
{"title":"Categorical color perception shown in a cross-lingual comparison of visual search","authors":"Elley Wakui,&nbsp;Dimitris Mylonas,&nbsp;Serge Caparos,&nbsp;Jules Davidoff","doi":"10.1002/col.22964","DOIUrl":"https://doi.org/10.1002/col.22964","url":null,"abstract":"<p>Categorical perception (CP) for colors entails that hues within a category look more similar than would be predicted by their perceptual distance. We examined color CP in both a UK and a remote population (Himba) for newly acquired and long-established color terms. Previously, the Himba language used the same color term for blue and green but now they have labels that match the English terms. However, they still have no color terms for the purple areas of color space. Hence, we were able to investigate a color category boundary that exists in the Himba language but not in English as well as a boundary that is the same for both. CP was demonstrated for both populations in a visual search task for one different hue among 12 otherwise similar hues; a task that eliminated concerns of label matching. CP was found at the color-category boundaries that are specific to each language. Alternative explanations of our data are discussed and, in particular, that it is the task-dependent use of categorical rather than non-categorical (perceptual) color networks which produces CP. It is suggested that categorical networks for colors are bilaterally represented and are the default choice in a suprathreshold similarity judgment.</p>","PeriodicalId":10459,"journal":{"name":"Color Research and Application","volume":"50 4","pages":"301-313"},"PeriodicalIF":1.2,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/col.22964","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144281477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Color Space Conversion Model From CMYK to CIELab Based on Stacking Ensemble Learning
IF 1.2 · CAS Tier 3 · Engineering & Technology
Color Research and Application · Pub Date: 2025-01-10 · DOI: 10.1002/col.22971
Hongwu Zhan, Yifei Zou, Yinwei Zhang, Weiwei Gong, Fang Xu
{"title":"Color Space Conversion Model From CMYK to CIELab Based on Stacking Ensemble Learning","authors":"Hongwu Zhan,&nbsp;Yifei Zou,&nbsp;Yinwei Zhang,&nbsp;Weiwei Gong,&nbsp;Fang Xu","doi":"10.1002/col.22971","DOIUrl":"https://doi.org/10.1002/col.22971","url":null,"abstract":"<div>\u0000 \u0000 <p>This paper develops a method based on a stacking ensemble learning model to achieve more accurate conversion from CMYK colors to LAB colors. The model employs tetrahedral interpolation, radial basis function (RBF) interpolation, and KAN as base learners, with linear regression as the meta-learner. Our findings show that the stacking-based model outperforms single models in accuracy for color conversion. In the empirical study, color blocks were printed and the collected data was measured to train and validate the stacking ensemble learning model. The results show that the stacking-based model achieves superior accuracy in color space conversion tasks. This research has substantial practical implications for enhancing color management technology in the printing industry.</p>\u0000 </div>","PeriodicalId":10459,"journal":{"name":"Color Research and Application","volume":"50 4","pages":"314-326"},"PeriodicalIF":1.2,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144281476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
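The stacking architecture can be sketched with off-the-shelf components. In the sketch below, scipy's LinearNDInterpolator stands in for tetrahedral interpolation (it performs barycentric interpolation over a Delaunay triangulation), RBFInterpolator for the RBF stage, and an MLP substitutes for the KAN base learner; the data is synthetic, and a faithful reproduction would train the meta-learner on out-of-fold base predictions:

```python
# Minimal stacking sketch for CMYK -> Lab conversion (not the paper's exact
# setup): three base learners, linear-regression meta-learner, toy data.
import numpy as np
from scipy.interpolate import LinearNDInterpolator, RBFInterpolator
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
cmyk = rng.random((400, 4))                      # training "patches"
lab = np.column_stack([                          # toy CMYK->Lab ground truth
    100 * (1 - cmyk[:, 3]) * (1 - 0.6 * cmyk.mean(1)),
    60 * (cmyk[:, 1] - cmyk[:, 0]),
    60 * (cmyk[:, 2] - cmyk[:, 1]),
])

simplex = LinearNDInterpolator(cmyk, lab)        # barycentric interpolation
rbf = RBFInterpolator(cmyk, lab, kernel="thin_plate_spline")
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(cmyk, lab)  # stand-in for the KAN

def base_predictions(x):
    p = np.hstack([simplex(x), rbf(x), mlp.predict(x)])
    return np.nan_to_num(p)                      # NaN for points outside hull

meta = LinearRegression().fit(base_predictions(cmyk), lab)  # stacking stage
test = rng.random((5, 4))
print(meta.predict(base_predictions(test)).round(2))
```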
Comparing AI and Human Emotional Responses to Color: A Semantic Differential and Word-Color Association Approach
IF 1.2 · CAS Tier 3 · Engineering & Technology
Color Research and Application · Pub Date: 2025-01-07 · DOI: 10.1002/col.22978
Ling Zheng, Long Xu
{"title":"Comparing AI and Human Emotional Responses to Color: A Semantic Differential and Word-Color Association Approach","authors":"Ling Zheng,&nbsp;Long Xu","doi":"10.1002/col.22978","DOIUrl":"https://doi.org/10.1002/col.22978","url":null,"abstract":"<div>\u0000 \u0000 <p>This study investigates the ability of artificial intelligence (AI) to simulate human emotional responses to color using two established methods: semantic differential (SD) method and word-color association (WCA) approach. The SD method quantifies emotional reactions to colors through bipolar adjective pairs (e.g., warm–cool, heavy–light), while the WCA method explores associations between specific words and colors. AI responses were compared with data from human participants across various demographics. Results show that AI consistently evaluates basic emotional dimensions, such as warm–cool and heavy–light, with high accuracy, often surpassing human consistency. However, AI struggled with more subjective and culturally influenced dimensions like modern–classical and active-passive. In the WCA experiment, AI replicated many general color associations but faced challenges with complex emotions like joy and anticipation. These findings highlight AI's potential in tasks requiring standardized emotional responses but reveal its limitations in capturing nuanced human emotions, especially in culturally sensitive contexts.</p>\u0000 </div>","PeriodicalId":10459,"journal":{"name":"Color Research and Application","volume":"50 4","pages":"286-300"},"PeriodicalIF":1.2,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144281374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
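One common way to quantify this kind of AI-human agreement on SD ratings is a per-dimension correlation between mean ratings. A sketch with fabricated placeholder numbers, purely to show the computation; the dimension names follow the abstract, the values do not come from the paper:

```python
# Per-dimension agreement between AI and human semantic-differential ratings,
# measured with Pearson correlation. All ratings below are placeholders.
import numpy as np
from scipy.stats import pearsonr

dimensions = ["warm-cool", "heavy-light", "modern-classical"]
human = np.array([                 # mean human rating per color, per dimension
    [1.8, -0.5, 2.1, -1.9, 0.3],   # warm-cool
    [0.9, -1.2, 1.5, -0.4, 0.0],   # heavy-light
    [0.2, 1.1, -0.8, 0.5, -1.4],   # modern-classical
])
# Simulated AI ratings: close on basic dimensions, noisier on cultural ones.
ai = human + np.random.default_rng(0).normal(0, [[0.2], [0.3], [1.2]],
                                             human.shape)

for name, h, a in zip(dimensions, human, ai):
    r, p = pearsonr(h, a)
    print(f"{name:18s} r = {r:+.2f} (p = {p:.3f})")
```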
The Colour Technology of Under the Caribbean (Hans Hass, 1954) Through a Comparison of Original Film Sources and Archival Documents
IF 1.2 · CAS Tier 3 · Engineering & Technology
Color Research and Application · Pub Date: 2024-12-29 · DOI: 10.1002/col.22974
Rita Clemens
{"title":"The colour Technology of Under the Caribbean (Hans Hass, 1954) Through a Comparison of Original Film Sources and Archival Documents","authors":"Rita Clemens","doi":"10.1002/col.22974","DOIUrl":"https://doi.org/10.1002/col.22974","url":null,"abstract":"<div>\u0000 \u0000 <p>Hans Hass' <i>Under the Caribbean</i> (1954, LI, AT, DE) was one of the world's first underwater colour films. As such, it provides a unique case study and raises interesting questions about the film's colour technology, combining 35 mm chromogenic negative and 16 mm Kodachrome processes with Technicolor imbibition printing in an interweaving of colour processes. Research into the vast amount of Hass' film material held at the Filmarchiv Austria has not yet revealed any of the original Kodachrome footage of this film nor its opticals. However, based on archival documents, it was possible to confirm and reconstruct the workflow Technicolor adopted for this film. Investigating the production history of <i>Under the Caribbean</i> not only provides film historical knowledge of this specific film, but also film technical insights into the production of other films of the early 50s, that also combine several colour processes. This research will be presented together with a discussion of the restoration possibilities offered by the source material, that is, the cut negative and several release prints.</p>\u0000 </div>","PeriodicalId":10459,"journal":{"name":"Color Research and Application","volume":"50 3","pages":"276-282"},"PeriodicalIF":1.2,"publicationDate":"2024-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143846153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Auto-White Balance Algorithm of Skin Color Based on Asymmetric Generative Adversarial Network
IF 1.2 · CAS Tier 3 · Engineering & Technology
Color Research and Application · Pub Date: 2024-12-24 · DOI: 10.1002/col.22970
Sicong Zhou, Hesong Li, Wenjun Sun, Fanyi Zhou, Kaida Xiao
{"title":"Auto-White Balance Algorithm of Skin Color Based on Asymmetric Generative Adversarial Network","authors":"Sicong Zhou,&nbsp;Hesong Li,&nbsp;Wenjun Sun,&nbsp;Fanyi Zhou,&nbsp;Kaida Xiao","doi":"10.1002/col.22970","DOIUrl":"https://doi.org/10.1002/col.22970","url":null,"abstract":"<p>Skin color constancy under nonuniform correlated color temperatures (CCT) and multiple light sources has always been a hot issue in color science. A more high-quality skin color reproduction method has broad application prospects in camera photography, face recognition, and other fields. The processing process from the 14bit or 16bit RAW pictures taken by the camera to the final output of 8bit JPG pictures is called the image processing pipeline, in which the steps of the auto-white balance algorithm have a decisive impact on the skin color reproduction result. The traditional automatic white balance algorithm is based on hypothetical statistics. Moreover, the estimated illuminant color is obtained through illuminant estimation. However, the traditional grayscale world, perfect reflector, and other auto-white balance algorithms perform unsatisfactorily under non-uniform or complex light sources. The method based on sample statistics proposes a new solution to this problem from another aspect. The deep learning algorithm, especially the generative adversarial network (GAN) algorithm, is very suitable for establishing the mapping between pictures and has an excellent performance in the fields of image reconstruction, image translation, defogging, and coloring. This paper proposes a new solution to this problem. The asymmetric UNet3+ shape generator integrates better global and local information to obtain a more refined correction matrix incorporating details of the whole image. The discriminator is Patch-discriminator, which focuses more on image details by changing the attention field. The dataset used in this article is the Liverpool-Leeds Skin-color Database (LLSD) and some supplementary images, including the skin color of more than 960 subjects under D65 and different light sources. Finally, we calculate the CIEDE2000 color difference and some other image quality index between the test skin color JPEG picture corrected by the auto-white balance algorithm and the skin color under the corresponding D65 to evaluate the effect of white balance correction. The results show that the asymmetric GAN algorithm proposed in this paper can bring higher quality skin color reproduction results than the traditional auto-white balance algorithm and existing deep learning WB algorithm.</p>","PeriodicalId":10459,"journal":{"name":"Color Research and Application","volume":"50 3","pages":"266-275"},"PeriodicalIF":1.2,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/col.22970","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143846076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
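The gray-world baseline the paper contrasts with is a few lines of NumPy: assume the scene averages to gray, then rescale each channel so the channel means equalize. A minimal sketch on a synthetic image; real pipelines apply this to linear RAW data before the rest of the image processing pipeline:

```python
# Traditional gray-world auto-white balance: scale each channel so that
# the per-channel means match their overall mean.
import numpy as np

def gray_world_awb(img: np.ndarray) -> np.ndarray:
    """img: float RGB array in [0, 1], shape (H, W, 3)."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means          # per-channel correction gains
    return np.clip(img * gains, 0.0, 1.0)

rng = np.random.default_rng(0)
scene = rng.random((4, 4, 3)) * np.array([1.0, 0.8, 0.5])  # warm color cast
balanced = gray_world_awb(scene)
print(balanced.reshape(-1, 3).mean(axis=0).round(3))  # near-equal channel means
```

Its failure mode is visible in the assumption itself: under non-uniform or skin-dominated scenes the average is not gray, which is the gap the paper's learned GAN approach targets.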