Ptychography: A brief introduction

Journal of Microscopy · IF 1.9 · JCR Q3 (Microscopy) · CAS Zone 4 (Engineering & Technology)
John Rodenburg
For anyone new to ptychography, the first obstacle to overcome is how to pronounce its name. The author has heard many tortured attempts trying to simultaneously incorporate the ‘p’ with the ‘t’—an impossible task. The answer is very simple: forget the ‘p’—in English it is silent, just as in ‘psychology’. Pronounce it as ‘tykography’.

Ptychography overcomes the two most enduring historical weaknesses of conventional transmission (and reflection) microscopy. It can in principle obtain wavelength-limited resolution, unaffected by lens aberration or the maximum scattering angle imposed by the numerical aperture of the lens. This is especially important for X-ray and electron imaging where, for various intractable reasons, the useable numerical aperture of the available lenses is so small. It can also record the image phase near perfectly, meaning that otherwise transparent objects can be imaged with very high contrast.

Unlike conventional microscopy with lenses, ptychography does not provide a real or virtual image that can be seen directly. Instead, it uses a computer to process a very large quantity of data that bear no relationship to the final image that it ‘reconstructs’. Ordinary microscopists—that is, those who simply want to see a magnified image of their specimen and do not want to understand exactly how the image is computed—can find this circuitous process all rather alienating. First results from the author's group in the early 1990s were widely dismissed by the community. A leading microscopist at the time asserted that he would never believe in an image that came out of a computer. A further problem was that the pictures we could obtain in those days were so small and totally unconvincing.
Ptychography had to wait for Moore's Law to catch up with its greedy data requirements.

However, in the last 10–15 years, ptychography has become the technique of choice for very high-resolution X-ray imaging and tomography. In the last 5 years or so, some extraordinary electron ptychography results have been reported, far surpassing the resolution limit that for so many years had seemed insurmountable using magnetic lenses and aberration correction. Optical microscopy is already wavelength limited, but the very sensitive phase image that ptychography supplies has removed the need for staining or labelling, thus allowing live imaging of biological cells.

The experimental method is deceptively simple. We have a source of radiation which shines upon the specimen. The wavefield at the exit surface of this specimen is then allowed to propagate some distance downstream of the object, where the pattern of scattered intensity is recorded on a two-dimensional detector. It is important to understand that this detector can be as large as we like. It can capture scattering up to large angles, where high-resolution information is expressed. Electron and X-ray lenses can only reliably capture and focus small angles of scatter, which severely limits their resolution.

We then arrange for the specimen and the illumination to be moved laterally relative to one another, whereupon the scattered intensity is recorded again. The process is repeated several times (in practice, this can be as many as 100 or 1000 times) in such a way that each area of interest of the specimen is illuminated at least once.
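The acquisition loop just described can be sketched in a toy simulation. Everything specific here—the Gaussian probe, the grid scan, and the use of a plain Fourier transform as the far-field propagator—is an illustrative assumption, not a model of any real instrument:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy complex-valued specimen (random phase structure stands in for a real object).
N = 128
obj = np.exp(1j * rng.normal(scale=0.5, size=(N, N)))

# Localised illumination ('probe'): an arbitrary Gaussian spot in the centre.
y, x = np.mgrid[:N, :N]
probe = np.exp(-((x - N // 2) ** 2 + (y - N // 2) ** 2) / (2 * 12.0 ** 2))

# Scan positions on a grid whose step (8 px) is much smaller than the probe
# width, so neighbouring illuminated areas overlap strongly.
step = 8
positions = [(dy, dx) for dy in range(-16, 17, step) for dx in range(-16, 17, step)]

# At each position, record only the far-field *intensity* (modulus squared);
# free-space propagation to the detector is modelled by a Fourier transform.
patterns = []
for dy, dx in positions:
    exit_wave = np.roll(obj, (dy, dx), axis=(0, 1)) * probe
    patterns.append(np.abs(np.fft.fft2(exit_wave)) ** 2)

print(len(patterns), patterns[0].shape)  # 25 (128, 128)
```

Each recorded pattern is real and non-negative; the complex exit wave itself is never observed.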
A necessary condition for the computation of the image is that the area of specimen illuminated at any one position must also overlap with at least one other area of the specimen which has also been illuminated.

This overlap is important because it means that the same element (pixel) of the specimen is expressed in more than one scattering pattern, meaning that we have redundancy in the data: we record many more data points than the number of values we need to compute the final image. These ‘extra’ data are of fundamental importance in ptychography.

First, to make an image, we must solve the ‘phase problem’. Every measurement we make—every pixel in every scattering pattern (usually a diffraction pattern)—can only be recorded in intensity. However, the underlying wave impinging on the detector has two numbers associated with it: a modulus and a phase or, equivalently, the real and imaginary components of a complex number. In some imaging techniques, like radio astronomy, the frequency of the wave disturbance is low enough that we can measure its amplitude and its time of arrival (which is encoded in the phase) directly, say by plotting the signal on a cathode ray tube. This is as much as we can ever measure about a propagating wave. If we assemble all the data from many detectors, then we can work backwards to the shape of the source of the waves: that is, an image of the object.

However, to see very small objects, we need to use radiation with a wavelength commensurate with the size of that object, which itself implies a very high frequency wave. For the microscopic radiations (light, X-rays and electrons), there are no detectors that can record directly the phase of such waves: only the intensity (the modulus squared) can be measured. All phase information is lost.

The genius of ptychography is that it recovers this ‘lost phase’ by exploiting the effect that the lateral shift of the illumination/specimen has on the recorded data.
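What ‘losing the phase’ means can be seen in a few lines of numpy. This toy example (the random waves are purely illustrative) builds two entirely different waves that an intensity-only detector cannot tell apart:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two waves with identical modulus but completely different phase...
modulus = rng.random((64, 64))
wave_a = modulus * np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))
wave_b = modulus * np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))

# ...produce exactly the same recorded intensity: the detector measurement
# (modulus squared) cannot distinguish them, so the phase must be recovered
# computationally, e.g. from the redundancy of overlapping illumination positions.
intensity_a = np.abs(wave_a) ** 2
intensity_b = np.abs(wave_b) ** 2
assert np.allclose(intensity_a, intensity_b)
```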
Once we have solved for the phase of the wave over the entire detector, we can use this to generate a computational lens which has a much larger numerical aperture (and hence can achieve much higher resolution) than the very small numerical apertures achieved by short-wavelength (X-ray and electron) lenses.

An essential mathematical constraint is that the two functions that move across one another remain constant during the course of the experiment. However, there is great flexibility in the physical nature of the functions themselves. For example, in Fourier ptychography, one of the functions is the wavefield lying in the back focal plane of a low-resolution microscope, while the other function is the objective aperture lying in the same plane. Tilting the illumination has the effect of shifting the wavefield pattern across the aperture. In this case, the data collection occurs in the image plane, which lies in the Fourier domain of the aperture.

Although nowadays taken for granted, it is not at all obvious that the illumination/specimen shifts used in ptychography should allow for the solution of the phase problem. We can argue that the data set we record is highly constrained because of the overlaps between illumination positions. But does it automatically follow that the relative phases of all the diffraction patterns can be solved for unambiguously? When the author first considered this issue in the late 1980s, the answer was far from clear. It was at that time that Owen Saxton (of the Gerchberg–Saxton phase-retrieval algorithm) suggested that he might consider looking at some work by Walter Hoppe from the late 1960s and early 1970s. This had shown that moving a carefully designed coherent illumination field across a crystal specimen could, in theory, solve for the phase difference between adjacent crystalline reflections. The method was demonstrated using light and a one-dimensional grating.
Hegerl and Hoppe later referred to the scheme as ‘ptychography’ because it required the diffracted beams to be convolved or ‘folded’ into one another. ‘Ptych’ is the ancient Greek for ‘fold’. (Incidentally, it also means—amongst other things—the entrails of an animal and the folds in gently rolling hills.)

For the author, this was a pivotal insight. If relative phases could be found for pairs of diffracted beams, then surely this same concept—moving an illumination field—could be extended to general, non-crystalline objects, hence solving for the phases between all such pairs of beams? For an extended non-crystalline specimen, the diffraction pattern involves interferences between millions of diffracted beams. Nevertheless, it is useful to have a simple model for why ptychography should in principle be able to solve the phase problem. Unfortunately, because the original papers written by Hoppe are difficult to understand (and are in German), those new to the field often find the ‘ptych’ concept rather confusing and irrelevant.

Today, nobody does ptychography in the way it was initially envisaged. Its original applicability is extremely narrow: perfect crystal structures can be easily solved using X-ray methods, so there is no real scientific need for ptychography of crystals. However, in the mid-1990s, electron crystalline ptychography in its original form—interfering pairs of beams in the scanning transmission electron microscope (STEM) configuration—was indeed shown to work.

A much more difficult problem is to reconstruct the specimen function for some general ptychographical data set—that is, one where the data have been scattered from an infinite (and possibly 3D) object which has complicated non-crystalline structure, and the form of the illumination is unknown.

Nowadays, nearly all reconstruction algorithms converge upon a solution iteratively.
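In miniature, such an iterative scheme looks like the following: a toy, PIE-style loop run on simulated noise-free data, with the probe assumed known. The update rule, the scan geometry, and all names are illustrative sketches, not any production beamline algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma, step = 64, 8.0, 4

# Ground-truth phase object and a known Gaussian probe (both toy choices).
truth = np.exp(1j * 0.5 * rng.normal(size=(N, N)))
y, x = np.mgrid[:N, :N]
probe = np.exp(-((x - N // 2) ** 2 + (y - N // 2) ** 2) / (2 * sigma ** 2))

# Strongly overlapping scan: 4 px steps against a probe ~19 px wide (FWHM).
shifts = [(dy, dx) for dy in range(-8, 9, step) for dx in range(-8, 9, step)]

# 'Measured' data: far-field intensities only, one pattern per shift.
data = [np.abs(np.fft.fft2(np.roll(truth, s, axis=(0, 1)) * probe)) ** 2
        for s in shifts]

obj = np.ones((N, N), complex)          # flat starting guess
for _ in range(50):                     # PIE-style sweeps over all positions
    for s, I in zip(shifts, data):
        view = np.roll(obj, s, axis=(0, 1))
        exit_old = view * probe
        F = np.fft.fft2(exit_old)
        # Fourier-modulus projection: keep the current phase estimate,
        # replace the modulus with the measured one.
        F = np.sqrt(I) * np.exp(1j * np.angle(F))
        exit_new = np.fft.ifft2(F)
        # Object update weighted by the (real) probe — illustrative step rule.
        view = view + probe / (probe ** 2).max() * (exit_new - exit_old)
        obj = np.roll(view, (-s[0], -s[1]), axis=(0, 1))

# How well do the modelled intensities now match the 'measured' data?
err = sum(np.abs(np.abs(np.fft.fft2(np.roll(obj, s, axis=(0, 1)) * probe)) ** 2 - I).sum()
          for s, I in zip(shifts, data))
print(err)
```

After a few tens of sweeps the modelled and measured data agree far better than for the flat starting guess, which is exactly the convergence criterion described in the text.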
We assume we know the way that the illumination interacts with the specimen and how the resulting scattered wave propagates to the detector. This might have to include modelling scattering from multiple layers of a thick specimen. Indeed, it was a major advance in ptychography to realise that its data could be used to solve for 3D structures. At any particular iteration, we have an ongoing estimate of the specimen function and the illumination function. We then calculate the intensity of the set of diffraction patterns we would expect these functions to generate. Of course, if our estimated functions are not the same as their actual counterparts, the modelled data will not be the same as the real data. We use this difference to guide us to a new estimate of the specimen and illumination functions and then repeat the process iteratively until the real and the estimated data match one another. There are a great number of ways to implement this sort of scheme in practice.

We also mention that there are two non-iterative ‘direct’ inversion methods which were developed during the 1990s: the Wigner Distribution Deconvolution (WDD) and the single-sideband method (SSB). These are still being used by some workers. They have some distinct advantages (and also some limitations), but we do not have space to describe them here.

The first iterative approach to the reconstruction problem (called the ptychographic iterative engine, ‘PIE’) was published in 2004. This was shown to work experimentally with hard X-rays in 2007 and subsequently led to an explosive interest in ptychography at X-ray synchrotrons around the world. Although ptychography applies to any wavelength, there were a number of reasons why it made such a large and immediate impact in the field of X-ray imaging. First, the gain in resolution beyond the capabilities of a typical X-ray lens was about a factor of 5.
Second, the phase image provided by ptychography is ideal for tomography; phase is cumulative as the beam passes through the object, and so it gives a linear measure of how much material density the beam has passed through. X-ray ‘ptycho-tomography’ is now a standard technique at many beamlines. Third, the single-photon-counting hard X-ray detectors available were much more efficient than any electron detector at that time. Ptychography had come of age.

Part of the beauty of ptychography is that the mathematics of the reconstruction process can be applied to any wavelength of microscopic imaging. Once the inverse problem had been solved, it quickly found applications in light, electron, EUV and terahertz imaging.

As mentioned above, ptychography relies on redundancy in the data we record to solve the phase problem. In fact, in certain experimental configurations, this redundancy can be huge: we can in principle obtain a 2D diffraction pattern for every single pixel in the (2D) object/image plane. In electron imaging, this is now referred to as ‘4D STEM’. If we record a 4D data set to solve for a 2D image, we clearly have a super-abundance of this ‘extra’ data. We can use this redundancy to greatly enhance the capabilities of the technique. There is no space here to describe all the advances in the technique that use these ‘extra’ data. Suffice it to say that key developments over the last 10–15 years have included methods for removing partial coherence in the source and illumination optics, methods for coping with—and indeed computationally reversing—3D multiple scattering effects (important in electron imaging, where scattering is very strong), and methods to retrospectively correct errors that occurred during the data collection.

But the story is far from over. There is still a lot of work to do to make ptychography as easy to use as a conventional microscope.
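The super-abundance of a 4D data set described above is easy to put in numbers (a purely illustrative count, not any particular instrument):

```python
# A hypothetical 4D-STEM scan: a 2D grid of probe positions, with a full
# 2D diffraction pattern recorded at each one.
scan = 256 * 256            # probe positions (one per image pixel)
detector = 128 * 128        # detector pixels per diffraction pattern
measurements = scan * detector

# The complex 2D image we want has two unknowns (modulus and phase) per pixel.
unknowns = 2 * scan

print(measurements / unknowns)  # 8192.0: thousands of measurements per unknown
```

It is this enormous ratio of measurements to unknowns that pays for probe recovery, partial-coherence correction and the other refinements listed above.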
It is now seen as a standard technique at dedicated X-ray beamlines, but even then, a user needs a lot of understanding of the technique to optimise results. Electron ptychography is much more difficult and is a very long way from being accessible to non-specialists, even though it holds great promise: the increased resolution it provides has made it possible to image for the first time atomic vibrations and the bonding of atoms.

So, ‘ptychography’ is an irritating word for a technique that is revolutionising microscopy over all wavelengths, both photon and electron. Its capabilities continue to expand very quickly. Ironically, modern incarnations of it bear little or no relationship to the original concept described by its name: but the word is now baked into the literature. The applications of ptychography are so wide that it would be very hard to come up with a single term that could include all its incarnations. I think we must learn to live with this wretched name forever: but whatever you do, please don't try to pronounce that ‘p’!

Journal of Microscopy, vol. 300, no. 2, pp. 153–155. Published 2025-08-20. DOI: 10.1111/jmi.70025. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/jmi.70025


Ptychography: A brief introduction

For anyone new to ptychography, the first obstacle to overcome is how to pronounce its name. The author has heard many tortured attempts trying to simultaneously incorporate the ‘p’ with the ‘t’—an impossible task. The answer is very simple: forget the ‘p’—in English it is silent, just as in ‘psychology’. Pronounce it as ‘tykography’.

Ptychography overcomes the two most enduring historical weaknesses of conventional transmission (and reflection) microscopy. It can in principle obtain wavelength limited resolution, unaffected by lens aberration or the maximum scattering angle imposed by the numerical aperture of the lens. This is especially important for X-ray and electron imaging where, for various intractable reasons, the useable numerical aperture of the available lenses is so small. It can also record the image phase near perfectly, meaning that otherwise transparent objects can be imaged with very high contrast.

Unlike conventional microscopy with lenses, ptychography does not provide a real or virtual image that can be seen directly. Instead, it uses a computer to process a very large quantity of data that bear no relationship to the final image that it ‘reconstructs’. Ordinary microscopists—that is, those who simply want to see a magnified image of their specimen and do not want to understand exactly how the image is computed—can find this circuitous process all rather alienating. First results from the author's group in the early 1990s were widely dismissed by the community. A leading microscopist at the time asserted that he would never believe in an image that came out of a computer. A further problem was that the pictures we could obtain in those days were so small and totally unconvincing. Ptychography had to wait for Moore's Law to catch up with its greedy data requirements.

However, in the last 10–15 years, ptychography has become the technique of choice for very high-resolution X-ray imaging and tomography. In the last 5 years or so, some extraordinary electron ptychography results have been reported, far surpassing the resolution limit that for so many years had seemed insurmountable using magnetic lenses and aberration correction. Optical microscopy is already wavelength limited, but the very sensitive phase image that ptychography supplies has removed the need for staining or labelling, thus allowing live imaging of biological cells.

The experimental method is deceptively simple. We have a source of radiation which shines upon the specimen. The wavefield at the exit surface of this specimen is then allowed to propagate some distance downstream of the object, where the pattern of scattered intensity is recorded on a two-dimensional detector. It is important to understand that this detector can be as large as we like. It can capture scattering up to large angles, where high-resolution information is expressed. Electron and X-ray lenses can only reliably capture and focus small angles of scatter, which severely limits their resolution.

We then arrange for the specimen and the illumination to be moved laterally relative to one another, whereupon the scattered intensity is recorded again. The process is repeated several times (in practice, this can be as many as 100 or 1000 times) in such a way that each area of interest of the specimen is illuminated at least once. A necessary condition for the computation of the image is that the area of specimen illuminated at any one position must also overlap with at least one other area of the specimen which has also been illuminated.
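To get a feel for the scan geometry, the overlap condition can be sketched numerically. All numbers below (probe diameter, step size, grid extent) are hypothetical values chosen purely for illustration:

```python
# Hypothetical scan geometry: a round illumination spot of diameter 10 px,
# stepped 4 px between positions, so neighbouring spots overlap substantially.
probe_diameter = 10.0   # illumination spot size (pixels) -- assumed
step = 4.0              # lateral shift between scan positions (pixels) -- assumed

# Linear overlap fraction between adjacent circular illumination spots
overlap = 1.0 - step / probe_diameter
print(f"linear overlap: {overlap:.0%}")   # linear overlap: 60%

# A simple raster grid of scan positions covering a patch of the specimen;
# each position yields one recorded scattering pattern.
positions = [(x * step, y * step) for y in range(8) for x in range(8)]
print(len(positions), "diffraction patterns recorded")   # 64
```

Any step smaller than the spot diameter satisfies the overlap condition; in practice much denser overlaps (60–80%) are common because, as discussed next, the resulting redundancy is what makes the reconstruction possible.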

This overlap is important because it means that the same element (pixel) of the specimen is expressed in more than one scattering pattern, meaning that we have redundancy in the data: we record many more data than the number of numbers we need to compute the final image. These ‘extra’ data are of fundamental importance in ptychography.

First, to make an image, we must solve the ‘phase problem’. Every measurement we make—every pixel in every scattering pattern (usually a diffraction pattern)—can only be recorded in intensity. However, the underlying wave impinging on the detector has two numbers associated with it: a modulus and a phase or, equivalently, the real and imaginary components of a complex number. In some imaging techniques, like radio astronomy, the frequency of the wave disturbance is low enough so that we can measure its amplitude and its time of arrival (which is encoded in the phase) directly, say by plotting the signal on a cathode ray tube. This is as much as we can ever measure about a propagating wave. If we assemble all the data from many detectors, then we can work out backwards the shape of the source of the waves: that is, an image of the object.

However, to see very small objects, we need to use a radiation with a wavelength concomitant with the size of that object, which itself implies a very high frequency wave. For the microscopic radiations (light, X-ray and electrons), there are no detectors that can record directly the phase of such waves: only the intensity (the modulus squared) can be measured. All phase information is lost.
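The loss of phase at the detector can be seen in a two-line numerical sketch (the modulus and phase values here are arbitrary, chosen only for illustration):

```python
import numpy as np

# A complex wave at one detector pixel: modulus 2, phase pi/3 (assumed values).
wave = 2.0 * np.exp(1j * np.pi / 3)

# The detector records only the intensity, i.e. the modulus squared ...
intensity = np.abs(wave) ** 2          # approximately 4.0 -- only the modulus survives

# ... so any wave with the same modulus but a different phase is indistinguishable.
other = 2.0 * np.exp(1j * 1.234)
print(np.isclose(np.abs(other) ** 2, intensity))   # True
```

Infinitely many different waves therefore map onto the same measured number, which is precisely why extra information—in ptychography, the overlapping illumination positions—is needed to pin the phase down.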

The genius of ptychography is that it recovers this ‘lost phase’ by exploiting the effect that the lateral shift of the illumination/specimen has on the recorded data. Once we have solved for the phase of the wave over the entire detector, we can use this to generate a computational lens which has a much larger numerical aperture (and hence can achieve much higher resolution) than the very small numerical apertures achieved by short wavelength (X-ray and electron) lenses.

An essential mathematical constraint is that the two functions that move across one another remain constant during the course of the experiment. However, there is great flexibility in the physical nature of the functions themselves. For example, in Fourier ptychography, one of the functions is the wavefield lying in the back focal plane of a low-resolution microscope, while the other function is the objective aperture lying in the same plane. Tilting the illumination has the effect of shifting the wavefield pattern across the aperture. In this case, the data collection occurs in the image plane which lies in the Fourier domain of the aperture.

Although nowadays taken for granted, it is not at all obvious that the illumination/specimen shifts used in ptychography should allow for the solution of the phase problem. We can argue that the data set we record is highly constrained because of the overlaps between illumination positions. But then does it automatically follow that the relative phases of all the diffraction patterns can be solved for unambiguously? When the author first considered this issue in the late 1980s, the answer was far from clear. It was at that time that Owen Saxton (as of the Gerchberg and Saxton phase retrieval algorithm) suggested that he might consider looking at some work by Walter Hoppe from the late 1960s and early 1970s. This had shown that moving a carefully designed coherent illumination field across a crystal specimen could, in theory, solve for the phase difference between adjacent crystalline reflections. The method was demonstrated using light and a one-dimensional grating. Hegerl and Hoppe later referred to the scheme as ‘ptychography’ because it required the diffracted beams to be convolved or ‘folded’ into one another. ‘Ptych’ is the ancient Greek for ‘fold’. (Incidentally, it also means—amongst other things—the entrails of an animal and the folds in gently rolling hills).

For the author, this was a pivotal insight. If relative phases could be found for pairs of diffracted beams, then surely this same concept—moving an illumination field—could be extended to general, non-crystalline objects, hence solving for the phases between all such pairs of beams? For an extended non-crystalline specimen, the diffraction pattern involves interferences between millions of diffracted beams. Nevertheless, it is useful to have a simple model for why ptychography should in principle be able to solve the phase problem. Unfortunately, because the original papers written by Hoppe are difficult to understand (and are in German), those new to the field often find the ‘ptych’ concept rather confusing and irrelevant.

Today, nobody does ptychography in the way it was initially envisaged. Its original applicability is extremely narrow: perfect crystal structures can be easily solved using X-ray methods so there is no real scientific need for ptychography of crystals. However, in the mid-1990s, electron crystalline ptychography in its original form—interfering pairs of beams in the scanning transmission electron microscope (STEM) configuration—was indeed shown to work.

A much more difficult problem is to reconstruct the specimen function for some general ptychographical data set—that is, one where the data have been scattered from an infinite (and possibly 3D) object which has complicated non-crystalline structure, and the form of the illumination is unknown.

Nowadays nearly all reconstruction algorithms converge upon a solution iteratively. We assume we know the way that the illumination interacts with the specimen and how the resulting scattered wave propagates to the detector. This might have to include modelling scattering from multiple layers of a thick specimen. Indeed, it was a major advance in ptychography to realise that its data could be used to solve for 3D structures. At any particular iteration, we have an ongoing estimate of the specimen function and the illumination function. We then calculate the intensity of the set of diffraction patterns we would expect these functions to generate. Of course, if our estimated functions are not the same as their actual counterparts, the modelled data will not be the same as the real data. We use this difference to guide us to a new estimate of the specimen and illumination functions and then repeat the process iteratively until the real and the estimated data match with one another. There are a great number of ways to implement this sort of scheme in practice.
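The loop described above can be sketched in a few lines of NumPy. Everything here is a toy model under stated assumptions—a known probe, a small periodic weak-phase object, four overlapping positions, and a simple PIE-style modulus-replacement update—intended only to show the shape of the iteration, not any group's production algorithm:

```python
import numpy as np

N, step = 32, 8                       # object/detector size and scan step (pixels)
yy, xx = np.mgrid[:N, :N]

# Assumed ground truth: a weak phase object and a soft Gaussian probe.
obj_true = np.exp(1j * 0.5 * np.sin(2 * np.pi * xx / N) * np.sin(2 * np.pi * yy / N))
probe = np.exp(-((xx - N / 2) ** 2 + (yy - N / 2) ** 2) / (2 * 6.0 ** 2))

shifts = [(0, 0), (0, step), (step, 0), (step, step)]   # overlapping positions

def shifted(o, s):                    # the object as seen under a shifted illumination
    return np.roll(o, shift=(-s[0], -s[1]), axis=(0, 1))

# 'Record' only intensities, as a real detector would.
data = [np.abs(np.fft.fft2(probe * shifted(obj_true, s))) ** 2 for s in shifts]

def misfit(o):                        # how badly an estimate fits the measured moduli
    return sum(np.mean((np.abs(np.fft.fft2(probe * shifted(o, s))) - np.sqrt(I)) ** 2)
               for s, I in zip(shifts, data))

obj = np.ones((N, N), dtype=complex)  # start from a featureless guess
res0 = misfit(obj)
for _ in range(100):
    for s, I in zip(shifts, data):
        o_v = shifted(obj, s)
        psi = probe * o_v                                 # estimated exit wave
        Psi = np.fft.fft2(psi)
        Psi = np.sqrt(I) * np.exp(1j * np.angle(Psi))     # impose measured modulus
        dpsi = np.fft.ifft2(Psi) - psi                    # modelled vs. 'measured' gap
        o_v = o_v + np.conj(probe) * dpsi / (np.abs(probe) ** 2).max()
        obj = shifted(o_v, (-s[0], -s[1]))                # write the update back
print(misfit(obj) < res0)             # the estimate now fits the data far better
```

With the probe known and the object fully covered, this toy converges readily; real reconstructions must also update the illumination function, handle noise and partial coherence, and correct positioning errors, which is where the many practical variants differ.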

We also mention that there are two non-iterative ‘direct’ inversion methods which were developed during the 1990s: the Wigner Distribution Deconvolution (WDD) and the single sideband method (SSB). These are still being used by some workers. They have some distinct advantages (and also some limitations), but we do not have space to describe them here.

The first iterative approach to the reconstruction problem (called the ptychographic iterative engine, ‘PIE’) was published in 2004. This was shown to work experimentally with hard X-rays in 2007 and subsequently led to an explosive interest in ptychography at X-ray synchrotrons around the world. Although ptychography applies to any wavelength, there were a number of reasons why it made such a large and immediate impact in the field of X-ray imaging. First, the gain in resolution beyond the capabilities of a typical X-ray lens was by about a factor of 5. Second, the phase image provided by ptychography is ideal for tomography; phase is cumulative as it passes through the object, and so it gives a linear measure of how much material density the beam has passed through. X-ray ‘ptycho-tomography’ is now a standard technique at many beamlines. Third, the single photon-counting hard X-ray detectors available were much more efficient than any electron detector at that time. Ptychography had come of age.

Part of the beauty of ptychography is that the mathematics of the reconstruction process can be applied to any wavelength of microscopic imaging. Once the inverse problem had been solved, it quickly found applications in light, electron, EUV, and terahertz imaging.

As mentioned above, ptychography relies on redundancy in the data we record to solve the phase problem. In fact, in certain experimental configurations, this redundancy can be huge: we can in principle obtain a 2D diffraction pattern for every single pixel in the (2D) object/image plane. In electron imaging, this is now referred to as ‘4D STEM’. If we record a 4D data set to solve for a 2D image, we clearly have a super-abundance of this ‘extra’ data. We can use this redundancy to greatly enhance the capabilities of the technique. There is no space here to describe all the advances in the technique that use these ‘extra’ data. Suffice it to say that key developments over the last 10–15 years have included methods for removing partial coherence in the source and illumination optics, methods for coping—and indeed computationally reversing—3D multiple scattering effects (important in electron imaging, where scattering is very strong), and methods to retrospectively correct errors that occurred during the data collection.
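The scale of this redundancy is easy to quantify. The acquisition sizes below are assumed purely for illustration, not taken from any particular instrument:

```python
# Hypothetical 4D STEM acquisition: a 64 x 64 scan grid, each position
# yielding a 128 x 128 diffraction pattern (sizes assumed for illustration).
scan, det = 64, 128
measurements = scan * scan * det * det      # intensity values recorded
unknowns = 2 * scan * scan                  # modulus + phase per image pixel
print(measurements // unknowns)             # 8192 -- thousands-fold over-determination
```

It is this enormous over-determination that leaves room to solve for the probe, partial coherence, multiple scattering and positioning errors on top of the image itself.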

But the story is far from over. There is still a lot of work to do to make ptychography as easy to use as a conventional microscope. It is now seen as a standard technique at dedicated X-ray beamlines, but even then, a user needs a lot of understanding of the technique to optimise results. Electron ptychography is much more difficult and is a very long way off from being accessible to non-specialists, even though it holds great promise: the increased resolution it provides has made it possible to image for the first time atomic vibrations and the bonding of atoms.

So, ‘ptychography’ is an irritating word for a technique that is revolutionising microscopy over all wavelengths, both photon and electron. Its capabilities continue to expand very quickly. Ironically, modern incarnations of it bear little or no relationship to the original concept described by its name: but the word is now baked into the literature. The applications of ptychography are so wide it would be very hard to come up with a single term that could include all its incarnations. I think we must learn to live with this wretched name forever: but whatever you do, please don't try to pronounce that ‘p’!

Journal of Microscopy