折剑头
Following: 11 Followers: 109 Posts: 7,118 Forums followed: 57
The latest measurement of the muon's anomalous magnetic moment is out

Here is the Physics World report.

The muon's theory-defying magnetism is confirmed by new experiment

A long-standing discrepancy between the predicted and measured values of the muon's magnetic moment has been confirmed by new measurements from an experiment at Fermilab in the US. The 200-strong Muon g–2 collaboration has published a result consistent with data collected two decades ago by an experiment bearing the same name at the Brookhaven National Laboratory, also in the US. This pushes the disparity between the experimental value and that predicted by the Standard Model of particle physics up to 4.2σ, suggesting that physicists could be close to discovering new fundamental forces or particles.

The muon, like its lighter and longer-lived cousin the electron, has a magnetic moment due to its intrinsic angular momentum or spin. According to basic quantum theory, a quantity known as the "g-factor" that links the magnetic moment with the spin should be equal to 2. But corrections added in more advanced theory owing to the effects of short-lived virtual particles increase g by about 0.1%. It is this small difference – expressed as the "anomalous g-factor", a = (g – 2)/2 – that is of interest because it is sensitive to virtual particles both known and unknown.

Between 1997 and 2001, the Brookhaven collaboration measured this quantity using a 15 m-diameter storage ring fitted with superconducting magnets that provide a vertical 1.45 T magnetic field. The researchers injected muons into the ring with their spins polarized so that initially the spin axes aligned with the particles' forward direction. Detectors positioned around the ring then measured the energy and direction of the positrons generated by the muons' decay.

Spin precession

Were there no anomalous moment, the magnetic field would cause the muon spins to precess such that their axes remain continuously aligned along the muons' direction of travel. But the anomaly causes the rate of precession to slightly outstrip the muons' orbital motion, so that for every 29 trips around the ring the spin axes undergo about 30 complete rotations. Because the positrons have more energy on average when the spin aligns in a forward direction, the intensity of the most energetic positrons registered by the detectors varies cyclically – dropping to a minimum after about 14.5 revolutions and then rising back up to a maximum. It is this frequency – the number of such cycles per second – that reveals the precise value of a.

When the Brookhaven collaboration announced its final set of results in 2006, it reported a value of a = 0.00116592080 and an error of 0.54 parts per million (ppm) – putting it at odds with theory by between 2.2σ and 2.7σ. That discrepancy then rose as theorists refined their Standard Model predictions, so that it currently stands at about 3.7σ. The latest measurements extend the disparity still further.

The recent measurements were made using the same storage ring as in the earlier work – the 700 tonne apparatus was transported in 2013 over 5000 km (via land, sea and river) from Brookhaven near New York City to Fermilab on the outskirts of Chicago. But while the core of the device remains unchanged, the uniformity of the magnetic field that it produces has been increased by a factor of 2.5 and the muon beams that feed it are purer and more intense.

Avoiding human bias

The international collaboration at Fermilab has so far analyzed the results from one experimental run, carried out in 2018.
It has gone to great lengths to try to avoid any sources of human bias, having even made its experimental clock deliberately out of sync to mask the muons' true precession rate until the group's analysis was complete. Describing its results in Physical Review Letters, alongside more technical details in three other journals, the collaboration reports a new value for a of 0.00116592040 and an uncertainty of 0.46 ppm. On its own, this is 3.3σ above the current value from the Standard Model and slightly lower than the Brookhaven result, but consistent with it. Together, the results from the two labs yield a weighted average of 0.00116592061, an uncertainty of 0.35 ppm and a deviation from theory – thanks to the smaller error bars – of 4.2σ. That is still a little short of the 5σ that physicists normally consider the threshold for discovery.

Tamaki Yoshioka of Kyushu University in Japan praises Fermilab Muon g–2 for its "really exciting result", which, he says, indicates the possibility of physics beyond the Standard Model. But he argues that it is still too early to completely rule out systematic errors as the cause of the disparity, given that the experiments at both labs have used the same muon storage ring. This, he maintains, raises the importance of a rival g–2 experiment under construction at the Japan Proton Accelerator Research Complex in Tokai. Expected to come online in 2025, this experiment will have quite different sources of systematic error.

Alternative theory

Indeed, if a group of theorists going by the name of the Budapest-Marseille-Wuppertal Collaboration is correct, there may be no disparity between experiment and theory at all. In a new study in Nature, it shows how lattice-QCD simulations can boost the contribution of known virtual hadrons so that the predicted value of the muon's anomalous moment gets much closer to the experimental ones. Collaboration member Zoltan Fodor of Pennsylvania State University in the US says that the disparity between the group's calculation and the newly combined experimental result stands at just 1.6σ.

The Fermilab collaboration continues to collect data and plans to release results from at least four more runs. Those, it says, will benefit from a more stable temperature in the experimental hall and a better-centred beam. "These changes, amongst others," it writes, "will lead to higher precision in future publications."
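Two back-of-the-envelope checks on the numbers quoted above (my own arithmetic; the value of γ is a standard fact about these experiments rather than something stated in the article). The "29 trips, 30 rotations" statement follows from the anomalous precession: in the storage ring the spin turns faster than the momentum by the factor

\[ \frac{\omega_s}{\omega_c} = 1 + a\gamma \approx 1 + 0.0011659 \times 29.3 \approx 1 + \frac{1}{29}, \]

where γ ≈ 29.3 is the "magic" Lorentz factor at which these rings are operated, so the spin gains one extra full turn roughly every 29 orbits, with the positron signal dipping to a minimum after about half that, i.e. 14.5 revolutions. The quoted combined precision likewise follows from an inverse-variance combination of the two relative uncertainties,

\[ \sigma_{\rm comb} = \left( \frac{1}{0.54^2} + \frac{1}{0.46^2} \right)^{-1/2} \ {\rm ppm} \approx 0.35\ {\rm ppm}. \]

(As I understand it, the published combination also folds in correlations and a small update to the Brookhaven central value reflecting a newer muon-to-proton moment ratio, so naively averaging the two central values quoted above does not exactly reproduce 0.00116592061.)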
Brookhaven wins

Science and Technology Daily (reporter Liu Xia) — Officials at the US Department of Energy have announced that nuclear physicists' next dream machine is set to be realized in New York. According to a report on the website of the US journal Science on the 9th, the DOE will build the new Electron–Ion Collider (EIC) at Brookhaven National Laboratory, firing beams of high-energy electrons into protons to probe the secrets of the proton's interior. The EIC is expected to cost between 1.6 and 2.6 billion dollars and to come into operation in 2030.

Paul Dabbar, the DOE's Under Secretary for Science, said at a press conference: "This will be the first entirely new collider built in the United States in decades, and it should keep the US at the forefront of nuclear physics for decades to come."

Since the early 1970s physicists have known that every proton is made of three lighter quarks bound together by gluons. Yet although protons are more common than dust, they still harbour plenty of unsolved mysteries, which is why the DOE decided to build a new collider to reveal the proton's secrets. The DOE's Thomas Jefferson National Accelerator Facility in Virginia had also hoped to host the machine, and the site decision now settles the contest between the two laboratories.

Jefferson Lab already studies the proton by firing electron beams at target nuclei rich in protons and neutrons. In 2017 the lab spent 338 million dollars to double the energy of its Continuous Electron Beam Accelerator Facility, and on that basis it would only have needed to add a new proton accelerator to complete the EIC.

Brookhaven took a different route. Its Relativistic Heavy Ion Collider (RHIC) collides nuclei such as gold and copper, producing an ultra-hot quark–gluon plasma. The Universe is believed to have been a quark–gluon plasma roughly a ten-thousandth of a second after the Big Bang. RHIC is a 3.8 km ring consisting of two concentric, counter-circulating accelerator rings; the lab plans to use one of them to accelerate protons and to add an electron accelerator to build the EIC.

Dabbar said that to decide where the EIC would go, DOE officials set up an independent site-selection committee that weighed many factors, including construction cost – a proton accelerator is typically larger and more expensive than an electron accelerator – and Brookhaven was ultimately chosen. Scientists from Jefferson Lab will of course still take part in the design, construction and operation of the new collider.

Dabbar noted that before construction can begin the project still has to clear several hurdles, including approval of a detailed design, cost estimates and a construction schedule, which may take a few years. To make way for the new collider, RHIC, which has been operating since 1999, will be retired in 2024.
The nine-second mystery

An enigmatic nine seconds: the neutron lifetime we cannot pin down. 2017-03-11 09:55, source: Huanqiu Kexue (the Chinese edition of Scientific American). By Geoffrey L. Greene and Peter Geltenbort; translated into Chinese by 张寂潮 and 孙保华.

Geoffrey L. Greene is a professor of physics at the University of Tennessee and holds a joint appointment at the Spallation Neutron Source at Oak Ridge National Laboratory. He has studied the properties of the neutron for more than 40 years. Peter Geltenbort is a scientist at the Institut Laue-Langevin in France, where he uses the world's most intense neutron source to study the fundamental properties of the neutron.

Two precision measurements of the neutron lifetime differ by nine seconds. Does this difference merely reflect measurement error, or does it hint at some deeper unsolved mystery?

Inside an atomic nucleus an ordinary neutron can survive for a very long time, perhaps never decaying at all. A free neutron, however, transforms into other particles in about 15 minutes. That "about" reflects how incompletely physicists understand the neutron: despite much effort, we still have not measured the neutron's lifetime precisely.

The lifetime of the neutron

In principle, measuring the neutron lifetime should be quite straightforward. In beta decay a neutron decays into a proton, an electron and an antineutrino (the antimatter counterpart of the neutrino). The total mass of the decay products is slightly smaller, but the total charge, spin and other conserved quantities are the same as for the original particle. These conserved quantities include "mass–energy": the lost mass is converted into the kinetic energy of the decay products.

Because decay is inherently a random quantum phenomenon, we cannot predict when any particular neutron will decay; we can only measure the average lifetime by studying the decays of large numbers of neutrons.

Researchers use two experimental approaches: one is called the "bottle" method, the other the "beam" method. A bottle experiment confines neutrons in a container and counts how many remain after a given time. The beam method is different: instead of watching neutrons disappear, it measures the average lifetime by looking for the products that appear when neutrons decay.

The bottle method is extremely challenging because neutrons are highly penetrating and pass easily through the walls of most containers. The French team traps ultracold neutrons – neutrons with very low kinetic energy – in containers with extremely smooth surfaces. If the neutrons are slow enough and the walls smooth enough, the neutrons are reflected by the walls and stay inside.

Unfortunately, no bottle is perfect. If neutrons accidentally leak out of the container, we attribute their loss to beta decay as well and obtain the wrong lifetime. To correct the counting, a clever technique is used: if the neutrons are a little slower, or the container a little larger, fewer neutrons hit the walls and fewer are lost. By varying the size of the container and the energy (speed) of the neutrons in a series of runs, one can extrapolate to the result that an ideal container – one with no wall collisions and no losses – would give. The most precise bottle experiment to date was performed at the Institut Laue-Langevin (ILL) in France.

At the neutron research centre of the US National Institute of Standards and Technology (NIST), Greene and other researchers use the beam method to measure the neutron lifetime. A stream of cold neutrons passes through a trap formed by a magnetic field and ring-shaped high-voltage electrodes, which captures any positive ion that enters it. Being electrically neutral, the neutrons pass straight through, but if a neutron decays inside the trap, the positively charged proton it produces is caught. The researchers periodically "open" the trap, sweep out the protons and count them. In principle the capture and detection of the protons is nearly perfect, and only small corrections are needed for decays that might be missed.

Where is the error?

In precision measurements we always quote an uncertainty on the result. In general, the uncertainty of any measurement has two sources: statistical error and systematic error. Statistical error arises because an experiment can only sample a finite number of events; the larger the sample, the more reliable the measurement and the smaller the statistical error.

The second source of uncertainty is systematic error, which originates in imperfections of the measurement process and is much harder to estimate. The best we can do is study in detail every source of error we can think of and assess how much each might affect the final result. In other words, we devote enormous effort to evaluating the "known unknowns".

What worries us most, of course, is an "unknown unknown" hidden in the experiment – a systematic error we do not even know we do not know about. The only way to overcome such errors is to perform another, completely independent measurement using an entirely different experimental method, one that is not subject to the same systematics.

For the neutron lifetime, the latest NIST beam measurement gives 887.7 seconds, with an uncertainty of 3.1 seconds. The ILL bottle experiment, on the other hand, gives 878.5 seconds, with an uncertainty of 1 second.

Each of these is the most precise measurement of its type in the world, yet they differ by about nine seconds – a gap clearly much larger than the uncertainty quoted by either experiment.

One exciting explanation for this difference is that it might reflect physics not yet discovered. For example, suppose the neutron, besides normal beta decay, also decays through some unknown channel that produces no proton; such decays would go undetected by a beam experiment that can only catch protons. We think it more likely, however, that in one experiment (or perhaps both) some systematic error has been underestimated or overlooked.

Why the neutron lifetime matters

Figuring out what we have overlooked would certainly reassure us experimentalists. More importantly, if we can resolve this puzzle and obtain the neutron's true lifetime, we can answer some long-standing fundamental questions about the Universe.

First, a precise timescale for neutron decay helps us understand how the weak force acts on other particles. The weak force is responsible for almost all radioactive decay and for the nuclear fusion that takes place inside the Sun, and the beta decay of the neutron is the simplest and purest example of a weak interaction.

An accurate neutron decay rate also helps test the Big Bang theory of the early Universe. According to this theory, about one second after its birth the Universe was a hot, dense mixture of particles: protons, neutrons, electrons and so on. Roughly three minutes later, the expanding Universe had cooled to a temperature at which protons and neutrons could combine into the simplest nucleus, the deuteron, after which other simple nuclei were synthesized in turn.

This process is known as Big Bang nucleosynthesis. If, as the Universe cooled, the neutron decay rate had greatly outpaced the cooling, then by the time the Universe reached the temperature needed to form nuclei the neutrons would all have decayed, leaving only protons and hence a Universe made almost entirely of hydrogen. If, on the other hand, the neutron lifetime were much longer than the cooling time required for Big Bang nucleosynthesis, the Universe would have an excess of helium, which in turn would affect the formation of heavier elements and hence the course of stellar evolution. The balance between the cooling rate of the Universe and the neutron lifetime is therefore crucial to the formation of the elements.

From astronomical data we can measure the ratio of hydrogen to helium in the Universe, as well as the abundances of deuterium and other light elements. We want to know whether these measurements agree with the predictions of the Big Bang theory, but without a reliable lifetime value such comparisons will always be limited.

One way to resolve the difference between the bottle and beam results is to carry out more experiments of comparable precision that do not share the same systematic errors. Besides the continuing bottle and beam experiments, several teams around the world are exploring other ways to measure the neutron lifetime. A group at the Japan Proton Accelerator Research Complex (J-PARC) is developing a new type of beam experiment that detects the electrons produced in neutron decay rather than the protons. Another exciting development comes from a collaboration of the Petersburg Nuclear Physics Institute in Russia, Los Alamos National Laboratory in the US, the Technical University of Munich in Germany and the Institut Laue-Langevin, which plans to use a new kind of neutron bottle that confines neutrons with magnetic fields rather than material walls. The number of neutrons accidentally leaking out of such a bottle is completely different from that in previous experiments, which means the systematic errors will be completely different as well. We fervently hope that the continuing bottle and beam experiments, together with this new generation of measurements, will finally resolve the puzzle of the neutron lifetime.

This article was supplied by Huanqiu Kexue (the Chinese edition of Scientific American); it has been edited and abridged.
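A quick significance estimate of that nine-second gap (my own arithmetic, not from the article, assuming independent Gaussian uncertainties):

```python
# Discrepancy between the two most precise neutron-lifetime measurements quoted above.
import math

tau_beam, err_beam = 887.7, 3.1      # NIST "beam" result (seconds)
tau_bottle, err_bottle = 878.5, 1.0  # ILL "bottle" result (seconds)

delta = tau_beam - tau_bottle                    # 9.2 s difference
sigma = math.sqrt(err_beam**2 + err_bottle**2)   # ~3.3 s combined uncertainty
print(f"discrepancy = {delta:.1f} s  ({delta/sigma:.1f} sigma)")
# -> discrepancy = 9.2 s  (2.8 sigma)
```

So with these particular numbers the tension is a bit under three standard deviations, consistent with the article's statement that the gap is much larger than either quoted uncertainty.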
The EMC effect finally seems to be getting somewhere

A recent article in Nature offers a new explanation:

Why neutrons and protons are modified inside nuclei

The structure of a neutron or a proton is modified when the particle is bound in an atomic nucleus. Experimental data suggest an explanation for this phenomenon that could have broad implications for nuclear physics.

In 1983, it was discovered that the internal structure of a nucleon — a proton or a neutron — depends on its environment1. That is, the structure of a nucleon in empty space is different from its structure when it is embedded inside an atomic nucleus. However, despite vigorous theoretical and experimental work, the cause of this modification has remained unknown. In a paper in Nature, the CLAS Collaboration2 presents evidence that sheds light on this long-standing issue.

The advent of nuclear physics dates back to the days of Ernest Rutherford, whose experiments in the early 1900s on the scattering of α-particles (helium nuclei) by matter revealed a compact, dense core at the centre of atoms3. Since then, physicists have been working to understand the structure of the atomic nucleus and the dynamics of its component parts. Similarly, since the revelation in the late 1960s that nucleons themselves have internal constituents called quarks4,5, extensive work has focused on studying this deeper underlying structure.

For decades, it was generally thought that nucleons in nuclei were structurally independent of each other and were essentially influenced by the average nuclear field produced by their mutual interactions. However, a lingering question had been whether nucleons were modified when inside a nucleus; that is, whether their structure was different from that of a free nucleon.

In 1983, a startling discovery by the European Muon Collaboration (EMC) at the particle-physics laboratory CERN near Geneva, Switzerland, provided evidence for such a nucleon modification1. The modification, known as the EMC effect, manifested itself as a variation in the momentum distribution of quarks inside the nucleons embedded in nuclei. This result was verified by subsequent experiments at the SLAC National Accelerator Laboratory in Menlo Park, California6,7, and at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) in Newport News, Virginia8.

Although the existence of the EMC effect is now firmly established, its cause has been elusive. Current thinking offers two possible explanations. The first is that all nucleons in a nucleus are modified to some extent because of the average nuclear field. The second is that most nucleons are not modified, but that specific ones are substantially altered by interacting in what are called short-range correlated (SRC) pairs over brief time periods (Fig. 1). The current paper provides definitive evidence in favour of the second explanation.

The EMC effect is measured in experiments in which electrons are scattered from a system of particles, such as a nucleus or a nucleon. The electron energies are selected so that the quantum-mechanical waves associated with the electrons have a wavelength that matches the dimensions of the system of interest. To study the interior of a nucleus, energies of 1–2 GeV (billion electronvolts) are needed. To probe the structure of a smaller system, such as a nucleon, higher energies (smaller wavelengths) are required, in a process called deep inelastic scattering (DIS). This process was central to the discovery of the quark substructure of nucleons4,5, which resulted in the 1990 Nobel Prize in Physics9.
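A quick order-of-magnitude check of those energy scales (my own estimate, not from the article): for a highly relativistic electron the de Broglie wavelength is

\[ \lambda \approx \frac{hc}{E} \approx \frac{1.24\ \mathrm{GeV\,fm}}{E}, \]

so E ≈ 1–2 GeV corresponds to λ ≈ 0.6–1.2 fm, of the order of the spacing between nucleons inside a nucleus, while resolving the quark substructure of an individual nucleon calls for still shorter wavelengths and hence higher energies.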
In DIS experiments, the rate at which scattering occurs is described by a quantity called the scattering cross-section. The magnitude of the EMC effect is determined by plotting the ratio of the per-nucleon cross-section for a given nucleus to that for the hydrogen isotope deuterium as a function of the momentum of the quark that is struck by the electron. If there were no nucleon modification, this ratio would have a constant value of 1. The fact that this ratio decreases as a function of momentum for a given nucleus indicates that individual nucleons in the nucleus are somehow modified. Moreover, the fact that this decrease occurs more rapidly if the mass of the nucleus is increased suggests that the EMC effect is enhanced for heavier nuclei.

The CLAS Collaboration has used electron-scattering data taken at Jefferson Lab to establish a relationship between the size of the EMC effect and the number of neutron–proton SRC pairs in a given nucleus. A key feature of the work is the extraction of a mathematical function that includes the effect of SRC pairs on the scattering cross-section and that is shown to be independent of the nucleus. This universality provides strong confirmation of the correlation between the EMC effect and neutron–proton SRC pairs. The results indicate that the nucleon modification is a dynamical effect that arises from local density variations, as opposed to being a static, bulk property of the medium in which all nucleons are modified by the average nuclear field.

The authors have focused on neutron–proton SRC pairs for a particular reason: it turns out that these pairs are more common than their neutron–neutron or proton–proton counterparts. In this sense, the nucleons are isophobic; that is, similar nucleons are less likely to pair up than are dissimilar nucleons. Therefore, owing to the asymmetry in the numbers of neutrons and protons in medium-mass and heavy nuclei, the probability of protons forming neutron–proton SRC pairs increases roughly as the ratio of neutrons to protons, whereas the probability of neutrons doing this tends to plateau10.

The CLAS Collaboration has used this specific feature to solidify its conclusions by demonstrating a clear difference between the per-proton and per-neutron EMC effects for asymmetric nuclei heavier than carbon. The fact that this distinction emerges directly from the data provides further support for the authors' interpretation that the nucleon modification arises from the formation of SRC pairs.

One implication of the present study is that information deduced about free neutrons from DIS experiments on deuterium or heavier nuclei needs to be corrected for the EMC effect to account for the modification of the neutrons in the nuclear medium. Another consequence concerns current and future experiments in which neutrinos or their antiparticles (antineutrinos) are scattered from asymmetric nuclei. Because protons and neutrons have different quark compositions, and because protons are more strongly affected by the in-medium modification than are neutrons, neutrino and antineutrino scattering cross-sections can show variations that could erroneously be attributed to an effect of some exotic physics — such as deficiencies in the standard model of particle physics, or possible mechanisms for understanding the asymmetry between matter and antimatter in the Universe. Before any such claim can be made, the differences in the EMC effect for protons and neutrons would have to be taken into account.
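For concreteness, the quantity being plotted is usually written (standard notation, not spelled out in the article) as the per-nucleon cross-section ratio

\[ R_A(x) \;=\; \frac{\sigma_A(x)/A}{\sigma_D(x)/2}, \]

where x (Bjorken x) is essentially the fraction of the nucleon's momentum carried by the struck quark, A is the mass number and D denotes deuterium. With no in-medium modification R_A(x) ≈ 1; the EMC effect is the roughly linear fall of R_A(x) below 1 for 0.3 ≲ x ≲ 0.7, and the "size" of the effect quoted for each nucleus is usually the magnitude of the slope dR_A/dx in that region – the quantity the CLAS analysis correlates with the number of neutron–proton SRC pairs.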
Nature 566, 332-333 (2019) doi: 10.1038/d41586-019-00577-0
Surprisingly, the first application of supersymmetry is outside particle physics: supersymmetry improves the beam quality of laser arrays

A report from Physics World:

Supersymmetry boosts beam quality of laser arrays

Principles of supersymmetry have been used to boost the performance of an array of solid-state lasers. The work was led by Mercedeh Khajavikhan at the University of Central Florida in the US. Her team used ideas underpinning the speculative supersymmetry theory of particle physics to suppress unwanted high-frequency modes in their array. The result was a focussed beam intensity that is more than four times greater than that achieved by conventional laser arrays.

Increasing the power of a laser beam normally requires increasing the cross-sectional area of the laser cavity. This is a problem because wider cavities can support multiple transverse modes, which can create turbulence that degrades beam quality.

High-frequency supermodes

To avoid this problem, narrow solid-state laser cavities can be placed in a parallel array. If the cavities are close to one another, the modes in each cavity can couple together through evanescent electric fields that "leak" between cavities. In theory, this allows all the cavities to oscillate in step, meaning the power can be scaled up without the instabilities associated with a wider laser cavity. The problem is that such arrays can support several high-frequency "supermodes", which degrade the laser light and make it difficult to focus the beam to a small spot.

At first glance, supersymmetry has little to do with solid-state lasers. It was first proposed in the 1970s and it attempts to resolve long-standing problems with the Standard Model of particle physics. These include the "hierarchy problem", which is our lack of understanding of why the weak force is much, much stronger than gravity. Supersymmetry attempts to resolve these problems by introducing a high-energy "superpartner" for every known particle.

Khajavikhan and colleagues at Central Florida realized that ideas from supersymmetry could be borrowed to make better lasers. The supermodes that plague cavity arrays could be suppressed, they reasoned, if every mode except the fundamental mode was evanescently coupled to a high-energy "super-supermode". These super-supermodes would be designed to have low quality factors and thus high losses, which would prevent the supermodes from reaching the lasing threshold. The laser could then produce a much higher energy beam than a standard laser array while still emitting light only at the fundamental frequency.

Creativity and ingenuity

Now, after what Khajavikhan describes as "a lot of creativity and ingenuity from our postdoc Mohammad Hokmabadi to implement and validate these abstract ideas", the Central Florida researchers have built a supersymmetric laser array. It comprises nine evanescently coupled quantum-well cavities etched onto a wafer. Five quantum wells form the laser itself and the other four play the role of lossy superpartners.

The researchers compared the device's far-field light output to the output of a laser containing just one quantum well and to that of a standard laser array containing five active cavities but no superpartner. The single quantum well laser produced a beam with a spread of around 24° and relatively low output power. The standard laser array produced 10 times the output power of the single quantum well laser for the same pump intensity. However, supermodes caused the beam to degrade in quality, broadening to a 38° spread.
The supersymmetric laser, however, emitted almost as much power as the standard laser array, but did so in just the fundamental mode, producing a beam divergence of just 11.6°. This produced an intensity at the focus 4.2 times as high as with the standard laser array.

"We foresee many applications of supersymmetric laser arrays in medicine, military, industry and communications," says Khajavikhan: "Wherever there is a need for high power integrated laser arrays having a high beam quality."

Ortwin Hess at Imperial College, who last year helped design a laser that took completely the opposite approach and suppressed turbulence by maximizing the number of modes in a broad-area laser, is impressed with the work of Christodoulides and colleagues: "I think their method is very nice," he says. Hess adds that he is very pleased that researchers have succeeded in taking two different approaches to solving the same problem.

Optical physicist Lan Yang of Washington University in St Louis, Missouri, US, agrees: "The marriage of theory and experiment is quite novel. This is a wonderful, collaborative work," she says. She adds that more work is now needed to check the stability of the laser's intensity: "If they can find a strategy to manipulate the lasing profile, that will be even better."

The research is described in Science.

Tim Wogan is a science writer based in the UK
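To make the "superpartner array" idea above a bit more concrete, here is a small numerical sketch of the discrete SUSY construction used in supersymmetric photonics: starting from the coupling matrix of a hypothetical five-cavity array, a Cholesky-type factorization yields a four-cavity partner sharing every supermode except the fundamental. This is only a toy illustration with made-up parameters (beta, kappa), not the device built by the Central Florida group.

```python
# Toy sketch of a discrete SUSY "superpartner" for a coupled laser-cavity array.
# All parameters are invented for illustration; this is not the published design.
import numpy as np

N = 5
beta = 1.0    # on-site (modal) propagation constant, arbitrary units
kappa = 0.1   # nearest-neighbour evanescent coupling, arbitrary units

# Coupled-mode matrix of the 5-cavity main array (tridiagonal)
H = beta * np.eye(N) + kappa * (np.eye(N, k=1) + np.eye(N, k=-1))
modes = np.linalg.eigvalsh(H)            # 5 supermode eigenvalues, ascending

# With this sign convention the in-phase (fundamental) supermode has the
# largest eigenvalue; the SUSY partner should keep every mode except that one.
E0 = modes[-1]

# Factor E0*I - H = L L^T (a tiny shift keeps the Cholesky well defined).
# In L^T L the last site decouples, leaving an (N-1)-cavity partner array
# that shares all supermodes of H except the fundamental.
M = (E0 + 1e-9) * np.eye(N) - H
L = np.linalg.cholesky(M)
partner = E0 * np.eye(N - 1) - (L.T @ L)[:N - 1, :N - 1]
partner_modes = np.linalg.eigvalsh(partner)

print("main array supermodes:", np.round(modes, 6))
print("partner supermodes:   ", np.round(partner_modes, 6))
assert np.allclose(partner_modes, modes[:-1], atol=1e-4)  # fundamental removed
```

Coupling the real array to a lossy copy of such a partner gives every higher-order supermode a dissipative channel, while the fundamental mode, which has no counterpart in the partner, is left free to lase.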
Using neutrons to detect the salt content of concrete

Neutrons sniff out salt damage inside concrete structures

A compact neutron source has been used to quickly and non-destructively measure the amount of salt inside pieces of concrete. The technique was developed by Yoshie Otake and colleagues at RIKEN in Japan and could help assess salt damage in the world's aging civil infrastructure.

Concrete reinforced with steel beams is a key component of bridges, tunnels and other civil infrastructure, and maintaining its integrity is therefore an important task worldwide. As reinforced concrete ages, the steel beams can corrode as salt penetrates the material. This is a significant problem in coastal regions, where salt is present in sea spray, and also in places where salt is used to melt ice on roads and walkways. Salt incursion is a particular problem in Japan because of the country's densely populated coastline and temperate climate. Japanese engineers are therefore very keen to determine when salt corrosion exceeds safe legal limits so that structures can be repaired or replaced.

Streamlined process

Current methods for corrosion inspection involve boring out core samples from concrete – a time-consuming and potentially destructive process. To streamline inspection, Otake's team has developed a technique that uses a neutron beam to measure the salt content inside concrete. Because it is non-destructive, the technique can monitor changes in salt content over time without the need to bore more and more holes.

Otake and colleagues made their measurements using RIKEN's Accelerator-driven Compact Neutron Source (RANS), which produces a neutron beam by bombarding a beryllium target with 7 MeV protons. These neutrons emerge at high speeds and are then slowed down (or thermalized) by passing them through a polyethylene moderator.

Neutrons are an ideal probe of concrete because they can travel deep into the material with relative ease. Occasionally, however, neutrons will interact with atomic nuclei in the concrete, creating gamma rays that can then escape. The RIKEN researchers use high-resolution germanium detectors to measure the energy distribution of these gamma rays, looking for the distinctive energy peaks associated with the chlorine nuclei in salt.

They tested the technique by sandwiching salt between concrete slabs and trying to detect it. In just 10 min, the team determined the salt content of regions surrounded by up to 18 cm of concrete. "Our feasibility study has shown that neutron beams can indeed be used to measure whether the salt content of a concrete structure is within the legal limits set by the government," says Otake. However, the technique cannot currently be used in the field because RANS is too large to move. "Our next challenge is to build a compact neutron source that is small enough to be readily transported to various infrastructures to conduct measurements."

The original article appeared on the Physics World website.
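A quick order-of-magnitude note on the moderation step mentioned above (my own textbook estimate, not from the article): in an elastic collision with hydrogen the average logarithmic energy loss per collision is ξ ≈ 1, so bringing a fast neutron of energy E₀ down to thermal energy E_th ≈ 0.025 eV takes roughly

\[ n \approx \frac{1}{\xi}\,\ln\frac{E_0}{E_{\rm th}} \approx \ln\!\left(\frac{\text{a few MeV}}{0.025\ \mathrm{eV}}\right) \approx 19 \]

collisions, which is why a modest thickness of a hydrogen-rich material such as polyethylene is enough to thermalize the beam.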
On Travis Norsen's Foundations of Quantum Mechanics

I came across a review of this book on the American Journal of Physics website. As its name suggests, this textbook puts its emphasis on the foundations of quantum mechanics: how should we interpret the various foundational concepts that appear in the theory? Measurement, say, or locality, or... Schrödinger's cat? These concepts appear all the time in popular accounts of quantum mechanics, yet standard quantum mechanics textbooks devote remarkably little space to them – perhaps out of fear that students will go astray, dwelling on these concepts at the expense of learning how to calculate? In any case, I am rather pleased that a textbook devoted to these questions now exists.

Foundations of Quantum Mechanics: An Exploration of the Physical Meaning of Quantum Theory. Travis Norsen. 310 pp. Springer, 2017. Price: $59.99 (softcover), ISBN 978-3-319-65866-7; $44.99 (e-book), ISBN 978-3-319-65867-4. Reviewed by Tim Maudlin.

Travis Norsen's Foundations of Quantum Mechanics could be the spark that ignites a revolution. There is nothing new in it. If those two sentences sound contradictory, they should. How could a book without a novel thesis change everything? Welcome to the world of foundations of quantum mechanics.

Everyone knows, in some vague way, that there exists such a field as foundations of physics in general, and of quantum theory in particular. But it may be unclear exactly who does this work and what they do. One stereotype is that foundations of physics is what some physicists do on the weekends or after they have run out of real physics to do. Also some philosophers do it full time. This last fact is a huge red flashing warning sign that there is something disreputable about the whole business.

In the case of quantum theory, a terminological marker has been created. Quantum theory is the most predictively accurate theory in history. There is no doubt that it is in some sense correct. But even though we have every reason to trust its predictions, there is still another question: how to interpret it. According to this elucidation, quantum theory has everything one could want from a theory save an "interpretation." And whatever it is to interpret a theory, it can't be of any importance to physicists in their everyday life. Quantum theory has gone from triumph to triumph without having an "interpretation." An "interpretation" must be some inessential luxury add-on, like heated seats in a car: it makes you feel warmer and more comfortable, but plays no role in getting you from here to there. On this understanding, worrying about interpreting quantum theory is inessential to pursuing the basic aims of science.

This is where Norsen comes in. Think of Foundations of Quantum Mechanics first and foremost as what it is: a textbook for students. As such, it should not and does not contain any novelty in its content. Textbooks are judged by the logic of their organization, the clarity of their presentation and the lucidity of their style. This one covers many of the topics of a standard introduction to quantum physics, but focuses its attention on the foundational questions: What is there? How does it behave when no one is looking? How does it behave when someone is looking? (Separating these questions indicates that we are doing quantum theory.) Which parts of the mathematical apparatus represent real physical properties and which are merely gauge degrees of freedom? What sort of thing does the wavefunction of a system represent? Standard textbooks gloss over these questions. Norsen dwells on them.

The first chapter covers familiar ground: the structure of pre-quantum theories including Newtonian Mechanics and Maxwellian Electrodynamics. Even here, the presentation foregrounds issues that are commonly ignored.
In these seemingly unproblematic theories, how do we determine the physical ontology (i.e., the basic physical entities) postulated by the theory? A familiar example is the scalar and vector potentials of classical electromagnetism. In certain gauges (e.g., the Coulomb gauge) the potentials react instantaneously to distant states of affairs. But the sting of this appearance of action-at-a-distance is drawn if one denies physical reality to the potentials, regarding them instead as mere calculational devices. Already we find ourselves contemplating questions about what is real, and about whether anything physically real goes faster than light.

The second chapter presents basic quantum phenomena involving interference and entanglement. This will be familiar to any student who has had an introduction to quantum mechanics, but playing around with particular examples encourages developing a "feel" for the theory.

Deviation from the standard textbook begins in the next three chapters. Each of these presents a "problem" confronting attempts to understand quantum mechanics as a physical theory. Chapter 3 discusses the Measurement Problem, Chapter 4 the Locality Problem, and Chapter 5 the Ontology Problem.

The Measurement Problem is the best known of the three. Succinctly: is there any fundamental physical difference between interactions that count as "measurements" and those that don't? A "fundamental" difference shows up when articulating the basic laws of the theory. John von Neumann's axiomatization of quantum mechanics treats measurement as fundamental. The wavefunction evolves by smooth deterministic laws when the system is not being measured and by sudden indeterministic collapses when measured. This approach contradicts the conviction that measurements are physical interactions like any others, governed by the same laws. What counts as a measurement depends on the physical dynamics rather than the other way around.

The Measurement Problem poses a difficulty if measurement is a trigger for wavefunction collapse. But the collapse itself, no matter how triggered, raises a different puzzle: the Locality Problem. This is what bothered Einstein about quantum theory from the beginning. Collapses, as physical events, are wildly non-local. Thus the famous "spooky action-at-a-distance" that Einstein could not abide.

Finally, the Ontology Problem concerns the physical significance of the wavefunction. One way to pull the non-local sting from wavefunction collapse is to regard the wavefunction as a mathematical object that does not represent any physical property of an individual system. Does it rather represent only statistical features of an ensemble of systems? Does it represent any objective, mind-independent fact? Or does it reflect just the information an agent has about the system? All of these options have been defended, and it is easy to see their attraction. The wavefunction of an electron spreads out in space. Does that mean the electron itself spreads out? Or that a huge collection of electrons spreads out? Or that my information about where the electron is dilutes? But if it is not the single electron physically spreading, how can one explain two-slit interference? Further, the mathematical wavefunction is not defined over three-dimensional physical space but over the 3N-dimensional configuration space of N particles. Fields in 3N-dimensional space don't have any evident relation to the three-dimensional world we find ourselves in, the world that physics is meant to explain.
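To put that last point in symbols (standard notation, not quoted from the book): for N particles the wavefunction is a single complex-valued function on configuration space,

\[ \Psi : \mathbb{R}^{3N} \times \mathbb{R} \to \mathbb{C}, \qquad \Psi(\mathbf r_1, \dots, \mathbf r_N, t), \]

and when the particles are entangled it does not factorize into N one-particle waves ψ_k(r_k, t) living in ordinary three-dimensional space; already for two particles it is a field on a six-dimensional space, with no obvious home in the three-dimensional world.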
Norsen recounts how Schrödinger tried to solve this problem by defining a three-dimensional "charge density" for each electron, and then superimposing all of these in a common three-dimensional space. However, the "smeariness" of the charge density could not be quarantined to the microscopic, but amplified up to macroscopic scale. That is the problem of his eponymous cat.

How might one solve the Measurement, Locality and Ontology Problems? These are questions that a typical physics textbook either ignores altogether or tries to finesse. They are also problems that many physics students are intensely interested in. It is here that you least want to hear the command: "Shut up and calculate!". If calculation will not address these problems, what will?

Each problem reflects an unclarity about the physical significance of the mathematical formalism. And making precise statements about the physical ontology and dynamical laws is just what it is to precisely specify a physical theory. Standard quantum textbooks do not exposit a physical theory that lacks an interpretation: they present a predictive formalism without any accompanying physical theory! "Interpreting quantum theory" is actually constructing alternative physical theories that can account for the accuracy of the predictive formalism.

Chapter Six discusses the most famous "interpretation" of all: the Copenhagen Interpretation. It is not a precisely formulated physical theory. It does not say what physically exists and how it behaves. The contemporary Copenhagen Interpretation is just an attitude: the refusal to ask, much less attempt to answer, foundational questions about quantum theory. That is not how Bohr saw things. He thought that deep morals about the nature of the world had been revealed by quantum theory. Einstein found Bohr's exposition largely incomprehensible.

One lovely thing in these chapters, and indeed throughout the whole book, is the judicious but extensive use of quotations from Einstein, Schrödinger, Heisenberg, Born, Bell, Bohr, etc. Their discussions are sharp and clear, and students will delight at reading the masters debating what they have done. Nothing could be more gratifying to an undergraduate physics student than reading Einstein complain about his difficulties with quantum mechanics.

Chapter 6 ends without any clearly articulated physical theory in hand. Here Foundations of Quantum Mechanics departs most dramatically from standard textbook presentations: it presents three clear, mathematically formulated physical theories that aspire to make the same—or nearly the same—predictions as the quantum predictive formalism. Each of these three theories exemplifies a response to Schrödinger's cat problem.

Here's Schrödinger's puzzle. Initially, we assign a wavefunction to the system containing the cat and apparatus. Suppose that wavefunction always evolves in accord with the linear Schrödinger equation. It becomes a superposition of macroscopically different states, some with a live cat and others with it dead. If the wavefunction is complete (i.e., if it represents every physical characteristic of the cat) we have a problem. The cat ends up neither simply dead nor simply alive. As John Bell put it: "Either the wavefunction, as given by the Schrödinger equation, is not everything or it is not right." Regarding the wavefunction as incomplete—as not everything—yields a hidden variables theory. The term is a terrible misnomer.
If the extra variables are to determine the health of the cat then they had better not be hidden, else we would not be able to tell if the cat ends up alive or dead. Regarding the wavefunction as complete but not right (as given by Schrödinger's equation) yields a collapse theory. The Copenhagen Interpretation is often taken to be a collapse theory that ties the collapses to measurements, an option that highlights the measurement problem.

Chapter 7 presents the most famous "hidden variables" theory: the pilot wave theory or Bohmian mechanics. In this theory "particles" are particles—point objects that have definite positions and follow continuous trajectories through space-time. The wavefunction always evolves by Schrödinger's equation and the point particles also evolve deterministically, in accord with the guidance equation. The evolving microscopic particles congregate into macroscopic objects, which are shaped and behave just like the ones we see in the real world. At the end of Schrödinger's experiment, for example, there will either be a cat-shaped collection of particles moving like a live cat or a cat-shaped collection inert like a dead cat. No problem.

If Bohmian mechanics solves Schrödinger's problem so cleanly, why has it not been universally adopted? Because the dynamics of the Bohmian particles is wildly non-local: which way a particle here goes can depend on the disposition of a piece of matter way over there. Bohmian mechanics incorporates the spooky action-at-a-distance that Einstein hated. Chapter 8 exposits Bell's theorem: John Bell's proof that non-locality is unavoidable given the predictions of standard quantum mechanics. That removes the main objection to Bohmian mechanics, although, as Bell says, in the way Einstein would have liked least.

Chapter 9 presents the most highly developed collapse theory, due to GianCarlo Ghirardi, Alberto Rimini and Tullio Weber, universally known as GRW. GRW avoids the difficulty of tying the collapses to measurements by tying them instead to… nothing. The collapses just happen randomly with fixed probability per unit time.

Finally, Chapter 10 investigates escaping Bell's dilemma by maintaining that the wavefunction evolving by Schrödinger's equation is both everything and right. This yields the Many Worlds or Everett Interpretation. It is a famously weird physical theory, not least due to the multiplying worlds. It is, for example, problematic what the probabilistic predictions of the quantum predictive apparatus even mean in this setting.

GRW, Many Worlds and Bohmian mechanics are not presented in any standard quantum mechanics textbook. How adequate is Norsen's exposition? The writing is not just so clear and straightforward that a non-expert can understand it; it is so clear and straightforward that an expert cannot manage to misunderstand it.

What shortcomings does Foundations of Quantum Mechanics have? Norsen, like many others, attributes the electromagnetic gauge of Ludvig Lorenz instead to Hendrik Lorentz. And there are many topics that have been omitted: the PBR theorem, the Bohm-Aharonov effect, field theory, the challenge of Relativity, particle creation and annihilation, etc. But this last complaint is really a call for a successor volume: Advanced Foundations of Quantum Mechanics.

May this book ignite a revolution in the pedagogy of quantum mechanics. Vive la Révolution!
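For readers who want to see the equations the review alludes to (the standard textbook forms and parameter values, not necessarily Norsen's notation): in Bohmian mechanics the wavefunction Ψ(q₁,…,q_N,t) evolves by the Schrödinger equation, while the actual particle positions Q_k follow the guidance equation

\[ \frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,\operatorname{Im}\!\left.\frac{\nabla_k \Psi}{\Psi}\right|_{(Q_1,\dots,Q_N)}, \]

which is what makes the theory both deterministic and manifestly non-local: the velocity of particle k depends on the instantaneous configuration of all the others. In GRW, by contrast, each particle's wavefunction suffers a spontaneous localization to a width of about 10⁻⁷ m at a rate of roughly 10⁻¹⁶ per second, so an isolated microscopic system almost never collapses while anything containing ~10²³ entangled particles collapses almost instantly.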
The muon anomalous magnetic moment calls for new physics

A paper published this year in Physics Reports. Abstract:

We review how the muon anomalous magnetic moment (g−2) and the quest for lepton flavor violation are intimately correlated. Indeed the decay μ→eγ is induced by the same amplitude for different choices of in- and outgoing leptons. In this work, we try to address some intriguing questions such as: Which hierarchy in the charged lepton sector should one have in order to reconcile possible signals coming simultaneously from g−2 and lepton flavor violation? What can we learn if the g−2 anomaly is confirmed by the upcoming flagship experiments at FERMILAB and J-PARC, and no signal is seen in the decay μ→eγ in the foreseeable future? On the other hand, if the μ→eγ decay is seen in the upcoming years, do we need to necessarily observe a signal also in g−2? In this attempt, we generally study the correlation between these observables in a detailed analysis of simplified models. We derive master integrals and fully analytical and exact expressions for both phenomena, and address other flavor violating signals. We investigate under which conditions the observations can be made compatible and discuss their implications. Lastly, we discuss in this context several extensions of the SM, such as the Minimal Supersymmetric Standard Model, Left–Right symmetric model, B–L model, scotogenic model, two Higgs doublet model, Zee–Babu model, 331 model, and Lμ−Lτ, dark photon, seesaw models type I, II and III, and also address the interplay with the μ→eee decay and μ–e conversion.
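The "same amplitude" the abstract refers to is the electromagnetic dipole operator; schematically, and in one common normalization (mine, not necessarily the paper's),

\[ \mathcal{L}_{\rm dip} \;=\; C_{fi}\,\bar\ell_f\,\sigma^{\mu\nu}\,\ell_i\,F_{\mu\nu} + \text{h.c.}, \]

whose flavour-diagonal piece shifts the muon moment, Δa_μ ∝ m_μ Re(C_{μμ}), while the flavour-off-diagonal piece drives μ→eγ with a rate proportional to |C_{eμ}|². Any new particle in the loop that generates one coefficient generically generates the others, which is why a confirmed g−2 anomaly can be correlated with the presence or absence of μ→eγ, μ→eee and μ–e conversion across the models listed above.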
Huang Tao's Topics in Quantum Chromodynamics (《量子色动力学专题》) has been published

The report from IHEP:

Recently, the theoretical physics monograph Topics in Quantum Chromodynamics (《量子色动力学专题》), edited by Prof. 黄涛 of the Theoretical Physics Division, was published by Science Press. Organized as a series of topical chapters, the book gives an in-depth introduction to the foundations of quantum chromodynamics (QCD) and to many topics in its recent development.

Quantum chromodynamics, developed in the early 1970s, is the fundamental theory of the strong interaction in particle physics; its founders D. J. Gross, H. D. Politzer and F. Wilczek received the 2004 Nobel Prize in Physics.

In 2011 Prof. 黄涛 wrote An Introduction to Quantum Chromodynamics (《量子色动力学引论》, Peking University Press), covering the foundations and basic applications of QCD. In the years since, the field has lacked a textbook that reflects recent progress and is suitable for newcomers studying these topics. To meet this need, the authors of the new book selected the fundamental methods and effective theories of QCD for systematic treatment, writing the chapters in topical form, in the hope that the book will both bridge basic theory courses and frontier research, and help graduate students and young researchers grasp the overall framework of QCD and its latest developments.

Fifteen authors contributed to the monograph: 丁亨通, 马滟青, 王由凯, 王伟, 王青, 王新年, 申建明, 吕才典, 朱守华, 吴兴刚, 李湘楠, 陈莹, 赵光达, 郭新恒 and 黄涛, with Prof. 黄涛 responsible for planning and organizing the whole book. All of the authors are long-time theoretical researchers in particle and nuclear physics who have made influential contributions in their own areas. Drawing on years of research experience and a deep understanding of the physics, they carefully organized the material, and after two years of effort the book was published. Its content connects basic theory with the research frontier while remaining reasonably self-contained, with chapters that are relatively independent of one another, making it an essential reference for quickly and systematically mastering the overall framework of QCD and its latest progress.

The publication of this monograph should be of real help to graduate students, young faculty and researchers in China working on particle physics, and on QCD in particular.

About the book: Topics in Quantum Chromodynamics uses a topical format to give an in-depth introduction to the foundations and effective theories of QCD, including the basic features of QCD, lattice QCD, QCD sum rules, chiral perturbation theory, heavy-quark effective theory, soft-collinear effective theory and non-relativistic QCD. It also covers a variety of perturbative QCD calculational techniques, QCD effects in high-energy hadron collisions and heavy-flavour physics, and the theory of hot and dense QCD. The content connects basic theory with the research frontier while remaining reasonably self-contained and modular, making it an essential reference for quickly and systematically mastering the overall framework of QCD and its new developments.