Based on the Sloan Digital Sky Survey DR6 (SDSS) and the Millennium Simulation (MS), we investigate the alignment between galaxies and large-scale structure. For this purpose, we develop two new statistical tools, namely the alignment correlation function and the cos(2θ)-statistic. The former is a two-dimensional extension of the traditional two-point correlation function and the latter is related to the ellipticity correlation function used for cosmic shear measurements. Both are based on the cross-correlation between a sample of galaxies with orientations and a reference sample which represents the large-scale structure. We apply the new statistics to the SDSS galaxy catalog. The alignment correlation function reveals an overabundance of reference galaxies along the major axes of red, luminous (L > ~L*) galaxies out to projected separations of 60 h^-1 Mpc. The signal increases with central galaxy luminosity. No alignment signal is detected for blue galaxies. The cos(2θ)-statistic yields very similar results. Starting from an MS semi-analytic galaxy catalog, we assign an orientation to each red, luminous central galaxy, based on that of the central region of the host halo (with size similar to that of the stellar galaxy). As an alternative, we use the orientation of the host halo itself. We find a mean projected misalignment between a halo and its central region of ~25°. The misalignment decreases slightly with increasing luminosity of the central galaxy. Using the orientations and luminosities of the semi-analytic galaxies, we repeat our alignment analysis on mock surveys of the MS. Agreement with the SDSS results is good if the central orientations are used. Predictions using the halo orientations as proxies for central galaxy orientations overestimate the observed alignment by more than a factor of 2. Finally, the large volume of the MS allows us to generate a two-dimensional map of the alignment correlation function, which shows the reference galaxy distribution to be flattened parallel to the orientations of red luminous galaxies, with axis ratios of ~0.5 and ~0.75 for halo and central orientations, respectively. These ratios are almost independent of scale out to 60 h^-1 Mpc.
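The cos(2θ)-statistic described above admits a compact implementation. The sketch below is illustrative only (the binning, coordinates and function names are our assumptions, not the authors' code): it averages cos(2θ) over shape-reference pairs, where θ is the angle between a galaxy's projected major axis and the direction to a reference galaxy, so a positive mean signals reference galaxies piling up along the major axes.

```python
import numpy as np

def cos2theta_statistic(pos_shape, phi_major, pos_ref, r_bins):
    """Mean cos(2*theta) per projected-separation bin, where theta is the
    angle between a galaxy's projected major axis (position angle phi)
    and the direction to each reference galaxy.  Positive values indicate
    alignment along the major axes; zero indicates isotropy."""
    sums = np.zeros(len(r_bins) - 1)
    counts = np.zeros(len(r_bins) - 1)
    for (x, y), phi in zip(pos_shape, phi_major):
        dx = pos_ref[:, 0] - x
        dy = pos_ref[:, 1] - y
        r = np.hypot(dx, dy)
        theta = np.arctan2(dy, dx) - phi          # angle from the major axis
        c2 = np.cos(2.0 * theta)
        idx = np.digitize(r, r_bins) - 1          # separation bin per pair
        for b in range(len(r_bins) - 1):
            sel = idx == b
            sums[b] += c2[sel].sum()
            counts[b] += sel.sum()
    return sums / np.maximum(counts, 1)
```

With references placed exactly along a galaxy's major axis the statistic returns 1 in the occupied bin, matching the intuition that alignment drives it positive.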
Major interactions are known to trigger star formation in galaxies and alter their color. We study the major interactions in filaments and sheets using SDSS data to understand the influence of large-scale environments on galaxy interactions. We identify the galaxies in filaments and sheets using the local dimension and also find the major pairs residing in these environments. The star formation rate (SFR) and color of the interacting galaxies as a function of pair separation are separately analyzed in filaments and sheets. The analysis is repeated for three volume-limited samples covering different magnitude ranges. The major pairs residing in the filaments show a significantly higher SFR and bluer color than those residing in the sheets up to a projected pair separation of ~50 kpc. We observe a complete reversal of this behavior for both the SFR and color of the galaxy pairs having a projected separation larger than 50 kpc. Some earlier studies report that galaxy pairs align with the filament axis. Such alignment inside filaments indicates anisotropic accretion that may cause these differences. We do not observe these trends in the brighter galaxy samples. The pairs in filaments and sheets from the brighter galaxy samples trace relatively denser regions in these environments. The absence of these trends in the brighter samples may be explained by the dominant effect of the local density over the effects of the large-scale environment.
We examine the possibility of applying the baryonic acoustic oscillation reconstruction method to improve the neutrino mass Σm_ν constraint. Thanks to the Gaussianization of the process, we demonstrate that the reconstruction algorithm could improve the measurement accuracy by roughly a factor of two. On the other hand, the reconstruction process itself becomes a source of systematic error. While the algorithm is supposed to produce the displacement field from a density distribution, various approximations cause the reconstructed output to deviate on intermediate scales. Nevertheless, it is still possible to benefit from this Gaussianized field, given that we can carefully calibrate the "transfer function" between the reconstruction output and the theoretical displacement divergence from simulations. The limitation of this approach is then set by the numerical stability of this transfer function. With an ensemble of simulations, we show that such systematic error could become comparable to statistical uncertainties for a DESI-like survey and be safely neglected for other, less ambitious surveys.
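The "transfer function" calibration described above can be sketched as a band-by-band ratio measured over an ensemble of simulations; the run-to-run scatter of that ratio is what sets the numerical-stability limit the abstract mentions. The array names and binning below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def calibrate_transfer(p_cross_ensemble, p_theory_ensemble):
    """Calibrate T(k) as the mean ratio of the cross power (reconstructed
    output x theoretical displacement divergence) to the theory auto power,
    band by band, over an ensemble of simulations.  Returns (mean T(k),
    run-to-run scatter); the scatter sets the systematic floor of the method.
    Inputs are (n_sims, n_k) arrays of band powers."""
    ratio = np.asarray(p_cross_ensemble, float) / np.asarray(p_theory_ensemble, float)
    return ratio.mean(axis=0), ratio.std(axis=0, ddof=1)
```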
The improvements in the sensitivity of the gravitational wave (GW) network enable the detection of many high-redshift GW sources by third-generation GW detectors. These advancements provide an independent method to probe the large-scale structure of the universe by using the clustering of binary black holes (BBHs). The black hole catalogs are complementary to the galaxy catalogs because of the large redshifts of GW events, which may imply that BBHs are a better choice than galaxies to probe the large-scale structure of the universe and cosmic evolution over a large redshift range. To probe the large-scale structure, we used the sky positions of the BBHs observed by third-generation GW detectors to calculate the angular correlation function and the bias factor of the population of BBHs. To make the method statistically significant, 5000 BBHs are simulated. Moreover, for the third-generation GW detectors, we found that the bias factor can be recovered to within 33% with an observational time of ten years. This method only depends on the GW source-location posteriors; hence, it can be an independent method, compared to the electromagnetic approach, to reveal the formation mechanisms and origin of BBH mergers.
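The angular clustering measurement above rests on a standard pair-count estimator. Below is a minimal sketch of the Landy-Szalay form, assuming raw (unnormalized) pair counts in angular bins; the authors' exact pipeline may differ.

```python
import numpy as np

def landy_szalay(dd, dr, rr, n_d, n_r):
    """Landy-Szalay estimator for the angular correlation function from
    raw pair counts per angular bin:
        w = (DD - 2 f DR + f^2 RR) / (f^2 RR),   f = n_d / n_r,
    where DD, DR, RR are data-data, data-random and random-random pair
    counts and n_d, n_r the numbers of data and random points."""
    f = n_d / n_r
    dd = np.asarray(dd, float)
    dr = np.asarray(dr, float)
    rr = np.asarray(rr, float)
    return (dd - 2.0 * f * dr + f * f * rr) / (f * f * rr)
```

For an unclustered sample (DD and DR scaling like the randoms), the estimator returns zero in every bin, as it should.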
The alignment between satellite and central galaxies serves as a proxy for addressing the issue of galaxy formation and evolution, and has been investigated extensively in observations and theoretical works. Most scenarios indicate that satellites are preferentially located along the major axis of their central galaxy. Recent work shows that the strength of the alignment signal depends on the large-scale environment in observations. We use the publicly released data from EAGLE to determine whether the same effect can be found in the associated hydrodynamic simulation. We find a much stronger environmental dependence of the alignment signal in the simulation. We also explore the evolution of the alignment to address the formation of this effect.
The size distributions of 2D and 3D Voronoi cells, and of the cells of Vp(2,3) (the 2D cut of a 3D Voronoi diagram), are explored, with the single-parameter (re-scaled) gamma distribution playing a central role in the analytical fitting. Observational evidence for a cellular universe is briefly reviewed. A simulated Vp(2,3) map with galaxies lying on the cell boundaries is constructed to compare, as regards general appearance, with the observed CfA map of galaxies and voids, the parameters of the simulation being so chosen as to reproduce the largest observed void size.
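The single-parameter (re-scaled) gamma distribution, p(x) = a^a x^(a-1) e^(-ax)/Γ(a), has mean 1 and variance 1/a for sizes normalized by their mean, so a method-of-moments fit is one line. This is a hedged sketch; the authors' fitting procedure may differ.

```python
import numpy as np

def fit_rescaled_gamma(sizes):
    """Method-of-moments fit of the single-parameter (re-scaled) gamma
    distribution to cell sizes.  After normalizing by the mean (so <x>=1),
    the distribution's variance is 1/a, hence the shape estimate a = 1/var."""
    x = np.asarray(sizes, float)
    x = x / x.mean()
    return 1.0 / x.var()
```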
The Brans-Dicke (BD) theory is the simplest scalar-tensor theory of gravity, which can be considered as a candidate modification of Einstein's theory of general relativity. In this work, we forecast the constraints on BD theory from the CSST galaxy clustering spectroscopic survey, with a magnitude limit of ~23 AB mag for point-source 5σ detection. We generate mock data based on the zCOSMOS catalog and consider the observational and instrumental effects of the CSST spectroscopic survey. We predict galaxy power spectra in the BD theory from z = 0 to 1.5, and the galaxy bias and other systematic parameters are also included. The Markov Chain Monte Carlo technique is employed to find the best fits and probability distributions of the cosmological and systematic parameters. A BD parameter ζ is introduced, which satisfies ζ = ln(1 + 1/ω). We find that the CSST spectroscopic galaxy clustering survey can give |ζ| < 10^-2, or equivalently |ω| > O(10^2) and |Ġ/G| < 10^-13, around the fiducial value ζ = 0. These constraints are almost at the same order of magnitude as the joint constraints using the current cosmic microwave background, baryon acoustic oscillation and Type Ia supernova data, indicating that the CSST galaxy clustering spectroscopic survey will be powerful for constraining the BD theory and other modified gravity theories.
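The mapping between the BD parameter ζ and the coupling ω stated above is elementary and can be checked directly; a trivial sketch of the relation and its inverse:

```python
import math

def zeta_from_omega(omega):
    """BD parameterization used in the forecast: zeta = ln(1 + 1/omega).
    zeta -> 0 as omega -> infinity, i.e. the general-relativity limit."""
    return math.log(1.0 + 1.0 / omega)

def omega_from_zeta(zeta):
    """Inverse mapping, valid for zeta != 0: omega = 1 / (e^zeta - 1)."""
    return 1.0 / math.expm1(zeta)
```

So a bound |ζ| < 10^-2 translates directly into |ω| exceeding roughly 10^2, consistent with the abstract.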
In this work we deal with the axiomatization of cosmology; only recently have we hit upon a new mathematical approach to capitalize on our new set identities for the basic laws of cosmology. Our proposal is to put forward some new laws (e.g., for the formation of black holes). We introduce the concept of axiom cosmology. This principle describes a cosmology that is freed from the notion of induction. We present a large-scale structure model of the universe, which leads to a successful treatment of the problem of the closed versus open universe (because from the outset it is a theorem with a succinct proof). In this paper we prove a non-singular point theorem, which means that a singularity can be defined neither mathematically nor physically. It allows us to overcome the mysterious physical singularity conundrum and explain meaningful antimatter annihilations for general configurations.
We present a GPU-accelerated cosmological simulation code, PhotoNs-GPU, based on an algorithm of the Particle Mesh Fast Multipole Method (PM-FMM), and focus on GPU utilization and optimization. A proper interpolation method for the truncated gravity is introduced to speed up the special functions in kernels. We verify the GPU code in mixed precision and at different levels of the interpolation method on GPU. A run with single precision is roughly two times faster than double precision for current practical cosmological simulations, but it could induce an unbiased small noise in the power spectrum. Compared with the CPU version of PhotoNs and Gadget-2, the efficiency of the new code is significantly improved. With all the optimizations of memory access, kernel functions and concurrency management activated, the peak performance of our test runs achieves 48% of the theoretical speed and the average performance approaches ~35% on GPU.
We investigate a hybrid numerical algorithm aimed at large-scale cosmological N-body simulation for ongoing and future high-precision sky surveys. It makes use of a truncated Fast Multipole Method (FMM) for short-range gravity, incorporating a Particle Mesh (PM) method for the long-range potential, which is applied to deal with extremely large particle numbers. In this work, we present a specific strategy for modifying a conventional FMM by a Gaussian-shaped factor and provide quantitative expressions for the interaction kernels between multipole expansions. Moreover, a proper multipole acceptance criterion for the hybrid method is introduced to solve the potential precision loss induced by the truncation. Such procedures reduce the amount of computation compared to the original FMM and decouple the global communication. A simplified version of the code is introduced to verify the hybrid algorithm, its accuracy and the parallel implementation.
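The Gaussian-shaped truncation of short-range gravity can be illustrated with the standard Ewald/TreePM-style force split, used here as a stand-in for the paper's actual kernel (the erfc form and the splitting scale r_s are our assumptions): the 1/r² force is divided into a short-range part that dies off on scales ≫ r_s (handled by the tree/FMM side) and a smooth long-range remainder (handled by the PM grid).

```python
import math

def split_newtonian_force(r, r_s):
    """Split the (unit-mass, G=1) Newtonian force 1/r^2 into a
    Gaussian-truncated short-range part and a smooth long-range remainder:
        F_short = (1/r^2) * [erfc(r/2r_s) + (r/(r_s*sqrt(pi))) * exp(-r^2/4r_s^2)]
        F_long  = F_full - F_short
    The two parts sum back to the full force by construction."""
    full = 1.0 / r**2
    factor = (math.erfc(r / (2.0 * r_s))
              + r / (r_s * math.sqrt(math.pi)) * math.exp(-r**2 / (4.0 * r_s**2)))
    short = full * factor
    return short, full - short
```

For r ≪ r_s the short-range part carries essentially the full force; for r ≫ r_s it is exponentially suppressed, which is what decouples the tree walk from the global PM communication.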
Constraining neutrino mass remains an elusive challenge in modern physics. Precision measurements are expected from several upcoming cosmological probes of large-scale structure. Achieving this goal relies on an equal level of precision from theoretical predictions of neutrino clustering. Numerical simulations of the non-linear evolution of cold dark matter and neutrinos play a pivotal role in this process. We incorporate neutrinos into the cosmological N-body code CUBEP3M and discuss the challenges associated with pushing to the extreme scales demanded by the neutrino problem. We highlight code optimizations made to exploit modern high-performance computing architectures and present a novel method of data compression that reduces the phase-space particle footprint from 24 bytes in single precision to roughly 9 bytes. We scale the neutrino problem to the Tianhe-2 supercomputer and provide details of our production run, named TianNu, which uses 86% of the machine (13,824 compute nodes). With a total of 2.97 trillion particles, TianNu is currently the world's largest cosmological N-body simulation and improves upon previous neutrino simulations by two orders of magnitude in scale. We finish with a discussion of the unanticipated computational challenges that were encountered during the TianNu runtime.
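As an illustration of how a phase-space footprint can shrink from 4-byte floats toward a few bytes per coordinate, here is a fixed-point position quantization sketch. This is a generic stand-in under our own assumptions, not the paper's actual compression scheme: each coordinate is stored as a 16-bit integer offset within the box, halving the 4-byte single-precision storage at a bounded loss of precision.

```python
import numpy as np

def compress_positions(pos, box, bits=16):
    """Quantize positions in [0, box] to unsigned 16-bit fixed point.
    The worst-case rounding error is box / 2^(bits+1)."""
    scale = (2**bits - 1) / box
    return np.round(np.asarray(pos, float) * scale).astype(np.uint16)

def decompress_positions(q, box, bits=16):
    """Map the quantized integers back to floating-point coordinates."""
    return q.astype(np.float64) * box / (2**bits - 1)
```

For a 500 h^-1 Mpc box, 16 bits gives a positional resolution of about 0.008 h^-1 Mpc, far below any force-softening scale of interest.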
We forecast the cosmological constraints of the neutral hydrogen (HI) intensity mapping (IM) technique with radio telescopes, assuming one year of observational time. The current and future radio telescopes that we consider here are the Five-hundred-meter Aperture Spherical radio Telescope (FAST), Baryon acoustic oscillations In Neutral Gas Observations (BINGO), and Square Kilometre Array phase I (SKA-I) single-dish experiments. We also forecast the combined constraints of the three radio telescopes with Planck. We find that the 1σ errors of (w0, wa) for BINGO, FAST and SKA-I with respect to the fiducial values are, respectively, (0.9293, 3.5792), (0.4083, 1.5878) and (0.3158, 0.4622). This is equivalent to (56.04%, 55.64%) and (66.02%, 87.09%) improvements in constraining (w0, wa) for FAST and SKA-I, respectively, relative to BINGO. Simulations further show that SKA-I will put more stringent constraints than both FAST and BINGO when each of the experiments is combined with Planck measurements. The 1σ errors for (w0, wa) from the BINGO + Planck, FAST + Planck and SKA-I + Planck covariance matrices are, respectively, (0.0832, 0.3520), (0.0791, 0.3313) and (0.0678, 0.2679), implying an improvement in the (w0, wa) constraints of (4.93%, 5.88%) for FAST + Planck relative to BINGO + Planck, and an improvement of (18.51%, 23.89%) in constraining (w0, wa) for SKA-I + Planck relative to BINGO + Planck. We also compared the performance of Planck data plus each single-dish experiment relative to Planck alone, and find that the reduction in the (w0, wa) 1σ errors implies constraint improvements of (22.96%, 8.45%), (26.76%, 13.84%) and (37.22%, 30.33%) for BINGO + Planck, FAST + Planck and SKA-I + Planck, respectively, relative to Planck alone. For the nine cosmological parameters in consideration, we find that there is a trade-off between SKA-I and FAST in constraining cosmological parameters, with each experiment being superior for a particular set of parameters.
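The quoted improvement percentages follow from the 1σ errors by a simple fractional reduction; a one-line sketch that reproduces the FAST-vs-BINGO numbers from the abstract:

```python
def improvement(sigma_ref, sigma_new):
    """Fractional improvement (in percent) of a 1-sigma error relative to
    a reference experiment: 100 * (sigma_ref - sigma_new) / sigma_ref."""
    return 100.0 * (sigma_ref - sigma_new) / sigma_ref
```

For example, improvement(0.9293, 0.4083) recovers the ~56% gain on w0 quoted for FAST relative to BINGO.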
Herein, we present a deep-learning technique for reconstructing the dark-matter density field from the redshift-space distribution of dark-matter halos. We built a UNet-architecture neural network and trained it using the COmoving Lagrangian Acceleration (COLA) fast simulation, which is an approximation of the N-body simulation, with 512^3 particles in a box size of 500 h^-1 Mpc. Further, we tested the resulting UNet model not only with training-like test samples but also with standard N-body simulations, such as the Jiutian simulation with 6144^3 particles in a box size of 1000 h^-1 Mpc and the ELUCID simulation, which has a different cosmology. The real-space dark-matter density fields in the three simulations can be reconstructed reliably, with only a small reduction of the cross-correlation power spectrum at the 1% and 10% levels at k = 0.1 and 0.3 h Mpc^-1, respectively. The reconstruction clearly helps to correct for redshift-space distortions and is unaffected by the different cosmologies between the training (Planck 2018) and test samples (WMAP5). Furthermore, we tested the application of the UNet-reconstructed density field to obtain the velocity and tidal fields, and found that this approach provides better results compared with the traditional approach based on the linear bias model, showing a 12.2% improvement in the correlation slope and a 21.1% reduction in the scatter between the predicted and true velocities. Thus, our method is highly efficient and has excellent extrapolation reliability beyond the training set. This provides an ideal solution for determining the three-dimensional underlying density field from the plentiful galaxy survey data.
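Reconstruction fidelity of the kind quoted above is usually quantified by the Fourier-space cross-correlation coefficient r(k) = P_ab / sqrt(P_aa P_bb). A minimal per-mode sketch (the averaging in |k| bins that produces a 1-D r(k) curve is omitted for brevity, and the function name is our own):

```python
import numpy as np

def cross_correlation_r(field_a, field_b):
    """Per-mode cross-correlation coefficient between two real 3-D fields:
        r = Re(F_a F_b*) / sqrt(|F_a|^2 |F_b|^2),
    which equals 1 for identical fields and is bounded by |r| <= 1.
    A tiny floor guards against division by zero for empty modes."""
    fa = np.fft.rfftn(field_a)
    fb = np.fft.rfftn(field_b)
    p_ab = (fa * np.conj(fb)).real
    p_aa = np.abs(fa) ** 2
    p_bb = np.abs(fb) ** 2
    return p_ab / np.sqrt(np.maximum(p_aa * p_bb, 1e-30))
```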
A significant excess of the stellar mass density at high redshift has been discovered from the early data release of the James Webb Space Telescope (JWST), and it may require a high star formation efficiency. However, this would lead to a large number density of ionizing photons in the epoch of reionization (EoR), so that the reionization history would be changed, which can create tension with current EoR observations. Warm dark matter (WDM), via the free-streaming effect, can suppress the formation of small-scale structure as well as low-mass galaxies. This provides an effective way to decrease the number of ionizing photons when considering a large star formation efficiency in high-z massive galaxies, without altering the cosmic reionization history. On the other hand, constraints on the properties of WDM can be derived from the JWST observations. In this work, we study WDM as a possible solution to reconcile the JWST stellar mass density of high-z massive galaxies with the reionization history. We find that the JWST high-z comoving cumulative stellar mass density alone has no significant preference for either the CDM or the WDM model. However, using the observational data of other stellar mass density measurements and the reionization history, we obtain that a WDM particle mass of m_W = 0.51^{+0.22}_{-0.12} keV and a star formation efficiency parameter f_*^0 > 0.39, at the 2σ confidence level, can match both the JWST high-z comoving cumulative stellar mass density and the reionization history.
Gamma-ray bursts (GRBs) are among the brightest objects in the Universe and, hence, can be observed up to very high redshift. Properly calibrated empirical correlations between the intensity and spectral properties of GRBs can be used to estimate cosmological parameters. However, the possibility of the evolution of GRBs with redshift is a long-standing puzzle. In this work, we used 162 long-duration GRBs to determine whether GRBs below and above a certain redshift have different properties. The GRBs are split into two groups, and we fit the Amati relation for each group separately. Our findings demonstrate that the estimates of the Amati parameters for the two groups are substantially dissimilar. We perform simulations to investigate whether selection effects could cause the difference. Our analysis shows that the differences may be either intrinsic or due to systematic errors in the data, and that selection effects are not their true origin. However, an in-depth analysis with a new data set comprised of 119 long GRBs shows that intrinsic scatter may be partly responsible for such effects.
We propose a lightweight deep convolutional neural network (CNN) to estimate the cosmological parameters from simulated 3-dimensional dark matter distributions with high accuracy. The training set is based on 465 realizations of a cubic box with a side length of 256 h^-1 Mpc, sampled with 128^3 particles interpolated over a cubic grid of 128^3 voxels. These volumes have cosmological parameters varying within the flat ΛCDM parameter space of 0.16 ≤ Ωm ≤ 0.46 and 2.0 ≤ 10^9 As ≤ 2.3. The neural network takes as input cubes with 32^3 voxels and has three convolution layers and three dense layers, together with some batch normalization and pooling layers. In the final predictions from the network we find a 2.5% bias on the amplitude σ8 that cannot easily be resolved by continued training. We correct this bias to obtain unprecedented accuracy in the cosmological parameter estimation, with statistical uncertainties of δΩm = 0.0015 and δσ8 = 0.0029, which are several times better than the results of previous CNN works. Compared with a 2-point analysis method using the clustering regions of 0-130 and 10-130 h^-1 Mpc, the CNN constraints are several times and an order of magnitude more precise, respectively. Finally, we conduct preliminary checks of the error-tolerance abilities of the neural network, and find that it exhibits robustness against smoothing, masking, random noise, global variation, rotation, reflection, and simulation resolution. These effects are well understood in typical clustering analyses, but had not been tested before for the CNN approach. Our work shows that CNNs can be more promising than expected in deriving tight cosmological constraints from the cosmic large-scale structure.
Baryon acoustic oscillation (BAO) reconstruction plays a crucial role in cosmological analyses of spectroscopic galaxy surveys because it can make the density field effectively more linear and more Gaussian. The combination of the power spectra before and after BAO reconstruction helps break degeneracies among parameters, and thereby improves the constraints on cosmological parameters. It is therefore important to estimate the covariance matrix between the pre- and post-reconstruction power spectra. In this work, we use perturbation theory to estimate the covariance matrix of the related power spectrum multipoles, and check the accuracy of the derived covariance model using a large suite of dark matter halo catalogs at z = 0.5. We find that the diagonal part of the auto covariance is well described by the Gaussian prediction, while the cross covariance deviates from the Gaussian prediction quickly when k > 0.1 h Mpc^-1. Additionally, we find that the non-Gaussian effect in the non-diagonal part of the cross covariance is comparable to, or even stronger than, that of the pre-reconstruction covariance. By adding the non-Gaussian contribution, we obtain good agreement between the analytical and numerical covariance matrices in the non-diagonal part up to k ≈ 0.15 h Mpc^-1. The agreement in the diagonal part is also improved, but the model still under-predicts the correlation in the cross-covariance block.
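The Gaussian prediction for the diagonal covariance referred to above has a standard closed form. Below is a sketch under the usual assumptions (Gaussian density field, Poisson shot noise 1/n̄, and N_modes independent Fourier modes per band); the argument names are ours.

```python
import numpy as np

def gaussian_pk_covariance(pk, nbar, n_modes):
    """Diagonal Gaussian covariance of a measured power spectrum:
        Cov[P(k_i), P(k_i)] = 2 * (P(k_i) + 1/nbar)^2 / N_modes(k_i).
    Off-diagonal entries vanish in this limit; the abstract's point is
    that the cross (pre x post reconstruction) blocks do not."""
    pk = np.asarray(pk, float)
    return 2.0 * (pk + 1.0 / nbar) ** 2 / np.asarray(n_modes, float)
```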
The deceleration parameter q and the jerk parameter j, obtained from the Taylor expansion of the scale factor a(t), play an important role in the study of cosmology. The current values of these parameters for a cosmological model reflect the transition time between the phases dominated by dark energy and matter, and can be used to determine if and by how much the universe is decelerating. Thus, these values offer a way of constraining a particular cosmological model. Research based on this scenario was carried out by Orlando Luongo and Marco Muccino. However, some approaches in this method should be tested prudently, because conditions such as dd_L/dz > 0 and dH/dz > 0 may not be guaranteed. In this study, we used the MAPAge model to reconstruct the jerk parameters (q0 and j0) with DESI 2024 data. Using the MAPAge model ensures that particular physical conditions are satisfied in the approach of determining the jerk parameters. Compared to the previous method, which used the Taylor-expansion coefficients q0, j0 and s0 as model-independent parameters, we obtained more physical and slightly different results for the jerk parameters. Our results suggest that the DESI 2024 BAO data set favours jerk parameters different from those of the standard ΛCDM model.
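For reference, the deceleration and jerk parameters come from the Taylor expansion of the scale factor about the present epoch (standard conventions, consistent with the abstract's usage):

```latex
a(t) = a_0\left[1 + H_0\,\Delta t - \tfrac{q_0}{2}\,H_0^{2}\,\Delta t^{2}
       + \tfrac{j_0}{6}\,H_0^{3}\,\Delta t^{3} + \cdots\right],
\qquad
q \equiv -\frac{\ddot{a}\,a}{\dot{a}^{2}},
\qquad
j \equiv \frac{\dddot{a}\,a^{2}}{\dot{a}^{3}},
```

so q0 < 0 corresponds to present-day acceleration, and in flat ΛCDM j0 = 1 exactly, which is why a measured departure of j0 from unity is a model-discriminating signal.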
The measurement of cosmological distances using baryon acoustic oscillations (BAO) is crucial for studying the universe's expansion. The China Space Station Telescope (CSST) galaxy redshift survey, with its vast volume and sky coverage, provides an opportunity to address key challenges in cosmology. However, redshift uncertainties in galaxy surveys can degrade both angular and radial distance estimates. In this study, we forecast the precision of BAO distance measurements using mock CSST galaxy samples, applying a two-point correlation function (2PCF) wedge approach to mitigate redshift errors. We simulate redshift uncertainties of σ0 = 0.003 and σ0 = 0.006, representative of expected CSST errors, and examine their effects on the BAO peak and the distance scaling factors α⊥ and α∥ across redshift bins within 0.0 < z ≤ 1.0. The wedge 2PCF method proves more effective in detecting the BAO peak compared with the monopole 2PCF, particularly for σ0 = 0.006. Constraints on the BAO peaks show that α⊥ is well constrained around 1.0, regardless of σ0, with precision between 1% and 3% across redshift bins. In contrast, α∥ measurements are more sensitive to increases in σ0. For σ0 = 0.003, the results remain close to the fiducial value, with uncertainties ranging between 4% and 9%; for σ0 = 0.006, significant deviations from the fiducial value are observed. We also study the ability to measure the parameters (Ωm, H0 r_d) using the distance measurements, demonstrating robust constraints as a cosmological probe under CSST-like redshift uncertainties. These findings demonstrate that the CSST survey enables few-percent precision measurements of D_A using the wedge 2PCF method, highlighting its potential to place tight constraints on the universe's expansion history and contribute to high-precision cosmological studies.
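A wedge of the 2PCF is just an average of ξ(s, μ) over a range of μ, the cosine of the angle to the line of sight; transverse wedges (low μ) are least affected by redshift errors, which is why the wedge approach helps recover the BAO peak. A sketch assuming ξ is tabulated on μ bins (the binning and names are our assumptions):

```python
import numpy as np

def wedge_2pcf(xi_s_mu, mu_edges, mu_lo, mu_hi):
    """Average a tabulated 2-D correlation function xi(s, mu) over the
    wedge mu_lo <= mu < mu_hi.  xi_s_mu has shape (n_s, n_mu) on mu bins
    with edges mu_edges; bins are weighted by their width so partial
    wedges remain properly normalized."""
    mu_edges = np.asarray(mu_edges, float)
    centers = 0.5 * (mu_edges[:-1] + mu_edges[1:])
    widths = np.diff(mu_edges)
    sel = (centers >= mu_lo) & (centers < mu_hi)
    w = widths[sel] / widths[sel].sum()
    return np.asarray(xi_s_mu, float)[:, sel] @ w
```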
Fractal dimensions of the volume-limited subsamples with various infrared luminosities, sorted out from samples given by the IRAS galaxy redshift surveys in the fields F15 and NGW, have been calculated. The results show that structures with scales larger than about 60 h_50^-1 Mpc exist in the large-scale distribution of infrared galaxies, and that the distribution of IRAS galaxies has a multi-level fractal structure. That is, the distribution has a fractal structure with a definite fractal dimension D only in a certain scale range, and as the scale increases to a certain turning scale r_c, the distribution transits to another fractal structure with a different D. The fractal dimensions on the levels with larger scales are generally larger than those on the levels with smaller scales. This is consistent with the observational features of the large-scale distribution of galaxies, i.e., clustering of galaxies is the dominant character on rather small scales, but on larger scales the distribution becomes one with voids.
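A fractal dimension of the kind computed here can be estimated from the scaling of the cumulative pair count C(r) ∝ r^D between two scales; a brute-force sketch (the authors' estimator and scale ranges may differ):

```python
import numpy as np

def correlation_dimension(points, r1, r2):
    """Estimate the correlation dimension D from the scaling of the
    cumulative pair count C(r) ~ r^D between two scales r1 < r2:
        D = ln[C(r2)/C(r1)] / ln(r2/r1).
    A set filling a d-dimensional region gives D close to d on scales
    well inside the sample; a transition in D with scale is the
    multi-level structure the abstract describes."""
    p = np.asarray(points, float)
    d = np.sqrt(((p[:, None, :] - p[None, :, :]) ** 2).sum(-1))
    d = d[np.triu_indices(len(p), k=1)]        # unique pairs only
    c1 = np.count_nonzero(d < r1)
    c2 = np.count_nonzero(d < r2)
    return np.log(c2 / c1) / np.log(r2 / r1)
```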
Funding: supported by NSFC (Nos. 10533030, 10821302, 10878001), the Knowledge Innovation Program of CAS (No. KJCX2-YW-T05), and the 973 Program (No. 2007CB815402).
Abstract: Based on the Sloan Digital Sky Survey DR6 (SDSS) and the Millennium Simulation (MS), we investigate the alignment between galaxies and large-scale structure. For this purpose, we develop two new statistical tools, namely the alignment correlation function and the cos(2θ)-statistic. The former is a two-dimensional extension of the traditional two-point correlation function and the latter is related to the ellipticity correlation function used for cosmic shear measurements. Both are based on the cross-correlation between a sample of galaxies with orientations and a reference sample which represents the large-scale structure. We apply the new statistics to the SDSS galaxy catalog. The alignment correlation function reveals an overabundance of reference galaxies along the major axes of red, luminous (L > ~L*) galaxies out to projected separations of 60 h^(-1) Mpc. The signal increases with central galaxy luminosity. No alignment signal is detected for blue galaxies. The cos(2θ)-statistic yields very similar results. Starting from an MS semi-analytic galaxy catalog, we assign an orientation to each red, luminous and central galaxy, based on that of the central region of the host halo (with size similar to that of the stellar galaxy). As an alternative, we use the orientation of the host halo itself. We find a mean projected misalignment between a halo and its central region of ~25°. The misalignment decreases slightly with increasing luminosity of the central galaxy. Using the orientations and luminosities of the semi-analytic galaxies, we repeat our alignment analysis on mock surveys of the MS. Agreement with the SDSS results is good if the central orientations are used. Predictions using the halo orientations as proxies for central galaxy orientations overestimate the observed alignment by more than a factor of 2. Finally, the large volume of the MS allows us to generate a two-dimensional map of the alignment correlation function, which shows the reference galaxy distribution to be flattened parallel to the orientations of red luminous galaxies, with axis ratios of ~0.5 and ~0.75 for halo and central orientations, respectively. These ratios are almost independent of scale out to 60 h^(-1) Mpc.
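The cos(2θ)-statistic described above reduces to an angle-weighted pair count: for each pair, θ is the angle between the central galaxy's major axis and the direction to the reference galaxy, and the statistic is the mean of cos(2θ). A minimal numpy sketch (function name and toy data are illustrative, not the authors' code):

```python
import numpy as np

def cos2theta_statistic(centrals, angles, references):
    """Mean cos(2*theta) over all central-reference pairs.

    centrals   : (N, 2) projected positions of galaxies with orientations
    angles     : (N,) position angles of their major axes [radians]
    references : (M, 2) projected positions of the reference sample

    Positive when references lie preferentially along the major axes,
    zero in the mean for an isotropic reference distribution.
    """
    vals = []
    for (x, y), phi in zip(centrals, angles):
        dx = references[:, 0] - x
        dy = references[:, 1] - y
        pair_angle = np.arctan2(dy, dx)   # direction to each reference
        theta = pair_angle - phi          # angle w.r.t. the major axis
        vals.append(np.cos(2.0 * theta))
    return float(np.mean(np.concatenate(vals)))

# toy check: references placed exactly on the x-axis of a single central
# whose major axis also points along x give cos(2*theta) = 1
c = np.array([[0.0, 0.0]])
a = np.array([0.0])
r = np.array([[1.0, 0.0], [-2.0, 0.0]])
print(cos2theta_statistic(c, a, r))  # -> 1.0
```

Binning the same pairs additionally by projected separation and by θ yields the two-dimensional alignment correlation function.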
Funding: financial support from the SERB, DST, Government of India through the project CRG/2019/001110; IUCAA, Pune for providing support through an associateship program; IISER Tirupati for support through a postdoctoral fellowship. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England.
Abstract: Major interactions are known to trigger star formation in galaxies and alter their color. We study major interactions in filaments and sheets using SDSS data to understand the influence of large-scale environments on galaxy interactions. We identify the galaxies in filaments and sheets using the local dimension and also find the major pairs residing in these environments. The star formation rate (SFR) and color of the interacting galaxies as a function of pair separation are analyzed separately in filaments and sheets. The analysis is repeated for three volume-limited samples covering different magnitude ranges. The major pairs residing in the filaments show a significantly higher SFR and bluer color than those residing in the sheets up to a projected pair separation of ~50 kpc. We observe a complete reversal of this behavior for both the SFR and color of galaxy pairs with a projected separation larger than 50 kpc. Some earlier studies report that galaxy pairs align with the filament axis. Such alignment inside filaments indicates anisotropic accretion that may cause these differences. We do not observe these trends in the brighter galaxy samples. The pairs in filaments and sheets from the brighter galaxy samples trace relatively denser regions in these environments. The absence of these trends in the brighter samples may be explained by the dominant effect of the local density over the effects of the large-scale environment.
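The local dimension used above classifies environments by how the neighbor count around a galaxy grows with radius, N(<R) ∝ R^D: D near 1 flags a filament, D near 2 a sheet. A toy sketch of the estimator (radii, sample and threshold values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def local_dimension(center, neighbors, radii):
    """Fit N(<R) = A R^D around one galaxy and return the slope D.

    D ~ 1 indicates a filament-like neighborhood, D ~ 2 a sheet,
    and D ~ 3 a volume-filling (cluster/field-like) environment.
    """
    d = np.linalg.norm(neighbors - center, axis=1)
    counts = np.array([np.sum(d < r) for r in radii], dtype=float)
    D, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return D

# toy sheet: points confined to the z = 0 plane should give D close to 2
rng = np.random.default_rng(5)
n = 20000
sheet = np.column_stack([rng.uniform(-50, 50, n),
                         rng.uniform(-50, 50, n),
                         np.zeros(n)])
D = local_dimension(np.zeros(3), sheet, np.linspace(5.0, 20.0, 8))
print(round(D))  # -> 2
```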
Funding: the science research grants from the China Manned Space Project with No. CMS-CSST-2021-B01; the World Premier International Research Center Initiative (WPI), MEXT, Japan; the Ontario Research Fund: Research Excellence Program (ORF-RE); Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference numbers RGPIN-2019-067, CRD 523638-201, 555585-20]; Canadian Institute for Advanced Research (CIFAR); Canadian Foundation for Innovation (CFI); the National Natural Science Foundation of China (NSFC, Grant No. 11929301); Simons Foundation; Thoth Technology Inc; Alexander von Humboldt Foundation; the Niagara supercomputers at the SciNet HPC Consortium; the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto.
Abstract: We examine the possibility of applying the baryon acoustic oscillation reconstruction method to improve the neutrino mass Σm_ν constraint. Thanks to the Gaussianization of the process, we demonstrate that the reconstruction algorithm could improve the measurement accuracy by roughly a factor of two. On the other hand, the reconstruction process itself becomes a source of systematic error. While the algorithm is supposed to produce the displacement field from a density distribution, various approximations cause the reconstructed output to deviate on intermediate scales. Nevertheless, it is still possible to benefit from this Gaussianized field, given that we can carefully calibrate the "transfer function" between the reconstruction output and the theoretical displacement divergence from simulations. The limitation of this approach is then set by the numerical stability of this transfer function. With an ensemble of simulations, we show that such systematic error could become comparable to the statistical uncertainties for a DESI-like survey and can be safely neglected for other less ambitious surveys.
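A transfer function of the kind calibrated above can be sketched as the binned ratio of cross- to auto-power between the reconstructed field and the reference (theory) field; on a field that is an exact rescaling of the reference, the ratio recovers the scaling in every bin. The binning scheme and field generation below are illustrative, not the paper's pipeline:

```python
import numpy as np

def transfer_function(field_rec, field_theory, nbins=8):
    """Binned ratio T(k) = P_cross(k) / P_theory(k) on a periodic grid."""
    fr = np.fft.rfftn(field_rec)
    ft = np.fft.rfftn(field_theory)
    cross = (fr * np.conj(ft)).real
    auto = np.abs(ft) ** 2
    # |k| on the rfft grid, in grid-frequency units
    shape = field_rec.shape
    ks = np.meshgrid(*[np.fft.fftfreq(n) for n in shape[:-1]],
                     np.fft.rfftfreq(shape[-1]), indexing="ij")
    kmag = np.sqrt(sum(k ** 2 for k in ks)).ravel()
    edges = np.linspace(0.0, kmag.max(), nbins + 1)
    idx = np.digitize(kmag, edges[1:-1])
    num = np.bincount(idx, weights=cross.ravel(), minlength=nbins)
    den = np.bincount(idx, weights=auto.ravel(), minlength=nbins)
    return num / np.maximum(den, 1e-300)

rng = np.random.default_rng(0)
theory = rng.standard_normal((16, 16, 16))
rec = 0.5 * theory          # a "reconstruction" that is half the theory field
T = transfer_function(rec, theory)
print(np.allclose(T, 0.5))  # -> True
```

In practice the numerical stability of T(k) across realizations, not its mean, is what limits the method, as the abstract notes.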
Funding: supported by the National Natural Science Foundation of China (grant Nos. 11922303, 119201003 and 12021003); the Hubei Province Natural Science Fund for Distinguished Young Scholars (No. 2019CFA052); and the CAS Project for Young Scientists in Basic Research YSBR-006.
Abstract: The improvements in the sensitivity of the gravitational wave (GW) network enable the detection of several large-redshift GW sources by third-generation GW detectors. These advancements provide an independent method to probe the large-scale structure of the universe by using the clustering of binary black holes (BBHs). The black hole catalogs are complementary to the galaxy catalogs because of the large redshifts of GW events, which may imply that BBHs are a better choice than galaxies to probe the large-scale structure of the universe and cosmic evolution over a large redshift range. To probe the large-scale structure, we used the sky positions of the BBHs observed by third-generation GW detectors to calculate the angular correlation function and the bias factor of the population of BBHs. This method is also statistically significant, as 5000 BBHs are simulated. Moreover, for the third-generation GW detectors, we found that the bias factor can be recovered to within 33% with an observational time of ten years. This method depends only on the GW source-location posteriors; hence, it can be an independent method, compared to the electromagnetic method, to reveal the formation mechanisms and origin of BBH mergers.
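An angular correlation function of sky positions like the one above can be sketched with normalized pair counts; the minimal "natural" estimator DD/RR - 1 is used here for brevity (real analyses typically prefer the Landy-Szalay estimator, and the bin edges below are arbitrary):

```python
import numpy as np

def angular_separation(u, v):
    """Pairwise angular separations (radians) between sets of unit vectors."""
    return np.arccos(np.clip(u @ v.T, -1.0, 1.0))

def w_theta(data, randoms, edges):
    """Natural estimator w(theta) = DD/RR - 1 from normalized pair counts."""
    def counts(a, b):
        sep = angular_separation(a, b)
        if a is b:  # drop self-pairs and double counting
            sep = sep[np.triu_indices(len(a), k=1)]
        h, _ = np.histogram(sep, bins=edges)
        return h / h.sum()
    return counts(data, data) / np.maximum(counts(randoms, randoms), 1e-300) - 1.0

# sanity check: using the same catalog as data and randoms gives w = 0
rng = np.random.default_rng(1)
v = rng.standard_normal((200, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
edges = np.linspace(0.0, np.pi, 5)
print(np.allclose(w_theta(v, v, edges), 0.0))  # -> True
```

For GW catalogs, the broad source-location posteriors would additionally have to be marginalized over, which this sketch omits.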
Funding: supported by NSFC (Nos. 11803095 and 11733010).
Abstract: The alignment between satellite and central galaxies serves as a proxy for addressing the issue of galaxy formation and evolution, and has been investigated abundantly in observations and theoretical works. Most scenarios indicate that satellites are preferentially located along the major axis of their central galaxy. Recent work shows that the strength of alignment signals depends on the large-scale environment in observations. We use the publicly released data from EAGLE to determine whether the same effect can be found in the associated hydrodynamic simulation. We find a much stronger environmental dependence of alignment signals in the simulation. We also explore the change of alignments to address the formation of this effect.
Abstract: The size distributions of 2D and 3D Voronoi cells and of cells of Vp(2, 3), the 2D cut of a 3D Voronoi diagram, are explored, with the single-parameter (re-scaled) gamma distribution playing a central role in the analytical fitting. Observational evidence for a cellular universe is briefly reviewed. A simulated Vp(2, 3) map with galaxies lying on the cell boundaries is constructed to compare, as regards general appearance, with the observed CfA map of galaxies and voids, the parameters of the simulation being so chosen as to reproduce the largest observed void size.
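For the single-parameter re-scaled gamma distribution above, normalized cell sizes x = s/<s> follow p(x) = a^a x^(a-1) e^(-ax)/Γ(a), whose variance is 1/a, so the shape a can be estimated by moments. The estimator choice and synthetic sample below are illustrative assumptions, not the paper's fitting procedure:

```python
import numpy as np

def gamma_shape_moments(sizes):
    """Method-of-moments shape parameter for a re-scaled gamma fit.

    For the unit-mean gamma p(x) = a^a x^(a-1) exp(-a x) / Gamma(a),
    Var(x) = 1/a, so a = mean^2 / variance of the raw cell sizes.
    """
    sizes = np.asarray(sizes, dtype=float)
    return sizes.mean() ** 2 / sizes.var()

# synthetic cell sizes drawn from a gamma with shape 5 and unit mean
rng = np.random.default_rng(2)
samples = rng.gamma(shape=5.0, scale=1.0 / 5.0, size=200_000)
a_hat = gamma_shape_moments(samples)
print(round(a_hat, 2))  # close to the true shape a = 5
```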
Funding: the support of MOST 2018YFE0120800 and 2020SKA0110402; NSFC-11822305, NSFC-11773031 and NSFC-11633004; the CAS Interdisciplinary Innovation Team (JCTD-2019-05); the Chinese Academy of Sciences (CAS) instrument grant ZDKYYQ20200008; the CAS Strategic Priority Research Program XDA15020200; the National Natural Science Foundation of China (NSFC, Grant Nos. 11773034 and 11633004); the support of NSFC (Grant Nos. 11473044 and 11973047); and the Chinese Academy of Sciences grants QYZDJ-SSW-SLH017 and XDB 23040100.
Abstract: The Brans-Dicke (BD) theory is the simplest scalar-tensor theory of gravity, which can be considered a candidate modification of Einstein's theory of general relativity. In this work, we forecast the constraints on BD theory in the CSST galaxy clustering spectroscopic survey, with a magnitude limit ~23 AB mag for point-source 5σ detection. We generate mock data based on the zCOSMOS catalog and consider the observational and instrumental effects of the CSST spectroscopic survey. We predict galaxy power spectra in the BD theory from z = 0 to 1.5, and the galaxy bias and other systematic parameters are also included. The Markov Chain Monte Carlo technique is employed to find the best fits and probability distributions of the cosmological and systematic parameters. A BD parameter ζ is introduced, which satisfies ζ = ln(1 + 1/ω). We find that the CSST spectroscopic galaxy clustering survey can give |ζ| < 10^(-2), or equivalently |ω| > O(10^(2)) and |Ġ/G| < 10^(-13), under the assumption ζ = 0. These constraints are almost at the same order of magnitude as the joint constraints using the current cosmic microwave background, baryon acoustic oscillation and Type Ia supernova data, indicating that the CSST galaxy clustering spectroscopic survey will be powerful for constraining the BD theory and other modified gravity theories.
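The parameterization ζ = ln(1 + 1/ω) stated above makes the quoted equivalence between |ζ| < 10^(-2) and |ω| > O(10^2) a one-line check:

```python
import math

def zeta(omega):
    """BD parameterization used in the abstract: zeta = ln(1 + 1/omega)."""
    return math.log1p(1.0 / omega)

# |zeta| < 1e-2 corresponds to |omega| greater than roughly 1e2:
print(abs(zeta(100.0)) < 1e-2)  # -> True
print(abs(zeta(10.0)) < 1e-2)   # -> False
```

General relativity is recovered in the limit ω → ∞, i.e. ζ → 0, which is why ζ = 0 is the fiducial assumption.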
Abstract: In this work we deal with the axiomatization of cosmology; only recently have we hit upon a new mathematical approach to capitalize on our new set identities for the basic laws of cosmology. Our proposal is therefore to put forward some new laws (e.g., the formation of the black hole). We introduce the concept of axiom cosmology. This principle describes a cosmology that is free from the notion of induction. We present a large-scale structure model of the universe, and this leads to a successful explanation of the problem of the closed or open universe (because from the outset it is a theorem with a succinct proof). In this paper we prove the non-singular point theorem, which means that a singularity can be defined neither mathematically nor physically. It allows us to overcome the mysterious physical singularity conundrum and to explain meaningful antimatter annihilations for general configurations.
Funding: the National SKA Program of China (Grant No. 2020SKA0110401); the National Natural Science Foundation of China (Grant No. 12033008); and the K.C. Wong Education Foundation.
Abstract: We present a GPU-accelerated cosmological simulation code, PhotoNs-GPU, based on an algorithm of the Particle Mesh Fast Multipole Method (PM-FMM), and focus on GPU utilization and optimization. A proper interpolation method for the truncated gravity is introduced to speed up the special functions in kernels. We verify the GPU code in mixed precision and at different levels of the interpolation method on GPU. A run with single precision is roughly two times faster than double precision for current practical cosmological simulations, but it can induce a small unbiased noise in the power spectrum. Compared with the CPU version of PhotoNs and Gadget-2, the efficiency of the new code is significantly improved. With all the optimizations of memory access, kernel functions and concurrency management activated, the peak performance of our test runs achieves 48% of the theoretical speed, and the average performance approaches ~35% on GPU.
Funding: the support from the National Key Program for Science and Technology Research and Development (2017YFB0203300); and the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDC01040100.
Abstract: We investigate a hybrid numerical algorithm aimed at large-scale cosmological N-body simulation for ongoing and future high-precision sky surveys. It makes use of a truncated Fast Multipole Method (FMM) for short-range gravity, incorporating a Particle Mesh (PM) method for the long-range potential, and is designed to handle extremely large particle numbers. In this work, we present a specific strategy to modify a conventional FMM by a Gaussian-shaped factor and provide quantitative expressions for the interaction kernels between multipole expansions. Moreover, a proper Multipole Acceptance Criterion for the hybrid method is introduced to address the precision loss induced by the truncation. These procedures reduce the amount of computation compared to an original FMM and decouple the global communication. A simplified version of the code is introduced to verify the hybrid algorithm, its accuracy and the parallel implementation.
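A common way to realize a Gaussian-shaped truncation in such PM-FMM hybrids is an Ewald-style split of the 1/r potential into an erfc short-range piece (FMM) and an erf long-range piece (PM); whether this is exactly the paper's kernel is an assumption, and the split scale is arbitrary here:

```python
import math

def split_potential(r, r_split):
    """Ewald-style split of the 1/r potential:

    short(r) = erfc(r / (2 r_split)) / r  -- decays like a Gaussian,
                                             handled by the truncated FMM
    long(r)  = erf(r / (2 r_split)) / r   -- smooth and band-limited,
                                             handled by the PM grid
    """
    x = r / (2.0 * r_split)
    return math.erfc(x) / r, math.erf(x) / r

# the two pieces always sum back to the full Newtonian 1/r:
s, l = split_potential(3.0, 1.25)
print(abs((s + l) - 1.0 / 3.0) < 1e-12)  # -> True
```

Because the short-range piece falls off like a Gaussian, multipole interactions beyond a few r_split can be dropped, which is what makes the truncated FMM cheap and communication-local.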
Funding: the Special Program for Applied Research on Super Computation of the NSFC-Guangdong Joint Fund (the second phase); supported under the U.S. Department of Energy contract DE-AC02-06CH11357; General Financial Grant No. 2015M570884 and Special Financial Grant No. 2016T90009 from the China Postdoctoral Science Foundation; support from the European Commission under a Marie-Sklodowska-Curie European Fellowship (EU project 656869); support from MoST 863 program 2012AA121701; NSFC grant 11373030; CAS grant QYZDJ-SSW-SLH017; the National Natural Science Foundation of China (Grant Nos. 11573006, 11528306, 10473002 and 11135009); the National Basic Research Program of China (973 program) under grant No. 2012CB821804; and the Fundamental Research Funds for the Central Universities. SciNet is funded by: the Canada Foundation for Innovation under the auspices of Compute Canada; the Government of Ontario; the Ontario Research Fund Research Excellence; and the University of Toronto.
Abstract: Constraining neutrino mass remains an elusive challenge in modern physics. Precision measurements are expected from several upcoming cosmological probes of large-scale structure. Achieving this goal relies on an equal level of precision from theoretical predictions of neutrino clustering. Numerical simulations of the non-linear evolution of cold dark matter and neutrinos play a pivotal role in this process. We incorporate neutrinos into the cosmological N-body code CUBEP3M and discuss the challenges associated with pushing to the extreme scales demanded by the neutrino problem. We highlight code optimizations made to exploit modern high-performance computing architectures and present a novel method of data compression that reduces the phase-space particle footprint from 24 bytes in single precision to roughly 9 bytes. We scale the neutrino problem to the Tianhe-2 supercomputer and provide details of our production run, named TianNu, which uses 86% of the machine (13,824 compute nodes). With a total of 2.97 trillion particles, TianNu is currently the world's largest cosmological N-body simulation and improves upon previous neutrino simulations by two orders of magnitude in scale. We finish with a discussion of the unanticipated computational challenges that were encountered during the TianNu runtime.
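The 24-to-9-byte figure suggests a fixed-point scheme in which positions are stored as short offsets within a local cell and velocities at reduced precision. The layout below (3 x 2-byte position offsets plus 3 x 1-byte velocities) is a hypothetical illustration of how such a footprint can be reached, not the paper's actual format:

```python
import numpy as np

def compress(pos, vel, cell_origin, cell_size, vmax):
    """Toy fixed-point packing of one particle into 9 bytes.
    Assumes pos lies inside the cell and |vel| <= vmax componentwise."""
    q = np.round((pos - cell_origin) / cell_size * 65535).astype(np.uint16)
    v = np.round((vel / vmax * 0.5 + 0.5) * 255).astype(np.uint8)
    return q.tobytes() + v.tobytes()

def decompress(blob, cell_origin, cell_size, vmax):
    q = np.frombuffer(blob[:6], dtype=np.uint16).astype(float)
    v = np.frombuffer(blob[6:], dtype=np.uint8).astype(float)
    pos = cell_origin + q / 65535 * cell_size
    vel = (v / 255 - 0.5) * 2.0 * vmax
    return pos, vel

pos = np.array([1.25, 2.5, 3.75])
vel = np.array([-100.0, 0.0, 250.0])
blob = compress(pos, vel, np.zeros(3), 10.0, 1000.0)
print(len(blob))  # -> 9 bytes per particle, down from 24 in single precision
p2, v2 = decompress(blob, np.zeros(3), 10.0, 1000.0)
print(np.max(np.abs(p2 - pos)) < 10.0 / 65535)  # -> True (sub-quantum error)
```

The position error is bounded by the cell size divided by 2^16, so finer domain decomposition directly buys back precision.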
Funding: the DAAD (German Academic Exchange Service) scholarship; financial support from The African Institute for Mathematical Sciences, University of KwaZulu-Natal; The Dar es Salaam University College of Education, Tanzania; and support from the National Research Foundation of South Africa (Grant Nos. 105925 and 110984).
Abstract: We forecast the cosmological constraints of the neutral hydrogen (HI) intensity mapping (IM) technique with radio telescopes, assuming 1 year of observational time. The current and future radio telescopes that we consider here are the Five-hundred-meter Aperture Spherical radio Telescope (FAST), Baryon acoustic oscillations In Neutral Gas Observations (BINGO), and Square Kilometre Array phase I (SKA-I) single-dish experiments. We also forecast the combined constraints of the three radio telescopes with Planck. We find that the 1σ errors of (w0, wa) for BINGO, FAST and SKA-I with respect to the fiducial values are, respectively, (0.9293, 3.5792), (0.4083, 1.5878) and (0.3158, 0.4622). This is equivalent to (56.04%, 55.64%) and (66.02%, 87.09%) improvements in constraining (w0, wa) for FAST and SKA-I, respectively, relative to BINGO. Simulations further show that SKA-I will put more stringent constraints than both FAST and BINGO when each of the experiments is combined with Planck measurements. The 1σ errors for (w0, wa) from the BINGO + Planck, FAST + Planck and SKA-I + Planck covariance matrices are, respectively, (0.0832, 0.3520), (0.0791, 0.3313) and (0.0678, 0.2679), implying an improvement in the (w0, wa) constraints of (4.93%, 5.88%) for FAST + Planck relative to BINGO + Planck, and an improvement of (18.51%, 23.89%) in constraining (w0, wa) for SKA-I + Planck relative to BINGO + Planck. We also compared the performance of Planck data plus each single-dish experiment relative to Planck alone, and find that the reduction in the (w0, wa) 1σ errors implies (w0, wa) constraint improvements of (22.96%, 8.45%), (26.76%, 13.84%) and (37.22%, 30.33%) for BINGO + Planck, FAST + Planck and SKA-I + Planck, respectively, relative to Planck alone. For the nine cosmological parameters under consideration, we find that there is a trade-off between SKA-I and FAST in constraining cosmological parameters, with each experiment being superior for a particular set of parameters.
Funding: supported by the National SKA Program of China (Grant Nos. 2022SKA0110200 and 2022SKA0110202); National Natural Science Foundation of China (Grant Nos. 12103037, 11833005, and 11890692); 111 Project (Grant No. B20019); Shanghai Natural Science Foundation (Grant No. 19ZR1466800); the science research grants from the China Manned Space Project (Grant No. CMS-CSST-2021-A02); the Fundamental Research Funds for the Central Universities (Grant No. XJS221312); and the High-Performance Computing Platform of Xidian University.
Abstract: Herein, we present a deep-learning technique for reconstructing the dark-matter density field from the redshift-space distribution of dark-matter halos. We built a UNet-architecture neural network and trained it using the COmoving Lagrangian Acceleration (COLA) fast simulation, which is an approximation of the N-body simulation with 512^3 particles in a box size of 500 h^(-1) Mpc. Further, we tested the resulting UNet model not only with training-like test samples but also with standard N-body simulations, such as the Jiutian simulation with 6144^3 particles in a box size of 1000 h^(-1) Mpc and the ELUCID simulation, which has a different cosmology. The real-space dark-matter density fields in the three simulations can be reconstructed reliably with only a small reduction of the cross-correlation power spectrum, at the 1% and 10% levels at k = 0.1 and 0.3 h Mpc^(-1), respectively. The reconstruction clearly helps to correct for redshift-space distortions and is unaffected by the different cosmologies between the training (Planck 2018) and test samples (WMAP5). Furthermore, we tested the application of the UNet-reconstructed density field to obtain the velocity and tidal fields, and found that this approach provides better results compared with the traditional approach based on the linear bias model, showing a 12.2% improvement in the correlation slope and a 21.1% reduction in the scatter between the predicted and true velocities. Thus, our method is highly efficient and has excellent extrapolation reliability beyond the training set. This provides an ideal solution for determining the three-dimensional underlying density field from the plentiful galaxy survey data.
基金support of the National Key R&D Program of China No. 2022YFF0503404, 2020SKA0110402,MOST-2018YFE0120800,NSFC-11822305, NSFC-11773031,NSFC-11633004, NSFC-11473044, NSFC-11973047the CAS Project for Young Scientists in Basic Research (No. YSBR-092)+1 种基金the Chinese Academy of Sciences grants QYZDJ-SSWSLH017, XDB 23040100, and XDA15020200supported by the science research grants from the China Manned Space Project with NO.CMS-CSST-2021-B01 and CMS-CSST-2021-A01。
Abstract: A significant excess of the stellar mass density at high redshift has been discovered from the early data release of the James Webb Space Telescope (JWST), and it may require a high star formation efficiency. However, this will lead to a large number density of ionizing photons in the epoch of reionization (EoR), so that the reionization history will be changed, which can create tension with current EoR observations. Warm dark matter (WDM), via the free-streaming effect, can suppress the formation of small-scale structure as well as low-mass galaxies. This provides an effective way to decrease the ionizing photons when considering a large star formation efficiency in high-z massive galaxies, without altering the cosmic reionization history. On the other hand, constraints on the properties of WDM can be derived from the JWST observations. In this work, we study WDM as a possible solution to reconcile the JWST stellar mass density of high-z massive galaxies and the reionization history. We find that the JWST high-z comoving cumulative stellar mass density alone has no significant preference for either the CDM or the WDM model. But using the observational data of other stellar mass density measurements and the reionization history, we obtain that a WDM particle mass of m_W = 0.51_(-0.12)^(+0.22) keV and a star formation efficiency parameter f_*^(0) > 0.39 at the 2σ confidence level can match both the JWST high-z comoving cumulative stellar mass density and the reionization history.
Funding: M.S. thanks DMRC for support; D.S. thanks colleagues at GD Goenka University for continuing assistance.
Abstract: Gamma-ray bursts (GRBs) are among the brightest objects in the Universe and, hence, can be observed up to very high redshift. Properly calibrated empirical correlations between the intensity and spectral properties of GRBs can be used to estimate the cosmological parameters. However, the possibility of the evolution of GRBs with redshift is a long-standing puzzle. In this work, we used 162 long-duration GRBs to determine whether GRBs below and above a certain redshift have different properties. The GRBs are split into two groups, and we fit the Amati relation for each group separately. Our findings demonstrate that the estimates of the Amati parameters for the two groups are substantially dissimilar. We perform simulations to investigate whether selection effects could cause the difference. Our analysis shows that the differences may be either intrinsic or due to systematic errors in the data, and that selection effects are not their true origin. However, an in-depth analysis with a new data set comprised of 119 long GRBs shows that intrinsic scatter may be partly responsible for such effects.
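The Amati relation fit above is, at its core, a straight-line fit in log space between the peak energy E_p and the isotropic-equivalent energy E_iso. A minimal sketch on synthetic data (the pivot value, scatter and parameter values are illustrative assumptions; the paper's fit likely also models intrinsic scatter):

```python
import numpy as np

def fit_amati(log_eiso, log_ep, pivot=52.5):
    """Least-squares fit of log E_p = a + b * (log E_iso - pivot).
    The pivot reduces the a-b degeneracy; 52.5 is an arbitrary choice."""
    b, a = np.polyfit(log_eiso - pivot, log_ep, 1)
    return a, b

# synthetic check: data generated with a = 2.0, b = 0.5 is recovered
rng = np.random.default_rng(4)
x = rng.uniform(51.0, 54.0, 100)                        # toy log10 E_iso
y = 2.0 + 0.5 * (x - 52.5) + rng.normal(0, 0.05, 100)   # true a=2.0, b=0.5
a, b = fit_amati(x, y)
print(round(a, 1), round(b, 1))  # -> 2.0 0.5
```

Comparing (a, b) fitted separately to the low- and high-redshift groups is the comparison the abstract describes.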
基金support from the National Natural Science Foundation of China(Grant No.11803094)the Science and Technology Program of Guangzhou,China(Grant No.202002030360)+4 种基金support from COLCIENCIAS(Contract No.287-2016,Project 1204-712-50459)support from the National Research Foundation(Grant Nos.2017R1D1A1B03034900,2017R1A2B2004644,and 2017R1A4A1015178)support from the Project for New Faculty of Shanghai JiaoTong University(Grant No.AF0720053)the National Science Foundation of China(Grant Nos.11533006,and 11433001)the National Basic Research Program of China(Grant No.2015CB857000)。
Abstract: We propose a light-weight deep convolutional neural network (CNN) to estimate the cosmological parameters from simulated 3-dimensional dark matter distributions with high accuracy. The training set is based on 465 realizations of a cubic box with a side length of 256 h^(-1) Mpc, sampled with 128^3 particles interpolated over a cubic grid of 128^3 voxels. These volumes have cosmological parameters varying within the flat ΛCDM parameter space of 0.16 ≤ Ω_m ≤ 0.46 and 2.0 ≤ 10^9 A_s ≤ 2.3. The neural network takes as input cubes with 32^3 voxels and has three convolution layers and three dense layers, together with some batch normalization and pooling layers. In the final predictions from the network we find a 2.5% bias on the primordial amplitude σ_8 that cannot easily be resolved by continued training. We correct this bias to obtain unprecedented accuracy in the cosmological parameter estimation, with statistical uncertainties of δΩ_m = 0.0015 and δσ_8 = 0.0029, which are several times better than the results of previous CNN works. Compared with a 2-point analysis method using the clustering regions of 0-130 and 10-130 h^(-1) Mpc, the CNN constraints are several times and an order of magnitude more precise, respectively. Finally, we conduct preliminary checks of the error-tolerance abilities of the neural network, and find that it exhibits robustness against smoothing, masking, random noise, global variation, rotation, reflection, and simulation resolution. Those effects are well understood in typical clustering analyses, but had not been tested before for the CNN approach. Our work shows that CNNs can be more promising than expected in deriving tight cosmological constraints from the cosmic large-scale structure.
Funding: supported by the National Key R&D Program of China No. 2023YFA1607803; the National Natural Science Foundation of China (NSFC, grant No. 11925303); the CAS Project for Young Scientists in Basic Research (No. YSBR-092); the Chinese Scholarship Council (CSC) and the University of Portsmouth; and the STFC grant ST/W001225/1. Y.W. is supported by the National Natural Science Foundation of China (NSFC, grant Nos. 12273048 and 12422301); the National Key R&D Program of China No. 2022YFF0503404; the Youth Innovation Promotion Association CAS; and the Nebula Talents Program of NAOC. G.B.Z. is supported by science research grants from the China Manned Space Project with No. CMS-CSST-2021-B01; the New Cornerstone Science Foundation through the XPLORER prize; and the ICG, SEPNet and the University of Portsmouth.
Abstract: The baryon acoustic oscillation (BAO) reconstruction plays a crucial role in cosmological analysis for spectroscopic galaxy surveys because it can make the density field effectively more linear and more Gaussian. The combination of the power spectra before and after BAO reconstruction helps break degeneracies among parameters, and thus improves the constraints on cosmological parameters. It is therefore important to estimate the covariance matrix between the pre- and post-reconstruction power spectra. In this work, we use perturbation theory to estimate the covariance matrix of the related power spectrum multipoles, and check the accuracy of the derived covariance model using a large suite of dark matter halo catalogs at z = 0.5. We find that the diagonal part of the auto covariance is well described by the Gaussian prediction, while the cross covariance deviates from the Gaussian prediction quickly when k > 0.1 h Mpc^(-1). Additionally, we find that the non-Gaussian effect in the non-diagonal part of the cross covariance is comparable to, or even stronger than, that in the pre-reconstruction covariance. By adding the non-Gaussian contribution, we obtain good agreement between the analytical and numerical covariance matrices in the non-diagonal part up to k ≈ 0.15 h Mpc^(-1). The agreement in the diagonal part is also improved, but the model still under-predicts the correlation in the cross covariance block.
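The Gaussian prediction referred to above has a simple closed form for the diagonal: the variance of a band power is 2P(k)^2 divided by the number of independent modes in the k-shell. A sketch with toy numbers (the k-bins and amplitudes are illustrative, not the paper's measurement):

```python
import numpy as np

def gaussian_cov_diag(pk, k, dk, volume):
    """Diagonal (Gaussian) part of the power-spectrum covariance:

    Cov[P(k)] = 2 P(k)^2 / N_modes,  N_modes = V k^2 dk / (2 pi^2),

    where N_modes counts independent Fourier modes in a shell of
    width dk for a survey of volume V.
    """
    n_modes = volume * k**2 * dk / (2.0 * np.pi**2)
    return 2.0 * pk**2 / n_modes

k = np.array([0.05, 0.10, 0.15])      # h/Mpc
pk = np.array([2.0e4, 1.0e4, 6.0e3])  # toy amplitudes, (Mpc/h)^3
c1 = gaussian_cov_diag(pk, k, 0.01, 1.0e9)
c2 = gaussian_cov_diag(pk, k, 0.01, 2.0e9)
print(np.allclose(c1, 2.0 * c2))  # -> True: doubling V halves the variance
```

The non-Gaussian terms the abstract adds (trispectrum and mode-coupling contributions) appear on top of this baseline, dominating the off-diagonal cross block.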
Funding: supported by the National SKA Program of China No. 2020SKA0110402; the National Natural Science Foundation of China (NSFC) general program (grant No. 12073088); the National Key R&D Program of China (grant No. 2020YFC2201600); and the Guangdong Basic and Applied Basic Research Foundation (grant No. 2024A1515012573).
Abstract: The deceleration coefficient q and the jerk coefficient j obtained from the Taylor expansion of the scale factor a(t) play an important role in the study of cosmology. The current values of these coefficients for a cosmological model reflect the transition time between the phases dominated by dark energy and matter, and can be used to determine whether and by how much the universe is decelerating. Thus, these coefficient values offer a way of constraining a particular cosmological model. Research based on this scenario was completed by Orlando Luongo and Marco Muccino. However, some approaches in this method should be tested prudently, because conditions such as dd_L/dz > 0 and dH/dz > 0 may not be guaranteed. In this study, we used the MAPAge model to reconstruct the jerk parameters (q_0 and j_0) with DESI 2024 data. Using the MAPAge model ensures that particular physical circumstances are satisfied in the approach of determining the jerk parameters. Compared to the previous method, which used the Taylor expansion coefficients q_0, j_0 and s_0 as model-independent parameters, we obtain more physical and slightly different results for the jerk parameters. Our results suggest that the DESI 2024 BAO data set favours jerk parameters different from those in the standard ΛCDM model.
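The ΛCDM reference values against which such fits are compared follow directly from the Taylor expansion of the scale factor, a(t) = a0 [1 + H0 Δt - (q0/2) H0² Δt² + (j0/6) H0³ Δt³ + ...]; for a flat universe with a cosmological constant, q0 = (3/2)Ω_m - 1 and j0 = 1 exactly. A one-function check:

```python
def lcdm_q0_j0(omega_m):
    """Present-day deceleration and jerk for flat LambdaCDM (w = -1):
    q0 = (3/2) Omega_m - 1, and j0 = 1 exactly, independent of Omega_m.
    """
    return 1.5 * omega_m - 1.0, 1.0

q0, j0 = lcdm_q0_j0(0.3)
print(round(q0, 2), j0)  # -> -0.55 1.0
```

This is why a measured j0 significantly different from 1 is read as evidence against the standard ΛCDM model.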
Funding: supported by the National SKA Program of China (Grant Nos. 2022SKA0110200 and 2022SKA0110202); the National Key Research and Development Program of China (Grant Nos. 2023YFA1607800, 2023YFA1607802, 2023YFA1607804, and 2022YFF0503400); the National Natural Science Foundation of China (Grant Nos. 12103037 and 12273020); the 111 Project (Grant No. B20019); Shanghai Natural Science Foundation (Grant No. 19ZR1466800); the science research grants from the China Manned Space Project (Grant Nos. CMS-CSST-2021-A02, CMS-CSST-2021-A03, and CMS-CSST-2021-B01); the Fundamental Research Funds for the Central Universities (Grant No. XJS221312); the Science Research Project of Hebei Education Department No. BJK2024134; and the High-Performance Computing Platform of Xidian University.
Abstract: The measurement of cosmological distances using baryon acoustic oscillations (BAO) is crucial for studying the universe's expansion. The China Space Station Telescope (CSST) galaxy redshift survey, with its vast volume and sky coverage, provides an opportunity to address key challenges in cosmology. However, redshift uncertainties in galaxy surveys can degrade both angular and radial distance estimates. In this study, we forecast the precision of BAO distance measurements using mock CSST galaxy samples, applying a two-point correlation function (2PCF) wedge approach to mitigate redshift errors. We simulate redshift uncertainties of σ_0 = 0.003 and σ_0 = 0.006, representative of expected CSST errors, and examine their effects on the BAO peak and the distance scaling factors, α_⊥ and α_∥, across redshift bins within 0.0 < z ≤ 1.0. The wedge 2PCF method proves more effective in detecting the BAO peak compared with the monopole 2PCF, particularly for σ_0 = 0.006. Constraints on the BAO peaks show that α_⊥ is well constrained around 1.0, regardless of σ_0, with precision between 1% and 3% across redshift bins. In contrast, α_∥ measurements are more sensitive to increases in σ_0. For σ_0 = 0.003, the results remain close to the fiducial value, with uncertainties ranging between 4% and 9%; for σ_0 = 0.006, significant deviations from the fiducial value are observed. We also study the ability to measure the parameters (Ω_m, H_0 r_d) using distance measurements, proving robust constraints as a cosmological probe under CSST-like redshift uncertainties. These findings demonstrate that the CSST survey enables few-percent precision measurements of D_A using the wedge 2PCF method, highlighting its potential to place tight constraints on the universe's expansion history and contribute to high-precision cosmological studies.
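The wedge approach above splits galaxy pairs by μ = s_∥/s, the cosine of the pair's angle to the line of sight, so that the transverse wedge (small |μ|) stays largely immune to redshift errors. A minimal sketch of the pair assignment (the μ-split value of 0.5 and the two-wedge scheme are illustrative assumptions; the paper may use different wedge definitions):

```python
import numpy as np

def wedge_assign(s_perp, s_par, mu_split=0.5):
    """Assign pairs to 2PCF wedges by mu = s_parallel / s.

    Pairs with |mu| < mu_split go to the 'transverse' wedge (least
    affected by line-of-sight redshift errors), the rest to 'radial'.
    """
    s = np.hypot(s_perp, s_par)
    mu = np.divide(s_par, s, out=np.zeros_like(s), where=s > 0)
    return np.where(np.abs(mu) < mu_split, "transverse", "radial")

print(wedge_assign(np.array([10.0, 1.0]), np.array([1.0, 10.0])))
# -> ['transverse' 'radial']
```

Fitting the BAO peak in the transverse wedge alone is what preserves the α_⊥ precision when σ_0 grows, as the abstract reports.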
Funding: Project supported by the National Natural Science Foundation of China.
Abstract: Fractal dimensions of the volume-limited subsamples with various infrared luminosities, sorted out from samples given by the IRAS galaxy redshift surveys in fields F15 and NGW, have been calculated. The results show that structures with scales larger than about 60 h_50^(-1) Mpc exist in the large-scale distribution of infrared galaxies, and that the distribution of IRAS galaxies has a multi-level fractal structure. That is, the distribution has a fractal structure with a definite fractal dimension D only in a certain scale range, and as the scale increases to a certain turning scale r_c, the distribution transits to another fractal structure with a different D. The fractal dimensions on levels with larger scales are generally larger than those on levels with smaller scales. This is consistent with the observational features of the large-scale distribution of galaxies, i.e. clustering of galaxies is the dominant character on rather small scales, but on larger scales the distribution develops voids.
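A fractal dimension of the kind computed above can be sketched with the correlation integral: C(r), the fraction of galaxy pairs closer than r, scales as r^D within one fractal level, so D is the log-slope of C(r) between two scales. The estimator and toy sample below are illustrative, not the paper's method:

```python
import numpy as np

def correlation_dimension(points, r1, r2):
    """Correlation-integral estimate of the fractal dimension D:
    C(r) = fraction of pairs with separation < r, C(r) ~ r^D, so
    D = log[C(r2)/C(r1)] / log(r2/r1) for scales r1 < r2."""
    sq = np.sum(points ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * points @ points.T
    d2 = d2[np.triu_indices(len(points), k=1)]   # unique pairs only
    c1 = np.mean(d2 < r1 ** 2)
    c2 = np.mean(d2 < r2 ** 2)
    return np.log(c2 / c1) / np.log(r2 / r1)

# a homogeneous 3D point set has a single level with D close to 3
rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 1.0, size=(2000, 3))
D = correlation_dimension(pts, 0.05, 0.10)
print(round(D, 1))  # close to 3 (slightly below because of edge effects)
```

A multi-level structure of the kind the abstract describes would show up as different slopes D on either side of the turning scale r_c.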