Commit 5e4bb17c authored by James Willis's avatar James Willis

Merge branch 'master' into FOF

Conflicts:
	src/task.c
parents f0a87025 e1bdccb3
@@ -19,7 +19,7 @@
ACLOCAL_AMFLAGS = -I m4
# Show the way...
-SUBDIRS = src examples doc tests
+SUBDIRS = src examples doc tests tools
# Non-standard files that should be part of the distribution.
EXTRA_DIST = INSTALL.swift .clang-format format.sh
@@ -1150,7 +1150,7 @@ esac
# Hydro scheme.
AC_ARG_WITH([hydro],
   [AS_HELP_STRING([--with-hydro=<scheme>],
-      [Hydro dynamics to use @<:@gadget2, minimal, pressure-entropy, pressure-energy, default, gizmo-mfv, gizmo-mfm, shadowfax, planetary, debug default: gadget2@:>@]
+      [Hydro dynamics to use @<:@gadget2, minimal, pressure-entropy, pressure-energy, pressure-energy-monaghan, default, gizmo-mfv, gizmo-mfm, shadowfax, planetary, debug default: gadget2@:>@]
   )],
   [with_hydro="$withval"],
   [with_hydro="gadget2"]
@@ -1177,6 +1177,9 @@ case "$with_hydro" in
   pressure-energy)
      AC_DEFINE([HOPKINS_PU_SPH], [1], [Pressure-Energy SPH])
   ;;
   pressure-energy-monaghan)
      AC_DEFINE([HOPKINS_PU_SPH_MONAGHAN], [1], [Pressure-Energy SPH with M&M Variable A.V.])
   ;;
   default)
      AC_DEFINE([DEFAULT_SPH], [1], [Default SPH])
   ;;
@@ -1576,6 +1579,7 @@ AM_CONDITIONAL([HAVE_DOXYGEN], [test "$ac_cv_path_ac_pt_DX_DOXYGEN" != ""])
# Handle .in files.
AC_CONFIG_FILES([Makefile src/Makefile examples/Makefile doc/Makefile doc/Doxyfile tests/Makefile])
AC_CONFIG_FILES([tools/Makefile])
AC_CONFIG_FILES([tests/testReading.sh], [chmod +x tests/testReading.sh])
AC_CONFIG_FILES([tests/testActivePair.sh], [chmod +x tests/testActivePair.sh])
AC_CONFIG_FILES([tests/test27cells.sh], [chmod +x tests/test27cells.sh])
...
.. Parameter Files
   Matthieu Schaller, 21st October 2018
.. _Parameter_File_label:
Parameter Files
===============
@@ -140,7 +142,9 @@ background evolution of the Universe need to be specified here. These are:
* The baryon density parameter :math:`\Omega_b`: ``Omega_b``,
* The radiation density parameter :math:`\Omega_r`: ``Omega_r``.
-The last parameter can be omitted and will default to :math:`\Omega_r = 0`.
+The last parameter can be omitted and will default to :math:`\Omega_r = 0`. Note
+that SWIFT will verify on start-up that the matter content of the initial conditions
+matches the cosmology specified in this section.
This section also specifies the start and end of the simulation expressed in
terms of scale-factors. The two parameters are:
@@ -240,7 +244,6 @@ simulation:
   r_cut_max: 4.5   # Default optional value
   r_cut_min: 0.1   # Default optional value
SPH
---
@@ -248,6 +251,67 @@ SPH
Time Integration
----------------
Initial Conditions
------------------
This section of the parameter file contains all the options related to
the initial conditions. The two main parameters are:
* The name of the initial conditions file: ``file_name``,
* Whether the problem uses periodic boundary conditions or not: ``periodic``.
The file path is relative to where the code is being executed. These
parameters can be complemented by some optional values to drive some
specific behaviour of the code.
* Whether to generate gas particles from the DM particles: ``generate_gas_in_ics`` (default: ``0``),
* Whether to activate an additional clean-up of the SPH smoothing lengths: ``cleanup_smoothing_lengths`` (default: ``0``)
The procedure used to generate gas particles from the DM ones is
outlined in the theory documents and is too long for a full
description here. The cleaning of the smoothing lengths is an
expensive operation but can be necessary in the cases where the
initial conditions are of poor quality and the values of the smoothing
lengths are far from the values they should have.
When starting from initial conditions created for Gadget, some
additional flags can be used to convert the values from h-full to
h-free and remove the additional :math:`\sqrt{a}` in the velocities:
* Whether to re-scale all the fields to remove powers of h from the quantities: ``cleanup_h_factors`` (default: ``0``),
* Whether to re-scale the velocities to remove the :math:`\sqrt{a}` assumed by Gadget : ``cleanup_velocity_factors`` (default: ``0``).
The h-factors are self-consistently removed according to their units
and this is applied to all the quantities irrespective of particle
types. The correct power of ``h`` is always calculated for each
quantity.
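As a concrete illustration of these two clean-up operations (a sketch with made-up numbers, not SWIFT's internal conversion code): positions and masses stored h-full carry one inverse power of ``h`` each, and Gadget stores ``v_peculiar / sqrt(a)``, so SWIFT multiplies the stored velocities by :math:`\sqrt{a}`:

```python
import numpy as np

# Hypothetical Gadget-style input values (NOT read from a real file)
h = 0.7                                   # reduced Hubble parameter
a = 0.25                                  # scale-factor of the ICs
pos_h_full = np.array([7.0, 14.0, 21.0])  # positions in Mpc/h
mass_h_full = 1.4                         # mass in 1e10 Msun/h
vel_gadget = 100.0                        # Gadget convention: v_peculiar / sqrt(a)

# cleanup_h_factors: divide out the correct power of h for each quantity
pos = pos_h_full / h    # -> Mpc, roughly [10, 20, 30]
mass = mass_h_full / h  # -> 1e10 Msun, roughly 2.0

# cleanup_velocity_factors: restore the sqrt(a) assumed by Gadget
vel = vel_gadget * np.sqrt(a)  # -> 50.0
```

Quantities with other unit exponents (e.g. densities) pick up the corresponding power of ``h`` instead of the single inverse power shown here.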
Finally, SWIFT also offers these options:
* A factor to re-scale all the smoothing-lengths by a fixed amount: ``smoothing_length_scaling`` (default: ``1.``),
* A shift to apply to all the particles: ``shift`` (default: ``[0.0,0.0,0.0]``),
* Whether to replicate the box along each axis: ``replicate`` (default: ``1``).
The shift is expressed in internal units. The option to replicate the
box is especially useful for weak-scaling tests. When set to an
integer >1, the box size is multiplied by this integer along each axis
and the particles are duplicated and shifted so as to create exact
copies of the simulation volume.
The full section to start a DM+hydro run from Gadget DM-only ICs would
be:
.. code:: YAML

   InitialConditions:
     file_name:  my_ics.hdf5
     periodic:   1
     cleanup_h_factors: 1
     cleanup_velocity_factors: 1
     generate_gas_in_ics: 1
     cleanup_smoothing_lengths: 1
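For a weak-scaling test, the optional parameters described above can be combined in the same section; the parameter names come from this document, but the values here are purely illustrative:

.. code:: YAML

   InitialConditions:
     file_name:  my_ics.hdf5
     periodic:   1
     replicate:  2                  # 2x2x2 exact copies, i.e. 8x the particle load
     shift:      [0.0, 0.0, 0.0]    # expressed in internal units
     smoothing_length_scaling: 1.   # default: no re-scaling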
Physical Constants
------------------
...
@@ -23,7 +23,10 @@ group sizes, the interesting data in the ``.catalog_group`` files are:
+ The ``group_size``: gives a list of all the halos and the number of particles
  in the halo; this list is numbered from 0 until the number of groups minus
  one. It is important to note that the groups are not ordered in any way [#order]_.
  It is also important to note that the group size includes both the bound and
  unbound particles; always use the ``Offset`` and ``Offset_unbound`` data
  when reading from the ``catalog_particles`` files.
+ The ``Num_of_groups`` or ``Total_num_of_groups``: gives the total number of
  groups in the snapshot.
+ The ``Offset`` list: This list gives the offset of the particles. In the
@@ -55,6 +58,64 @@ Besides the ``.catalog_particles`` file, there is also a
but only for the unbound particles; a particle can only be present in one of
these two lists.
Extracting the particles in a given halo
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``.catalog_particles`` file returns particle IDs that need to be matched
with those in your snapshot to find the particles in the file that you
wish to extract. The python snippet below should give you an idea of how to
go about doing this for the bound particles.
First, we need to extract the offset from the ``.catalog_group`` file, and
work out how many *bound* particles are in our halo. We can do this by
looking at the next offset. Then, we can ID match those with the snapshot
file, and get the mask for the *positions* in the file that correspond
to our bound particles. (Note this requires ``numpy >= 1.15.0``.)
.. code-block:: python
   :linenos:

   import numpy as np
   import h5py

   snapshot_file = h5py.File("swift_snapshot.hdf5", "r")
   group_file = h5py.File("velociraptor_output.catalog_group", "r")
   particles_file = h5py.File("velociraptor_output.catalog_particles", "r")

   halo = 100
   # Grab the start position in the particles file to read from
   halo_start_position = group_file["Offset"][halo]
   halo_end_position = group_file["Offset"][halo + 1]

   # We're done with that file now, best to close earlier rather than later
   group_file.close()

   # Get the relevant particle IDs for that halo; this includes particles
   # of _all_ types.
   particle_ids_in_halo = particles_file["Particle_IDs"][
       halo_start_position:halo_end_position
   ]

   # Again, we're done with that file.
   particles_file.close()

   # Now, the tricky bit. We need to create the correspondence between the
   # positions in the snapshot file, and the ids.

   # Let's look for the dark matter particles in that halo.
   particle_ids_from_snapshot = snapshot_file["PartType1/ParticleIDs"][...]

   _, indices_v, indices_p = np.intersect1d(
       particle_ids_in_halo,
       particle_ids_from_snapshot,
       assume_unique=True,
       return_indices=True,
   )

   # indices_p gives the positions in the snapshot file where we will find
   # the co-ordinates that we're looking for! To get the positions of all of
   # those particles we read the dataset fully first: h5py requires fancy
   # indices to be in increasing order, which indices_p need not be.
   particle_positions_in_halo = snapshot_file["PartType1/Coordinates"][...][
       indices_p
   ]
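The ID-matching step is the easiest place to go wrong, so here is a self-contained toy check of the same ``np.intersect1d`` trick with made-up IDs (no HDF5 files involved): ``indices_p`` indexes the snapshot-order arrays, so indexing any snapshot-order dataset with it recovers exactly the halo members, in the sorted-ID order that ``intersect1d`` returns.

```python
import numpy as np

# Made-up IDs: four halo members scattered through an eight-particle snapshot
particle_ids_in_halo = np.array([11, 5, 19, 2])
snapshot_ids = np.array([19, 7, 2, 31, 5, 23, 11, 3])
snapshot_data = snapshot_ids * 10  # stand-in for a snapshot-order dataset

_, indices_v, indices_p = np.intersect1d(
    particle_ids_in_halo, snapshot_ids, assume_unique=True, return_indices=True
)

# indices_p picks the halo members out of the snapshot-order arrays
matched_ids = snapshot_ids[indices_p]    # -> [2, 5, 11, 19]
matched_data = snapshot_data[indices_p]  # -> [20, 50, 110, 190]
```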
Catalog_parttypes file
----------------------
...
@@ -91,6 +91,7 @@ if __name__ == "__main__":
    # Creation of first frame
    fig, ax = plt.subplots(1, 1, figsize=(1, 1), frameon=False)
    ax.axis("off")  # Remove annoying black frame.
    data_x, data_y, density = load_and_extract("kelvinhelmholtz_0000.hdf5")
...
-# tHIS FIle is part of SWIFT.
+# This file is part of SWIFT.
# Copyright (c) 2012 Pedro Gonnet (pedro.gonnet@durham.ac.uk),
#               Matthieu Schaller (matthieu.schaller@durham.ac.uk).
#
@@ -118,18 +118,3 @@ EXTRA_DIST = CoolingBox/coolingBox.yml CoolingBox/energy_plot.py CoolingBox/make
# Default parameter file
EXTRA_DIST += parameter_example.yml
-# Scripts to plot task graphs
-EXTRA_DIST += plot_tasks.py analyse_tasks.py process_plot_tasks_MPI process_plot_tasks
-# Scripts to plot threadpool 'task' graphs
-EXTRA_DIST += analyse_threadpool_tasks.py \
-              plot_threadpool.py \
-              process_plot_threadpool
-# Script for scaling plot
-EXTRA_DIST += plot_scaling_results.py \
-              plot_scaling_results_breakdown.py
-# Script for gravity accuracy
-EXTRA_DIST += plot_gravity_checks.py
@@ -70,11 +70,11 @@ snap = int(sys.argv[1])
sim = h5py.File("sodShock_%04d.hdf5"%snap, "r")
boxSize = sim["/Header"].attrs["BoxSize"][0]
time = sim["/Header"].attrs["Time"][0]
-scheme = sim["/HydroScheme"].attrs["Scheme"]
-kernel = sim["/HydroScheme"].attrs["Kernel function"]
+scheme = str(sim["/HydroScheme"].attrs["Scheme"])
+kernel = str(sim["/HydroScheme"].attrs["Kernel function"])
neighbours = sim["/HydroScheme"].attrs["Kernel target N_ngb"]
eta = sim["/HydroScheme"].attrs["Kernel eta"]
-git = sim["Code"].attrs["Git Revision"]
+git = str(sim["Code"].attrs["Git Revision"])
x = sim["/PartType0/Coordinates"][:,0]
v = sim["/PartType0/Velocities"][:,0]
@@ -82,6 +82,11 @@ u = sim["/PartType0/InternalEnergy"][:]
S = sim["/PartType0/Entropy"][:]
P = sim["/PartType0/Pressure"][:]
rho = sim["/PartType0/Density"][:]
try:
    alpha = sim["/PartType0/Viscosity"][:]
    plot_alpha = True
except KeyError:
    # Snapshot was produced without a variable artificial viscosity
    plot_alpha = False
N = 1000  # Number of points
x_min = -1.
@@ -259,14 +264,23 @@ ylabel("${\\rm{Internal~Energy}}~u$", labelpad=0)
xlim(-0.5, 0.5)
ylim(0.8, 2.2)

# Entropy/alpha profile ---------------------------------
subplot(235)

if plot_alpha:
    plot(x, alpha, '.', color='r', ms=4.0)
    ylabel(r"${\rm{Viscosity}}~\alpha$", labelpad=0)
    # Show location of shock
    plot([x_56, x_56], [-100, 100], color="k", alpha=0.5, ls="dashed", lw=1.2)
    ylim(0, 1)
else:
    plot(x, S, '.', color='r', ms=4.0)
    plot(x_s, s_s, '--', color='k', alpha=0.8, lw=1.2)
    ylabel("${\\rm{Entropy}}~S$", labelpad=0)
    ylim(0.8, 3.8)

xlabel("${\\rm{Position}}~x$", labelpad=0)
xlim(-0.5, 0.5)
# Information -------------------------------------
subplot(236, frameon=False)
@@ -284,5 +298,6 @@ ylim(0, 1)
xticks([])
yticks([])
tight_layout()
savefig("SodShock.png", dpi=200)
@@ -59,10 +59,10 @@ Scheduler:
  cell_sub_size_self_hydro:  32000      # (Optional) Maximal number of interactions per sub-self hydro task (this is the default value).
  cell_sub_size_pair_grav:   256000000  # (Optional) Maximal number of interactions per sub-pair gravity task (this is the default value).
  cell_sub_size_self_grav:   32000      # (Optional) Maximal number of interactions per sub-self gravity task (this is the default value).
  cell_sub_size_pair_stars:  256000000  # (Optional) Maximal number of interactions per sub-pair stars task (this is the default value).
  cell_sub_size_self_stars:  32000      # (Optional) Maximal number of interactions per sub-self stars task (this is the default value).
  cell_split_size:           400        # (Optional) Maximal number of particles per cell (this is the default value).
-  cell_subdepth_grav:        2          # (Optional) Maximal depth the gravity tasks can be pushed down (this is the default value).
+  cell_subdepth_diff_grav:   4          # (Optional) Maximal depth difference between leaves and a cell that gravity tasks can be pushed down to (this is the default value).
  max_top_level_cells:       12         # (Optional) Maximal number of top-level cells in any dimension. The number of top-level cells will be the cube of this (this is the default value).
  tasks_per_cell:            0          # (Optional) The average number of tasks per cell. If not large enough the simulation will fail (means guess...).
  mpi_message_limit:         4096       # (Optional) Maximum MPI task message size to send non-buffered, KB.
...
#!/usr/bin/env python
import sys
import glob
import re
import numpy as np
import matplotlib.pyplot as plt
params = {'axes.labelsize': 14,
'axes.titlesize': 18,
'font.size': 12,
'legend.fontsize': 12,
'xtick.labelsize': 14,
'ytick.labelsize': 14,
'text.usetex': True,
'figure.figsize': (12, 10),
'figure.subplot.left' : 0.06,
'figure.subplot.right' : 0.99 ,
'figure.subplot.bottom' : 0.06 ,
'figure.subplot.top' : 0.99 ,
'figure.subplot.wspace' : 0.14 ,
'figure.subplot.hspace' : 0.14 ,
'lines.markersize' : 6,
'lines.linewidth' : 3.,
'text.latex.unicode': True
}
plt.rcParams.update(params)
plt.rc('font',**{'family':'sans-serif','sans-serif':['Times']})
min_error = 1e-7
max_error = 3e-1
num_bins = 64
# Construct the bins
bin_edges = np.linspace(np.log10(min_error), np.log10(max_error), num_bins + 1)
bin_size = (np.log10(max_error) - np.log10(min_error)) / num_bins
bins = 0.5*(bin_edges[1:] + bin_edges[:-1])
bin_edges = 10**bin_edges
bins = 10**bins
# Colours
cols = ['#332288', '#88CCEE', '#117733', '#DDCC77', '#CC6677']
# Time-step to plot
step = int(sys.argv[1])
periodic = int(sys.argv[2])
# Find the files for the different expansion orders
order_list = glob.glob("gravity_checks_swift_step%.4d_order*.dat"%step)
num_order = len(order_list)
# Get the multipole orders
order = np.zeros(num_order)
for i in range(num_order):
    order[i] = int(order_list[i][35])
order = sorted(order)
order_list = sorted(order_list)
# Read the exact accelerations first
if periodic:
    data = np.loadtxt('gravity_checks_exact_periodic_step%.4d.dat'%step)
else:
    data = np.loadtxt('gravity_checks_exact_step%.4d.dat'%step)
exact_ids = data[:,0]
exact_pos = data[:,1:4]
exact_a = data[:,4:7]
exact_pot = data[:,7]
# Sort stuff
sort_index = np.argsort(exact_ids)
exact_ids = exact_ids[sort_index]
exact_pos = exact_pos[sort_index, :]
exact_a = exact_a[sort_index, :]
exact_pot = exact_pot[sort_index]
exact_a_norm = np.sqrt(exact_a[:,0]**2 + exact_a[:,1]**2 + exact_a[:,2]**2)
print "Number of particles tested:", np.size(exact_ids)
# Start the plot
plt.figure()
count = 0
# Get the Gadget-2 data if existing
if periodic:
    gadget2_file_list = glob.glob("forcetest_gadget2_periodic.txt")
else:
    gadget2_file_list = glob.glob("forcetest_gadget2.txt")
if len(gadget2_file_list) != 0:

    gadget2_data = np.loadtxt(gadget2_file_list[0])
    gadget2_ids = gadget2_data[:,0]
    gadget2_pos = gadget2_data[:,1:4]
    gadget2_a_exact = gadget2_data[:,4:7]
    gadget2_a_grav = gadget2_data[:, 7:10]

    # Sort stuff
    sort_index = np.argsort(gadget2_ids)
    gadget2_ids = gadget2_ids[sort_index]
    gadget2_pos = gadget2_pos[sort_index, :]
    gadget2_a_exact = gadget2_a_exact[sort_index, :]
    gadget2_exact_a_norm = np.sqrt(gadget2_a_exact[:,0]**2 + gadget2_a_exact[:,1]**2 + gadget2_a_exact[:,2]**2)
    gadget2_a_grav = gadget2_a_grav[sort_index, :]

    # Cross-checks
    if not np.array_equal(exact_ids, gadget2_ids):
        print "Comparing different IDs !"

    if np.max(np.abs(exact_pos - gadget2_pos)/np.abs(gadget2_pos)) > 1e-6:
        print "Comparing different positions ! max difference:"
        index = np.argmax(exact_pos[:,0]**2 + exact_pos[:,1]**2 + exact_pos[:,2]**2 - gadget2_pos[:,0]**2 - gadget2_pos[:,1]**2 - gadget2_pos[:,2]**2)
        print "Gadget2 (id=%d):"%gadget2_ids[index], gadget2_pos[index,:], "exact (id=%d):"%exact_ids[index], exact_pos[index,:], "\n"

    diff = np.abs(exact_a_norm - gadget2_exact_a_norm) / np.abs(gadget2_exact_a_norm)
    max_diff = np.max(diff)
    if max_diff > 2e-6:
        print "Comparing different exact accelerations !"
        print "Median=", np.median(diff), "Mean=", np.mean(diff), "99%=", np.percentile(diff, 99)
        print "max difference ( relative diff =", max_diff, "):"
        #index = np.argmax(exact_a[:,0]**2 + exact_a[:,1]**2 + exact_a[:,2]**2 - gadget2_a_exact[:,0]**2 - gadget2_a_exact[:,1]**2 - gadget2_a_exact[:,2]**2)
        index = np.argmax(diff)
        print "a_exact --- Gadget2:", gadget2_a_exact[index,:], "exact:", exact_a[index,:]
        print "pos --- Gadget2: (id=%d):"%gadget2_ids[index], gadget2_pos[index,:], "exact (id=%d):"%exact_ids[index], exact_pos[index,:], "\n"

    # Compute the error norm
    diff = gadget2_a_exact - gadget2_a_grav
    norm_diff = np.sqrt(diff[:,0]**2 + diff[:,1]**2 + diff[:,2]**2)
    norm_a = np.sqrt(gadget2_a_exact[:,0]**2 + gadget2_a_exact[:,1]**2 + gadget2_a_exact[:,2]**2)

    norm_error = norm_diff / norm_a
    error_x = abs(diff[:,0]) / norm_a
    error_y = abs(diff[:,1]) / norm_a
    error_z = abs(diff[:,2]) / norm_a

    # Bin the error (np.histogram returns a (counts, edges) tuple, so
    # normalise the counts after unpacking rather than dividing the tuple)
    norm_error_hist, _ = np.histogram(norm_error, bins=bin_edges, density=False)
    norm_error_hist = norm_error_hist / (np.size(norm_error) * bin_size)
    error_x_hist, _ = np.histogram(error_x, bins=bin_edges, density=False)
    error_x_hist = error_x_hist / (np.size(norm_error) * bin_size)
    error_y_hist, _ = np.histogram(error_y, bins=bin_edges, density=False)
    error_y_hist = error_y_hist / (np.size(norm_error) * bin_size)
    error_z_hist, _ = np.histogram(error_z, bins=bin_edges, density=False)
    error_z_hist = error_z_hist / (np.size(norm_error) * bin_size)

    norm_median = np.median(norm_error)
    median_x = np.median(error_x)
    median_y = np.median(error_y)
    median_z = np.median(error_z)

    norm_per99 = np.percentile(norm_error,99)
    per99_x = np.percentile(error_x,99)
    per99_y = np.percentile(error_y,99)
    per99_z = np.percentile(error_z,99)

    norm_max = np.max(norm_error)
    max_x = np.max(error_x)
    max_y = np.max(error_y)
    max_z = np.max(error_z)

    print "Gadget-2 ---- "
    print "Norm: median= %f 99%%= %f max= %f"%(norm_median, norm_per99, norm_max)
    print "X   : median= %f 99%%= %f max= %f"%(median_x, per99_x, max_x)
    print "Y   : median= %f 99%%= %f max= %f"%(median_y, per99_y, max_y)
    print "Z   : median= %f 99%%= %f max= %f"%(median_z, per99_z, max_z)
    print ""

    plt.subplot(231)
    plt.text(min_error * 1.5, 1.55, "$50\\%%\\rightarrow%.4f~~ 99\\%%\\rightarrow%.4f$"%(norm_median, norm_per99), ha="left", va="top", alpha=0.8)
    plt.semilogx(bins, norm_error_hist, 'k--', label="Gadget-2", alpha=0.8)
    plt.subplot(232)
    plt.semilogx(bins, error_x_hist, 'k--', label="Gadget-2", alpha=0.8)
    plt.text(min_error * 1.5, 1.55, "$50\\%%\\rightarrow%.4f~~ 99\\%%\\rightarrow%.4f$"%(median_x, per99_x), ha="left", va="top", alpha=0.8)
    plt.subplot(233)
    plt.semilogx(bins, error_y_hist, 'k--', label="Gadget-2", alpha=0.8)
    plt.text(min_error * 1.5, 1.55, "$50\\%%\\rightarrow%.4f~~ 99\\%%\\rightarrow%.4f$"%(median_y, per99_y), ha="left", va="top", alpha=0.8)
    plt.subplot(234)
    plt.semilogx(bins, error_z_hist, 'k--', label="Gadget-2", alpha=0.8)
    plt.text(min_error * 1.5, 1.55, "$50\\%%\\rightarrow%.4f~~ 99\\%%\\rightarrow%.4f$"%(median_z, per99_z), ha="left", va="top", alpha=0.8)

    count += 1
# Plot the different histograms
for i in range(num_order):

    data = np.loadtxt(order_list[i])
    ids = data[:,0]
    pos = data[:,1:4]
    a_grav = data[:, 4:7]
    pot = data[:, 7]

    # Sort stuff
    sort_index = np.argsort(ids)
    ids = ids[sort_index]
    pos = pos[sort_index, :]
    a_grav = a_grav[sort_index, :]
    pot = pot[sort_index]

    # Cross-checks
    if not np.array_equal(exact_ids, ids):
        print "Comparing different IDs !"

    if np.max(np.abs(exact_pos - pos)/np.abs(pos)) > 1e-6:
        print "Comparing different positions ! max difference:"
        index = np.argmax(exact_pos[:,0]**2 + exact_pos[:,1]**2 + exact_pos[:,2]**2 - pos[:,0]**2 - pos[:,1]**2 - pos[:,2]**2)
        print "SWIFT (id=%d):"%ids[index], pos[index,:], "exact (id=%d):"%exact_ids[index], exact_pos[index,:], "\n"

    # Compute the error norm
    diff = exact_a - a_grav
    diff_pot = exact_pot - pot