SWIFT / SWIFTsim: Commits
Commit 4e6c833b, authored May 29, 2018 by Matthieu Schaller
Updated the INSTALL.swift file with the new configuration options.
Parent: ec52c170
Merge request: !552 "More standard ways of linking other allocators - Correctly apply tc-malloc recommended flags."
1 changed file: INSTALL.swift (+63 additions, -46 deletions)
@@ -96,16 +96,20 @@ SWIFT depends on a number of third party libraries that should be available
before you can build it.

 - HDF5: A HDF5 library (v. 1.8.x or higher) is required to read and
   write particle data. One of the commands "h5cc" or "h5pcc" should
   be available. If "h5pcc" is located then a parallel HDF5 build for
   the located version of MPI should be provided. If the command is
   not available then it can be located using the "--with-hdf5"
   configure option. The value should be the full path to the "h5cc"
   or "h5pcc" commands. SWIFT can make very effective use of a
   parallel build of the library when running on more than one node.
   This is highly recommended.
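When "h5pcc" is not on the shell's PATH, the "--with-hdf5" option described above can point at it directly. A minimal sketch; the install prefix below is a placeholder for your site's HDF5 location, not a SWIFT default:

```shell
# Point SWIFT's configure script at a parallel HDF5 compiler wrapper.
# /opt/hdf5-parallel is an illustrative path: adjust to your system.
./configure --with-hdf5=/opt/hdf5-parallel/bin/h5pcc
```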
 - MPI: To run on more than one node an MPI library that fully
   supports MPI_THREAD_MULTIPLE is required. Before running configure
   the "mpirun" command should be available in the shell. If your
   command isn't called "mpirun" then define the "MPIRUN"

@@ -116,50 +120,63 @@ before you can build it.

   much like the CC one. Use this when your MPI compiler has a
   non-standard name.
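The overrides above can be combined on the configure command line. A sketch only: the launcher and compiler names are illustrative values, and the exact spelling of the MPI compiler variable is assumed here since the surrounding text is truncated:

```shell
# Override the MPI launcher name and the serial compiler when running
# configure; "srun" and "icc" are example values, not recommendations.
MPIRUN=srun ./configure CC=icc
```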
 - GSL: To use cosmological time integration, a version of the GSL
   must be available.

 - FFTW 3.x: To run with periodic gravity forces, a build of the
   fftw 3 library must be available. Note that SWIFT does not make
   use of the parallel capability of FFTW. Calculations are done by
   single MPI nodes independently.

 - libtool: The build system relies on libtool as well as the other
   autotools.
Optional Dependencies
=====================
 - METIS: a build of the METIS library can be optionally used to
   optimize the load between MPI nodes (requires an MPI library).
   This should be found in the standard installation directories, or
   pointed at using the "--with-metis" configuration option. In this
   case the top-level installation directory of the METIS build
   should be given. Note to use METIS you should at least supply
   "--with-metis".

 - libNUMA: a build of the NUMA library can be used to pin the
   threads to the physical cores of the machine SWIFT is running on.
   This is not always necessary as the OS scheduler may do a good job
   of distributing the threads among the different cores on each
   computing node.
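As noted above, "--with-metis" takes the top-level installation directory of the METIS build. A hedged sketch; the path is a placeholder, not a known default:

```shell
# Enable METIS-based domain decomposition; /opt/metis stands in for
# the top-level directory where METIS was installed on your machine.
./configure --with-metis=/opt/metis
```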
 - tcmalloc / jemalloc / TBBmalloc: a build of the tcmalloc library
   (part of gperftools), jemalloc or TBBmalloc can be used to obtain
   faster and more scalable allocations than the standard C malloc
   function that is part of glibc. Using one of these is highly
   recommended on systems with many cores per node. One of the
   options "--with-tcmalloc", "--with-jemalloc" or "--with-tbbmalloc"
   should be passed to the configuration script to use it.
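Since the three allocator flags above are alternatives, only one should be passed to configure. A minimal sketch, shown with tcmalloc:

```shell
# Link SWIFT against tcmalloc instead of the glibc malloc; use
# --with-jemalloc or --with-tbbmalloc here for the other allocators.
./configure --with-tcmalloc
```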
 - gperftools: a build of gperftools can be used to obtain good
   profiling of the code. The option "--with-profiler" needs to be
   passed to the configuration script to use it.
 - DOXYGEN: the doxygen library is required to create the SWIFT API
   documentation.

 - python: Examples and solution scripts use python and rely on the
   numpy library version 1.8.2 or higher.

SWIFT Coding style
...