A Lightweight UID Mapping Scheme for the Lustre Filesystem
Joshua Walgenbach
Software Engineer, Data Capacitor Project
Indiana University
[email protected]
Lustre
•  Distributed Network Filesystem
•  Separates Metadata from Data
•  Data Scales Horizontally
•  Works Great as a Filesystem for Clusters
Lustre: Scalable Object Storage
[Diagram: Client, MDS (metadata server), OSS (object storage server)]
Workflow is Not Local
•  High speed networks mean you can collect data somewhere and analyze it across the country, or the world.
•  The traditional method is copying data from one point to another.
•  Need a centralized filesystem to manage workflow.
Data Ownership
•  UNIX tracks file ownership with simple integers (UIDs)
•  Obviously won’t work across organizations
•  Need a method to map users into a single UID space
IU UID Mapping
•  Lightweight
–  Not everyone needs / wants Kerberos
–  Not everyone needs / wants encryption
•  Only change MDS code
–  Want to maximize clients we can serve
–  Simple enough to port forward
UID Mapping
•  Lookup calls pluggable kernel module
–  Binary tree stored in memory
–  Based on NID or NID range
–  Remote UID mapped to Effective UID
•  Other lookup schemes are possible
Announced One Year Ago
•  IU Lustre WAN Service
•  Dedicated system of 360 TB for research
•  Serve on a per project basis
•  Single DDN S2A9950 Controller
•  6 Dell 2950 Servers
–  4 Object Storage Servers
–  1 pair of Metadata Servers for failover
IU’s Lustre WAN on TeraGrid
•  Production mount at PSC
–  Altix 4700 and Login Nodes
–  GridFTP and Gatekeeper nodes
•  Test mounts at LSU, TACC, SDSC
–  Login nodes
–  LSU and SDSC ready to deploy on some compute nodes
•  Future plans for NCSA, ORNL, and Purdue
EVIA – University of Michigan
[Workflow diagram: Uncompressed Digital Video from the University of Michigan, IU HPSS, SAMBA, IU Windows Tools]
CReSIS – University of Kansas
[Workflow diagram: Greenland Ice Sheet Data from the University of Kansas, IU HPSS, IU’s Quarry Supercomputer]
3D Hydrodynamic Workflow
[Workflow diagram: HPSS, PSC Altix 4700, IU Astronomy Department]
Lustre Across the WAN
[Diagram: Client, Metadata Server, Object Storage Servers; IU - PSC network, 17 ms latency]
2007 Bandwidth Challenge: Five Applications Simultaneously
•  Acquisition and Visualization
–  Live Instrument Data
•  Chemistry
–  Rare Archival Material
•  Humanities
•  Acquisition, Analysis, and Visualization
–  Trace Data
•  Computer Science
–  Simulation Data
•  Life Science
•  High Energy Physics
Data acquisition from a global consortium of crystallography facilities
Post processed images of the palm leaves
Sample images of the palm leaf of Sarvamoola granthas illustrating the
performance of the image processing algorithms. (a) Stitched 8 bit grayscale
image without normalization and contrast enhancement, (b) Final image after
contrast enhancement
Simulation of a High Energy Reaction
Performance Analysis with Vampir
Alzheimer’s Amyloid Peptide Analysis
Challenge Results
Many
Thanks
•  Stephen
Simms,
JusPn
Miller,
Nathan
Heald,
James
McGookey,,
Scom
Michael,
Mam
Link
(IU)
•  Kit
Westneat
(DDN)
•  Sun/CFS
support
and
engineering
•  Michael
Kluge,
Guido
Juckeland
,
Robert
Henschel,
Mamhias
Mueller
(ZIH,
Dresden)
•  Thorbjorn
Axellson
(CReSIS)
•  Craig
Prescom
(UFL)
•  Ariel
MarPnez
(LONI)
•  Greg
Pike
and
ORNL
•  Doug
Balog,
Josephine
Palencia,
and
PSC
Support for this work provided by the National Science Foundation is gratefully acknowledged and appreciated (CNS-0521433)

