Scientific Data Management Center (ISIC)
http://sdmcenter.lbl.gov (contains extensive publication list)
Scientific Data Management Center: Participating Institutions
Center PI: Arie Shoshani (LBNL)
DOE Laboratory co-PIs:
• Bill Gropp, Rob Ross (ANL)
• Arie Shoshani, Doron Rotem (LBNL)
• Terence Critchlow, Chandrika Kamath (LLNL)
• Nagiza Samatova, Andy White (ORNL)
University co-PIs:
• Mladen Vouk (North Carolina State)
• Alok Choudhary (Northwestern)
• Reagan Moore, Bertram Ludaescher (UC San Diego / SDSC)
• Calton Pu (Georgia Tech)
• Steve Parker (U of Utah, future)
Phases of Scientific Exploration
q Data Generation
§ From large-scale simulations or experiments
§ Fast data growth with computational power
§ Examples:
• HENP: 100 teraops and 10 petabytes by 2006
• Climate: spatial resolution T42 (280 km) -> T85 (140 km) -> T170 (70 km); T42 is about 1 TB per 100-year run => factor of ~10-20
§ Problems:
• Can’t dump the data to storage fast enough – waste of compute resources
• Can’t move terabytes of data over WAN robustly – waste of scientist’s time
• Can’t steer the simulation – waste of time and resources
• Need to reorganize and transform data – large data-intensive tasks slowing progress
Phases of Scientific Exploration
q Data Analysis
§ Analysis of large data volumes
§ Can’t fit all data in memory
§ Problems:
• Finding the relevant data – need efficient indexing
• Cluster analysis – need linear scaling
• Feature selection – need efficient high-dimensional analysis
• Data heterogeneity – need to combine data from diverse sources
• Streamlining analysis steps – output of one step needs to match input of the next
Example Data Flow in TSI (Logistical Network)
Courtesy: John Blondin
Goal: Reduce the Data Management Overhead
• Efficiency – e.g. parallel I/O, indexing, matching storage structures to the application
• Effectiveness – e.g. access data by attributes, not files; facilitate massive data movement
• New algorithms – e.g. specialized PCA techniques to separate signals or to achieve better spatial data compression
• Enabling ad-hoc exploration of data – e.g. an exploratory “run and render” capability to analyze and visualize simulation output while the code is running
Approach
§ Use an integrated framework (the SDM Framework) that:
• Provides a scientific workflow capability (Scientific Process Automation layer)
• Supports data mining and analysis tools (Data Mining & Analysis layer)
• Accelerates storage and access to data (Storage Efficient Access layer)
§ Simplify data management tasks for the scientist:
• Hide details of the underlying parallel and indexing technology
• Permit assembly of modules using a simple graphical workflow description tool
[Diagram: the three SDM Framework layers connect the Scientific Application to Scientific Understanding]
Technology Details by Layer
Accomplishments: Storage Efficient Access (SEA)
q Developed Parallel netCDF:
§ Enables high-performance parallel I/O to netCDF datasets
§ Achieves up to 10-fold performance improvement over HDF5 (FLASH I/O benchmark, 8 x 8 x 8 block sizes)
q Enhanced ROMIO:
§ Provides MPI access to PVFS
§ Advanced parallel file system interfaces for more efficient access
q Developed PVFS2 (Parallel Virtual File System):
§ Adds Myrinet GM and InfiniBand support, shared-memory communication, improved fault tolerance, and asynchronous I/O
§ Offered by Dell and HP for clusters
q Deployed an HPSS Storage Resource Manager (SRM) with PVFS:
§ Automatic access of HPSS files to PVFS through the MPI-IO library
§ SRM is a middleware component
[Diagram: before, processes P0-P3 wrote serially to the parallel file system; after, P0-P3 write concurrently through Parallel netCDF]
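The core idea behind a Parallel netCDF collective write is that each MPI rank computes a disjoint slice of the global array and issues one collective call over it. A minimal sketch of that partitioning, assuming a hypothetical helper `block_partition` (plain Python, no MPI; the real library exposes C calls such as `ncmpi_put_vara_double_all`):

```python
# Sketch: partition a 1-D global array of n_records across n_procs ranks,
# as a PnetCDF-style collective write would. Hypothetical helper, not the
# actual PnetCDF API.

def block_partition(n_records, n_procs, rank):
    """Return (start, count) for this rank's contiguous slice."""
    base, extra = divmod(n_records, n_procs)
    count = base + (1 if rank < extra else 0)   # first `extra` ranks get one more
    start = rank * base + min(rank, extra)
    return start, count

# Each rank would then issue one collective call over its slice.
parts = [block_partition(1000, 4, r) for r in range(4)]
print(parts)  # → [(0, 250), (250, 250), (500, 250), (750, 250)]
```

The slices are contiguous and disjoint, so all ranks can write in a single collective operation with no overlap, which is what lets the I/O layer aggregate requests efficiently.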
Robust Multi-file Replication
q Problem: move thousands of files robustly
§ Takes many hours
§ Need error recovery from mass storage system failures and network failures
§ Solution: use Storage Resource Managers (SRMs) – the DataMover issues SRM-COPY for thousands of files; the receiving SRM issues SRM-GET one file at a time, and files move via GridFTP GET in pull mode (e.g. NCAR performs writes, LBNL performs reads), staging files from MSS to disk cache before network transfer and archiving
q Problem: too slow
§ Use parallel streams
§ Use concurrent transfers
§ Use large FTP windows
§ Pre-stage files from MSS
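The error-recovery and concurrency ideas above can be sketched in a few lines: retry each file on transient failure, and run several transfers at once to approximate SRM's concurrent streams. This is an illustrative sketch, not the DataMover implementation; `transfer` stands in for an SRM/GridFTP call:

```python
# Sketch of DataMover-style robust multi-file movement:
# per-file retry on transient errors + concurrent transfers.
import concurrent.futures
import time

def copy_with_retry(transfer, filename, max_retries=3, delay=0.0):
    """Attempt one file transfer, retrying on transient IOError."""
    for attempt in range(1, max_retries + 1):
        try:
            return transfer(filename)
        except IOError:
            if attempt == max_retries:
                raise                 # give up after max_retries
            time.sleep(delay)         # back off before retrying

def move_files(transfer, filenames, streams=4):
    """Move many files with `streams` concurrent transfers."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=streams) as pool:
        return list(pool.map(lambda f: copy_with_retry(transfer, f),
                             filenames))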
File tracking helps to identify bottlenecks (the trace shows that archiving is the bottleneck)
File tracking shows recovery from transient failures (total: 45 GB)
Accomplishments: Data Mining and Analysis (DMA)
q Developed Parallel-VTK:
§ Efficient 2D/3D parallel scientific visualization for netCDF and HDF files
§ Built on top of PnetCDF
q Developed a “region tracking” tool:
§ For exploring 2D/3D scientific databases
§ Uses bitmap technology to identify regions based on multi-attribute conditions
[Figure: combustion region tracking]
q Implemented an Independent Component Analysis (ICA) module:
§ Used for accurate signal separation
§ Used for discovering key parameters that correlate with observed data
[Figure: El Nino signal (red) and estimation (blue) closely match]
q Developed highly effective data reduction:
§ Achieves 15-fold reduction with a high level of accuracy
§ Uses parallel Principal Component Analysis (PCA) technology
q Developed ASPECT:
§ A framework that supports a rich set of pluggable data analysis tools, including all the tools above
§ A rich suite of statistical tools based on the R package
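The PCA-based data reduction above works by keeping only the top-k principal components of the data and reconstructing from them. A minimal single-node sketch with NumPy (the center's version is a parallel implementation; `pca_reduce`/`pca_restore` are hypothetical names for illustration):

```python
# Sketch of PCA data reduction: project onto top-k components, then
# reconstruct. Illustrative only, not the center's parallel PCA code.
import numpy as np

def pca_reduce(data, k):
    """Reduce (n_samples, n_features) data to its top-k component scores."""
    mean = data.mean(axis=0)
    centered = data - mean
    # SVD of the centered data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ Vt[:k].T          # reduced representation
    return scores, Vt[:k], mean

def pca_restore(scores, components, mean):
    """Approximate reconstruction from the reduced representation."""
    return scores @ components + mean
```

Storing only the scores, the k component vectors, and the mean is what yields the large reduction factor when k is much smaller than the number of features, and the reconstruction error is bounded by the discarded singular values.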
ASPECT Analysis Environment
Example data flow: Select (temp, pressure) From astro-data Where (step=101) and (entropy>1000); take a sample; correlate; render; display.
Data Mining & Analysis layer: sample selection; R analysis tool (run R analysis); pVTK tool (run pVTK filter, visualize scatter plot in Qt)
Storage Efficient Access layer: bitmap index selection (use bitmap for the Where condition); Parallel netCDF (get variables by var-names and ranges; read/write named buffers); PVFS
Hardware, OS, and MSS (HPSS)
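The bitmap index selection step above answers a query like "Where (step=101) and (entropy>1000)" by intersecting precomputed bit-vectors instead of scanning rows. A toy in-memory sketch (Python ints as bit-vectors; the center's technology is a compressed, disk-resident index, which this does not attempt to model):

```python
# Sketch of bitmap-index selection: one bit per row, AND to combine
# conditions. Illustrative only; real bitmap indexes are compressed.

def equality_bitmaps(values):
    """One bit-vector per distinct value; bit i set iff values[i] == v."""
    maps = {}
    for i, v in enumerate(values):
        maps[v] = maps.get(v, 0) | (1 << i)
    return maps

def range_bitmap(values, predicate):
    """Bit-vector with bit i set iff predicate(values[i])."""
    bm = 0
    for i, v in enumerate(values):
        if predicate(v):
            bm |= 1 << i
    return bm

def rows(bitmap):
    """Decode a bit-vector back to row indices."""
    out, i = [], 0
    while bitmap:
        if bitmap & 1:
            out.append(i)
        bitmap >>= 1
        i += 1
    return out

# "Where (step = 101) and (entropy > 1000)" becomes a bitwise AND:
step    = [100, 101, 101, 102]
entropy = [900, 1500, 800, 2000]
hit = equality_bitmaps(step)[101] & range_bitmap(entropy, lambda e: e > 1000)
print(rows(hit))  # → [1]
```

The multi-attribute condition never touches the raw data until the final decode, which is why bitmap indexes scale well for region identification over many attributes.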
Accomplishments: Scientific Process Automation (SPA)
q Unique requirements of scientific workflows:
§ Moving large volumes between modules – tightly-coupled efficient data movement
§ Specification of granularity-based iteration – e.g. in spatio-temporal simulations, a time step is a “granule”
§ Support for data transformation – complex data types (including file formats, e.g. netCDF, HDF)
§ Dynamic steering of workflow by user – dynamic user examination of results
q Developed a working scientific workflow system:
§ Automatic microarray analysis
§ Uses web-wrapping tools developed by the center
§ Uses the Kepler workflow engine (Kepler is an adaptation of the UC Berkeley tool Ptolemy)
[Screenshots: workflow steps defined graphically; workflow results presented to user]
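At its core, a workflow like the microarray analysis chains steps so that each step's output feeds the next step's input. A minimal dataflow sketch (hypothetical `normalize`/`threshold` steps standing in for real workflow actors; Kepler itself wires graphical actors, not Python functions):

```python
# Sketch of a linear dataflow pipeline in the spirit of workflow actors:
# each step consumes the previous step's output.

def run_pipeline(data, steps):
    for step in steps:
        data = step(data)     # output of one step is input of the next
    return data

# Two toy "actors" (hypothetical, for illustration):
normalize = lambda xs: [x / max(xs) for x in xs]     # scale to [0, 1]
threshold = lambda xs: [x for x in xs if x > 0.5]    # keep strong signals

result = run_pipeline([2.0, 8.0, 10.0], [normalize, threshold])
print(result)  # → [0.8, 1.0]
```

The engine's job beyond this sketch is exactly the list of requirements above: efficient movement of large data between steps, iteration over granules, format transformation, and user steering mid-run.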
GUI for setting up and running workflows
Re-applying Technology
SDM technology, developed for one application, can be effectively targeted at many other applications…

Technology                 | Initial Application | New Applications
Parallel netCDF            | Astrophysics        | Climate
Parallel VTK               | Astrophysics        | Climate
Compressed bitmaps         | HENP                | Combustion, Astrophysics
Storage Resource Managers  | HENP                | Astrophysics
Feature selection          | Climate             | Fusion
Scientific workflow        | Biology             | Astrophysics (planned)
Broad Impact of the SDM Center…
Astrophysics: high-speed storage technology, parallel netCDF, parallel VTK, and ASPECT integration software used for the Terascale Supernova Initiative (TSI) and FLASH simulations (ASCI FLASH uses parallel netCDF)
– Tony Mezzacappa (ORNL), John Blondin (NCSU), Mike Zingale (U of Chicago), Mike Papka (ANL)
Climate: high-speed storage technology, parallel netCDF, and ICA technology used for climate modeling projects (dimensionality reduction)
– Ben Santer (LLNL), John Drake (ORNL), John Michalakes (NCAR)
Combustion: compressed bitmap indexing used for fast generation of flame regions and tracking their progress over time (region growing)
– Wendy Koegler, Jacqueline Chen (Sandia Lab)
Broad Impact (cont.)
Biology: Kepler workflow system and web-wrapping technology used for executing complex, highly repetitive workflow tasks for processing microarray data (building a scientific workflow)
– Matt Coleman (LLNL)
High Energy Physics: compressed bitmap indexing and Storage Resource Managers used for locating desired subsets of data (events) and automatically retrieving data from HPSS (dynamic monitoring of HPSS file transfers)
– Doug Olson (LBNL), Eric Hjort (LBNL), Jerome Lauret (BNL)
Fusion: a combination of PCA and ICA technology used to identify the key parameters relevant to the presence of edge harmonic oscillations in a tokamak (identifying key parameters for the DIII-D tokamak)
– Keith Burrell (General Atomics)
Goals for Years 4-5
q Fully develop the integrated SDM framework
§ Implement the 3-layer framework on the SDM center facility
§ Provide a way to select only the components needed
§ Develop self-guiding web pages on the use of SDM components
§ Use existing successful examples as guides
q Generalize components for reuse
§ Develop general interfaces between components in the layers
§ Support loosely-coupled WSDL interfaces
§ Support tightly-coupled components for efficient dataflow
q Integrate operation of components in the framework
§ Hide details from the user – automate parallel access and indexing
§ Develop a reusable library of components that can be selected for use in the workflow system