Ryan Saptarshi Ray et al., Int. Journal of Engineering Research and Applications (www.ijera.com), ISSN: 2248-9622, Vol. 6, Issue 2 (Part 1), February 2016, pp. 49-52
Distributed Shared Memory – A Survey and Implementation
Using OpenSHMEM
Ryan Saptarshi Ray, Utpal Kumar Ray, Ashish Anand, Dr. Parama Bhaumik
Junior Research Fellow, Department of Information Technology, Jadavpur University, Kolkata, India
Assistant Professor, Department of Information Technology, Jadavpur University, Kolkata, India
M.E. Software Engineering Student, Department of Information Technology, Jadavpur University, Kolkata, India
Assistant Professor, Department of Information Technology, Jadavpur University, Kolkata, India
Abstract
Parallel programs are nowadays written for either a multiprocessor or a multicomputer environment, and both approaches suffer from certain problems. Distributed Shared Memory (DSM) systems are an attractive and active area of research that combines the advantages of shared-memory parallel processors (multiprocessors) and distributed systems (multicomputers). An overview of DSM is given in the first part of the paper. Later we show how parallel programs can be implemented in a DSM environment using OpenSHMEM.
I. Introduction
Parallel Processing
The past few years have marked the start of a
historic transition from sequential to parallel
computation. The necessity to write parallel programs
is increasing as systems are getting more complex
while processor speed increases are slowing down.
Generally one assumes that a program will run faster if one buys a next-generation processor. But currently that is not the case. While the next-generation chip will have more CPUs, each individual CPU will be no faster than the previous year's model. As multi-core processors become more and more popular, one must learn to write parallel programs if one wants programs to run faster. Parallel Programming means using multiple computing resources, such as processors, so that the time required to perform computations is reduced. Parallel Processing Systems are designed to speed up the execution of programs by dividing a program into multiple fragments and processing these fragments simultaneously; such systems deal with the simultaneous use of multiple computing resources. A parallel system can be a single computer with multiple processors, a number of computers connected by a network to form a parallel processing cluster, or a combination of both.
Cluster computing has become very common for applications that exhibit large amounts of control parallelism. Concurrent execution of batch jobs and parallel servicing of web and other requests [1], as in Condor [2], achieve very high throughput rates and have become very popular. Some workloads can benefit from concurrently running processes on separate machines and can achieve speedup on networks of workstations using cluster technologies such as the MPI programming interface [3]. Under MPI, machines may explicitly pass messages, but they do not share variables or memory regions directly.
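As an illustration of this explicit message-passing style, the following is a minimal MPI sketch (our own example, not taken from the cited standard; it assumes a working MPI installation and at least two processes):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;
        /* data moves only through an explicit send... */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* ...matched by an explicit receive; no variables are shared */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 1 received %d from process 0\n", value);
    }
    MPI_Finalize();
    return 0;
}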
Parallel computing systems usually fall into two broad classes according to their memory system organization: shared-memory and distributed-memory systems.
Multiprocessor Environment
A shared-memory system [4] (often called a
tightly coupled multiprocessor) makes a global
physical memory equally accessible to all processors.
These systems enable simple data sharing through a uniform mechanism of reading and writing shared structures in the common memory, and they offer ease of programming and portability.
However, shared-memory multiprocessors typically
suffer from increased contention and longer latencies
in accessing the shared memory, which degrades
peak performance and limits scalability compared to
distributed systems. Memory system design also
tends to be complex.
Multicomputer Environment
In contrast, a distributed-memory system (often
called a multicomputer) consists of multiple
independent processing nodes with local memory
modules, connected by a general interconnection
network. The scalable nature of distributed-memory
systems makes systems with very high computing
power possible. However, communication between
processes residing on different nodes involves a
message-passing model that requires explicit use of
send/receive primitives. Also, process migration
imposes problems because of different address
spaces. Therefore, compared to shared-memory
systems, hardware problems are easier and software
problems more complex in distributed-memory
systems. [5]
Distributed shared memory (DSM) is an alternative to the above-mentioned approaches that operates over networks of workstations. DSM combines the advantages of shared-memory parallel computers and distributed systems. [5], [6]
II. DSM – An Overview
In the early days of distributed computing, it was implicitly assumed that programs on machines with no physically shared memory obviously ran in different address spaces. In 1986, Kai Li proposed a different scheme in his PhD dissertation, "Shared Virtual Memory on Loosely Coupled Multiprocessors", which opened up the new area of research now known as Distributed Shared Memory (DSM) systems. [7]
A DSM system logically implements the shared-
memory model on a physically distributed-memory
system. DSM is a model of inter-process communication in a distributed system. In DSM,
processes running on separate hosts can access a
shared address space. The underlying DSM system
provides its clients with a shared, coherent memory
address space. Each client can access any memory
location in the shared address space at any time and
see the value last written by any client. The primary
advantage of DSM is the simpler abstraction it
provides to the application programmer. The
communication mechanism is entirely hidden from
the application writer so that the programmer does
not have to be conscious of data movements between
processes and complex data structures can be passed
by reference. [8]
DSM can be implemented in hardware
(Hardware DSM) as well as software (Software
DSM). Hardware implementation requires addition of
special network interfaces and cache coherence
circuits to the system to make remote memory access
look like local memory access. So, Hardware DSM is
very expensive. Software implementation is
advantageous as in this case only software has to be
installed. In Software DSM, a software layer is added between the OS and the application layer; the OS kernel may or may not be modified. Software DSM is
more widely used as it is cheaper and easier to
implement than Hardware DSM.
III. DSM – Pros and Cons
Pros
Because of the combined advantages of the
shared-memory and distributed systems, DSM
approach is a viable solution for large-scale, high-
performance systems with a reduced cost of parallel
software development. [5]
In multiprocessor systems there is an upper limit to the number of processors that can be added to a single system. In DSM, by contrast, any number of systems can be added according to requirements. DSM systems are also cheaper and more scalable than both multiprocessor and multicomputer systems. In DSM, the message-passing overhead is much lower than in multicomputer systems.
Cons
Consistency can be an important issue in DSM, as different processors access, cache, and update a single shared memory space. Partial failures and/or the lack of a global state view can also lead to inconsistency.
IV. Implementation of DSM using
OpenSHMEM
An Overview – OpenSHMEM
OpenSHMEM is a standard for SHMEM library implementations, which can be used to write parallel programs in a DSM environment. SHMEM is a
communications library that is used for Partitioned
Global Address Space (PGAS) [9] style
programming. The key features of SHMEM include
one-sided point-to-point and collective
communication, a shared memory view, and atomic
operations that operate on globally visible or
“symmetric” variables in the program. [10]
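To make the one-sided, symmetric-variable model concrete before the full example, here is a minimal sketch of our own (it uses the same legacy SHMEM calls as the program below; shmem_int_put writes directly into the remote PE's copy of a symmetric variable, with no receive call on the target side):

#include <stdio.h>
#include <shmem.h>

int symm = 0; /* a global variable is symmetric: it exists at the
                 same address on every PE */

int main(void)
{
    int me, v;
    start_pes(0);
    me = my_pe();
    if (me == 0) {
        v = 99;
        shmem_int_put(&symm, &v, 1, 1); /* one-sided write into PE 1 */
    }
    shmem_barrier_all(); /* completes the put and synchronizes all PEs */
    if (me == 1)
        printf("PE 1 sees symm = %d\n", symm);
    return 0;
}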
Code Example
The code below shows the implementation of a parallel program in a DSM environment using OpenSHMEM.
#include <stdio.h>
#include <shmem.h> /* SHMEM library is included */

#define LIMIT 7

/* pSync is sized for the reduction, which also suffices for the barrier */
long pSync[SHMEM_REDUCE_SYNC_SIZE];
int pWrk[SHMEM_REDUCE_MIN_WRKDATA_SIZE];
int global_data[LIMIT] = {1, 2, 3, 4, 5, 6, 7};
int result[LIMIT];     /* symmetric target of the reduction */
int local_data[LIMIT]; /* declared globally so it is symmetric, as the
                          reduction source must be */

int main(int argc, char **argv)
{
    int rank, size, i;

    start_pes(0);
    size = num_pes();
    rank = my_pe();

    /* pSync must hold SHMEM_SYNC_VALUE before its first use */
    for (i = 0; i < SHMEM_REDUCE_SYNC_SIZE; i++)
        pSync[i] = SHMEM_SYNC_VALUE;

    /* barrier over all PEs: PE_start = 0, logPE_stride = 0 (stride 1) */
    shmem_barrier(0, 0, size, pSync);

    if (rank == 0) {
        /* PE 0 contributes an all-zero array */
        for (i = 0; i < LIMIT; i++)
            local_data[i] = 0;
    } else {
        if (rank % 2 == 1) {
            /* odd-ranked PEs increment every element */
            for (i = 0; i < LIMIT; i++)
                local_data[i] = global_data[i] + 1;
        }
        if (rank % 2 == 0) {
            /* even-ranked PEs (other than 0) decrement every element */
            for (i = 0; i < LIMIT; i++)
                local_data[i] = global_data[i] - 1;
        }
        shmem_quiet();
    }

    /* element-wise sum of local_data across all PEs; every PE gets result */
    shmem_int_sum_to_all(result, local_data, LIMIT, 0, 0, size, pWrk, pSync);
    shmem_quiet();

    if (rank == 0) {
        printf("Updated Data\n");
        for (i = 0; i < LIMIT; i++)
            printf("%3d", result[i]);
        printf("\n");
    }

    shmem_barrier_all();
    return 0;
}
In the above program, an array of integers is taken as input. Increment and decrement operations are performed on the array by multiple Processing Elements (PEs) in the network: PEs with odd rank increment the elements, PEs with even non-zero rank decrement them, and PE 0 contributes zeros. Finally, the element-wise sum of these values across all PEs is shown as output.
Various functions of the SHMEM library are used here. A brief overview of these functions is given below.
start_pes() – This routine should be the first statement in a SHMEM parallel program. It initializes the library and sets up the symmetric heap.
num_pes() – This routine returns the total number of
PEs running in an application.
my_pe() – This routine returns the processing
element (PE) number of the calling PE. It accepts no
arguments. The result is an integer between 0 and
npes - 1, where npes is the total number of PEs
executing the current program.
shmem_barrier(PE_start, logPE_stride, PE_size, pSync) – This routine does not return until the subset of PEs specified by PE_start, logPE_stride, and PE_size has entered it at the same point of the execution path (a usage sketch with a restricted active set follows this list). The arguments are as follows:
PE_start – It is the lowest virtual PE number of the
active set of PEs. PE_start must be of type integer.
logPE_stride – It is the log (base 2) of the stride between consecutive virtual PE numbers in the active set. logPE_stride must be of type integer.
PE_size – It is the number of PEs in the active set. PE_size must be of type integer.
pSync – It is a symmetric work array.
shmem_quiet() – It is one of the most useful routines, as it ensures the completion and ordering of outstanding remote memory operations issued by the calling PE.
shmem_int_sum_to_all(target, source, nreduce,
PE_start, logPE_stride, PE_size, pWrk, pSync) – It
is a reduction routine which computes one or more
reductions across symmetric arrays on multiple
virtual PEs. Some of the arguments are the same as mentioned above; the rest are as follows:
target – It is a symmetric array of length nreduce elements that receives the results of the reduction operations.
source – It is a symmetric array, of length nreduce
elements, that contains one element for each separate
reduction operation. The source argument must have
the same data type as target.
nreduce – It is the number of elements in the target
and source arrays.
pWrk – It is a symmetric work array. The pWrk
argument must have the same data type as target.
shmem_barrier_all() – This routine does not return
until all other PEs have entered this routine at the
same point of the execution path.
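As a usage sketch for shmem_barrier with a restricted active set (our own illustration, assuming an even total number of PEs), PE_start = 0 and logPE_stride = 1 select PEs 0, 2, 4, and so on:

#include <shmem.h>

long pSyncEven[SHMEM_BARRIER_SYNC_SIZE];

int main(void)
{
    int i;
    start_pes(0);
    /* pSync arrays must hold SHMEM_SYNC_VALUE before first use */
    for (i = 0; i < SHMEM_BARRIER_SYNC_SIZE; i++)
        pSyncEven[i] = SHMEM_SYNC_VALUE;
    shmem_barrier_all(); /* ensure every PE has initialized pSyncEven */
    if (my_pe() % 2 == 0) /* only PEs in the active set may call it */
        shmem_barrier(0, 1, num_pes() / 2, pSyncEven);
    shmem_barrier_all();
    return 0;
}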
The code is compiled as follows:
$oshcc <filename> -o <object_filename>
The code is executed as follows:
$oshrun -np <PE_size> --hostfile <hostfile_name> <object_filename>
Here the hostfile is a file containing the IP addresses of all PEs in the network. [13]
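For example, a hostfile for three PEs on three machines might look like the following (the addresses are placeholders of our own, not taken from the paper):

192.168.1.101
192.168.1.102
192.168.1.103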
Output of the above code for PE_size = 3 was as shown below; each element of the result is (g + 1) + (g - 1) + 0 = 2g, i.e. twice the corresponding element g of global_data:
2 4 6 8 10 12 14
V. STM in DSM Environment
Software Transactional Memory (STM) [12] is a
promising new approach to programming shared-
memory parallel processors. It is an alternative
approach to locks for solving the problem of
synchronization in parallel programs. It allows
portions of a program to execute in isolation, without
regard to other, concurrently executing tasks. A
programmer can reason about the correctness of code
within a transaction and need not worry about
complex interactions with other, concurrently
executing parts of the program. Up till now, STM code has been executed only in multiprocessor environments. Much work is ongoing to implement STM in a DSM environment (for example, Atomic RMI), and it is expected that this will lead to improved STM performance. [11] Atomic RMI is a distributed transactional memory framework that supports the control flow model of execution. Atomic
RMI extends Java RMI with distributed transactions that can run on many Java virtual machines located on different network nodes, each of which can host a number of shared remote objects.
VI. Conclusion
The main objective of this paper was to provide a description of Distributed Shared Memory systems. A special attempt was made to provide an example of the implementation of parallel programs in a DSM environment using OpenSHMEM. From our point of view, further work on exploring and implementing DSM systems to achieve improved performance seems quite promising.
References
[1]. Luiz Andre Barroso, Jeffrey Dean, Urs Holzle, "Web Search for a Planet: The Google Cluster Architecture", IEEE Micro, 23(2):22-28, March-April 2003.
[2]. M. Litzkow, M. Livny, and M. Mutka,
"Condor - A Hunter of Idle Workstations",
In: Proceedings of the 8th International
Conference of Distributed Computing
Systems, June, 1988.
[3]. Message Passing Interface (MPI) standard.
http://www-unix.mcs.anl.gov/mpi/
[4]. M. J. Flynn, Computer Architecture: Pipelined and Parallel Processor Design, Jones and Bartlett, Boston, 1995.
[5]. Jelica Protic, Milo Tomasevic, Veljko
Milutinovic, “A Survey of Distributed
Shared Memory Systems” Proceedings of
the 28th Annual Hawaii International
Conference on System Sciences, 1995.
[6]. V. Lo, “Operating Systems Enhancements
for Distributed Shared Memory”, Advances
in Computers, Vol. 39, 1994.
[7]. Kai Li, "Shared Virtual Memory on Loosely Coupled Multiprocessors", PhD Thesis, Yale University, September 1986.
[8]. S. Zhou, M. Stumm, Kai Li, D. Wortman, "Heterogeneous Distributed Shared Memory", IEEE Trans. on Parallel and Distributed Systems, 3(5), 1992.
[9]. PGAS Forum. http://www.pgas.org/
[10]. B. Chapman, T. Curtis, S. Pophale, S. Poole,
J. Kuehn, C. Koelbel, L. Smith “Introducing
OpenSHMEM, SHMEM for the PGAS
Community”, Partitioned Global Address
Space Conference 2010.
[11]. Konrad Siek, Paweł T. Wojciechowski,
“Atomic RMI: A Distributed Transactional
Memory Framework” Poznan University of
Technology, Poland, March 2015.
[12]. Ryan Saptarshi Ray, “Writing Lock-Free
Code using Software Transactional
Memory”, Department of IT, Jadavpur
University, 2012.
[13]. http://openshmem.org/site/Documentation/
Manpages/Browse