International Journal of Computational Engineering Research (ijceronline.com), Vol. 2, Issue 5



A Data Throughput Prediction Using Scheduling and Assignment Technique

M. Rajarajeswari1, P. R. Kandasamy2, T. Ravichandran3
1. Research Scholar, Dept. of Mathematics, Karpagam University, Coimbatore.
2. Professor and Head, Dept. of MCA, Hindusthan Institute of Technology, Coimbatore.
3. Principal, Hindusthan Institute of Technology, Coimbatore.


Abstract:
Task computing is a paradigm that fills the gap between tasks (what the user wants done) and services (the functionalities available to the user), and it seeks to redefine how users interact with and use computing environments. A widely distributed many-task computing (MTC) environment aims to bridge the gap between two computing paradigms: high-throughput computing (HTC) and high-performance computing (HPC). In such an environment, data sharing between participating clusters may become a major performance bottleneck. In this project, we present the design and implementation of an application-layer data throughput prediction and optimization service for many-task computing in widely distributed environments, based on Operations Research techniques. The service uses multiple parallel TCP streams and finds the maximum data-distribution stream through an assignment model, in order to improve the end-to-end throughput of data transfers in the network. A novel mathematical (optimization) model is developed to determine the number of parallel streams required to achieve the best network performance. This model can predict the optimal number of parallel streams with as few as three prediction points (i.e., three switching points). We implement this new service in the Stork Data Scheduler, where the prediction points can be obtained using Iperf and GridFTP sampling. Our results show that the prediction cost plus the optimized transfer time is much less than the non-optimized transfer time in most cases. As a result, given the sampling rate and the number of tasks as input, the Stork data scheduler evaluates and transfers jobs with the optimization service, and the jobs complete much earlier than non-optimized data transfer jobs.

Keywords: Optimization, Assignment Technique, Stork Scheduling, Data Throughput.

Modules:
1) Construction of Grid Computing Architecture.
2) Applying Optimization Service.
3) Integration with Stork Data Scheduler.
4) Applying Quantity Control of Sampling Data.
5) Performance Comparison.

Existing System:
TCP is the most widely adopted transport protocol, but it has a major bottleneck. Different implementation techniques have therefore been proposed, at both the transport and application layers, to overcome TCP's inefficient network utilization. At the transport layer, different variations of TCP have been implemented to utilize high-speed networks more efficiently. At the application layer, improvements are proposed on top of regular TCP, such as opening multiple parallel streams or tuning the buffer size. Parallel TCP streams are able to achieve high network throughput by behaving like a single giant stream that combines n streams, thereby getting an unfair share of the available bandwidth.
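The paper does not give a formula for this aggregate behavior; as a rough, hedged sketch, the classic Mathis approximation Th ≈ MSS / (RTT · √p), applied to each stream independently, illustrates why n parallel streams can claim roughly n times the bandwidth share of a single fair stream:

```python
import math

def parallel_stream_throughput(n, mss_bytes, rtt_s, loss_rate):
    """Rough aggregate throughput (bytes/s) of n parallel TCP streams,
    applying the Mathis approximation Th ~ MSS / (RTT * sqrt(p)) to each
    stream independently; congestion coupling between streams is ignored."""
    per_stream = mss_bytes / (rtt_s * math.sqrt(loss_rate))
    return n * per_stream
```

With MSS = 1460 bytes, RTT = 100 ms, and a loss rate of 10^-4, a single stream is limited to about 1.46 MB/s, while four streams claim roughly four times that share, which is the "unfair" aggregate behavior described above.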

Disadvantages of the Existing System:
• In a widely distributed many-task computing environment, data communication between participating clusters may become a major performance bottleneck.
• It takes TCP a long time to fully utilize the available network bandwidth. This becomes a major bottleneck, especially in wide-area high-speed networks, where both bandwidth and delay are large, which in turn results in a large delay before the bandwidth is fully saturated.
• Inefficient network utilization.


ISSN 2250-3005 (online) | September 2012



Proposed System:
We present the design and implementation of a service that provides the user with the optimal number of parallel TCP streams, as well as an estimate of the time and throughput for a specific data transfer. A novel mathematical (optimization) model is developed to determine the number of parallel streams required to achieve the best network performance. This model can predict the optimal number of parallel streams with as few as three prediction points (i.e., three switching points). We implement this new service in the Stork Data Scheduler, where the prediction points can be obtained using Iperf and GridFTP sampling.
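The paper does not spell out the model's equations at this point. As an illustrative sketch only, assuming a throughput curve of the full second-order form Th(n) = n / sqrt(a·n² + b·n + c) used in related prediction work, three sampled (streams, throughput) pairs suffice to solve for a, b, c, and the maximizing stream count then has the closed form n* = -2c/b:

```python
import math

def fit_quadratic(points):
    """Solve a*n^2 + b*n + c = y for three (n, y) samples via Cramer's rule."""
    (n1, y1), (n2, y2), (n3, y3) = points
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                     - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                     + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    A = [[n1 * n1, n1, 1], [n2 * n2, n2, 1], [n3 * n3, n3, 1]]
    D = det(A)
    coeffs = []
    for j in range(3):
        M = [row[:] for row in A]
        for i, y in enumerate((y1, y2, y3)):
            M[i][j] = y
        coeffs.append(det(M) / D)
    return tuple(coeffs)  # (a, b, c)

def optimal_streams(samples):
    """samples: three (n, throughput) pairs under Th(n) = n / sqrt(a n^2 + b n + c).
    Setting d/dn [n^2 / (a n^2 + b n + c)] = 0 gives n* = -2c / b."""
    pts = [(n, (n / th) ** 2) for n, th in samples]  # y = (n / Th)^2 is quadratic in n
    a, b, c = fit_quadratic(pts)
    return -2 * c / b
```

Under this assumed curve, sampling at n = 1, 2, 4 (the exponentially increasing strategy described later) recovers the coefficients exactly, since a quadratic through three distinct points is unique.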

Advantages of the Proposed System:
• The prediction models, the quantity control of sampling, and the algorithms are applied using mathematical models.
• We have improved an existing prediction model by using three prediction points and adopting a full second-order equation, or an equation whose order is determined dynamically. We have designed an exponentially increasing sampling strategy to obtain the data pairs for prediction.
• The algorithm that instantiates the throughput function with respect to the number of parallel streams can avoid the ineffectiveness of the prediction models caused by unexpected sampling data pairs.
• We propose a solution that satisfies both the time limitation and the accuracy requirements. Our approach doubles the number of parallel streams at every sampling iteration and observes the corresponding throughput.
• We implement this new service in the Stork Data Scheduler, where the prediction points can be obtained using Iperf and GridFTP sampling.

Implementation Module:
In this project, we have implemented the optimization service based on both Iperf and GridFTP; the structure of our design supports two scenarios, one for each version of the service. For the GridFTP version, the participating hosts run GridFTP servers. For the Iperf version, the hosts run Iperf servers as well as a small remote module (TranServer) that issues requests to Iperf. The optimization server is the orchestrator host, designated to perform the optimization of TCP parameters and store the resulting data. It also has to be recognized by the sites, since it performs the third-party sampling of throughput data. The User/Client is the host that sends the optimization request to the server. All of these hosts are connected via LAN. When a user wants to transfer data between two sites, the user first sends a request consisting of the source and destination addresses, the file size, and an optional buffer size parameter to the optimization server, which processes the request and responds with the optimal number of parallel streams for the transfer. The buffer size parameter is optional; it is passed to the GridFTP protocol to set the buffer size to a value different from the system default. At the same time, the optimization server estimates the optimal achievable throughput and the time needed to finish the specified transfer between the two sites. This information is then returned to the User/Client that made the request. Stork is a batch scheduler specialized in data placement and movement. In this implementation, Stork is extended to support both estimation and optimization tasks. A task is categorized as an estimation task if only the estimated information regarding the specific data movement is reported, without the actual transfer. Conversely, a task is categorized as an optimization task if the specific data movement is performed according to the optimized estimation results.
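The request/response exchange described above can be sketched as plain data structures. The field and type names below are illustrative, not the paper's actual message format; the time estimate simply follows from file size and predicted throughput:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OptimizationRequest:
    # fields mirror the request described in the text; names are hypothetical
    source: str
    destination: str
    file_size_bytes: int
    buffer_size_bytes: Optional[int] = None  # optional; overrides the system TCP buffer

@dataclass
class OptimizationResponse:
    optimal_streams: int        # optimal number of parallel streams
    est_throughput_mbps: float  # estimated achievable throughput
    est_transfer_time_s: float  # estimated time for the whole transfer

def build_response(req: OptimizationRequest,
                   optimal_streams: int,
                   est_throughput_mbps: float) -> OptimizationResponse:
    """Transfer-time estimate: file size in bits divided by throughput in bits/s."""
    time_s = req.file_size_bytes * 8 / (est_throughput_mbps * 1e6)
    return OptimizationResponse(optimal_streams, est_throughput_mbps, time_s)
```

For example, a 1.25 GB file at an estimated 1000 Mbps yields a 10-second transfer estimate, which the server would return along with the chosen stream count.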

Mathematical Model:
A novel mathematical (optimization) model is developed to determine the number of parallel streams required to achieve the best network performance. This model can predict the optimal number of parallel streams with as few as three prediction points (i.e., three switching points). We propose a solution that satisfies both the time limitation and the accuracy requirements. Our approach doubles the number of parallel streams at every sampling iteration and observes the corresponding throughput. While the throughput is still increasing, if the slope of the curve between successive iterations falls below a threshold, the sampling stops. Sampling also stops if the throughput decreases compared to the previous iteration before that threshold is reached.
Assignment Problem:
Consider an n × n matrix with n rows and n columns, where the rows represent gridlets and the columns represent jobs, as follows:








             Job 1     Job 2     …     Job n
    Grid 1   Task 1    Task 2    …     Task n
    Grid 2   Task 1    Task 2    …     Task n
    …
    Grid n   Task 1    Task 2    …     Task n
There may be more than one job for each grid, so an assignment problem occurs; we solve it with a new mathematical model. Three cases arise:
    1) Find the minimum value of each row and subtract it from every task value in that row, so that the least value becomes zero. If the processed matrix then has a diagonal of zeros (a zero in every row and column), the process stops and the jobs are assigned successfully.
             Job 1    Job 2    …    Job n
    Grid 1   10       3        …    0
    Grid 2   6        0        …    7
    …
    Grid n   0        6        …    7
    2) Find the minimum value of each row and subtract it from every task value in that row, making the least value zero. If a diagonal of zeros does not appear column-wise, then find the minimum value of each column and subtract it from every value in that column, making the least value zero. If the processed matrix then has a diagonal of zeros, the process stops and the jobs are assigned successfully.

             Job 1    Job 2    …    Job n
    Grid 1   10       0        …    3
    Grid 2   6        0        …    7
    …
    Grid n   6        0        …    7
    3) Find the minimum value of each row and subtract it from every task value in that row, making the least value zero. If a diagonal of zeros does not appear column-wise, then find the minimum value of each column and subtract it from every value in that column, making the least value zero. If the processed matrix still does not have a diagonal of zeros, the process stops and that job is discarded.






             Job 1    Job 2    …    Job n
    Grid 1   10       0        …    3
    Grid 2   6        0        …    7
    …
    Grid n   6        0        …    7


             Job 1    Job 2    …    Job n
    Grid 1   10       0        …    4
    Grid 2   6        0        …    7
    …
    Grid n   0        6        …    7
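The row-then-column reduction described in the three cases above is the opening phase of the standard assignment (Hungarian) method. A minimal sketch follows, with the paper's "diagonal of zeros" test simplified to "every row and every column contains a zero":

```python
def reduce_matrix(cost):
    """Subtract each row's minimum from the row, then each column's
    minimum from the column, so every row and column gains a zero."""
    reduced = [[v - min(row) for v in row] for row in cost]
    for j in range(len(reduced[0])):
        col_min = min(row[j] for row in reduced)
        for row in reduced:
            row[j] -= col_min
    return reduced

def assignment_ready(reduced):
    """True when every row and every column holds at least one zero,
    the precondition for reading off a zero-cost assignment."""
    rows_ok = all(0 in row for row in reduced)
    cols_ok = all(any(row[j] == 0 for row in reduced)
                  for j in range(len(reduced[0])))
    return rows_ok and cols_ok
```

When `assignment_ready` fails even after both reduction passes, the full Hungarian method would continue with zero-covering lines; in the scheme described above, that corresponds to case 3, where the unassignable job is discarded.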
Experimental Results:

Test Scenario 1: Applying Optimization Service
Pre-Condition: The client sends a request to the EOS server, which forwards it to the GridFTP server.
Test Case: Check whether the client request is valid (only txt-format files can be downloaded/estimated from the GridFTP server).
Expected Output: The client request is answered with transfer information (the number of streams used for the transfer and the throughput achieved during the transfer).
Actual Output: File received successfully.
Result: Pass

Test Scenario 2: Integration with Stork Data Scheduler
Pre-Condition: More than one client makes a request to the EOS server.
Test Case: The Stork data scheduler schedules the incoming requests when more than one arrives, and checks whether each request is for estimation or optimization.
Expected Output: A priority is assigned to each user request (first come, first served); estimation requests are given the highest priority.
Actual Output: The requests are processed in the GridFTP server according to priority, and responses are returned to the clients.
Result: Pass
Performance Comparison








Conclusion:
This study describes the design and implementation of a network throughput prediction and optimization service for many-task computing in widely distributed environments. This involves the selection of the prediction models and the underlying mathematical models. We have improved an existing prediction model by using three prediction points and adopting a full second-order equation, or an equation whose order is determined dynamically. We have designed an exponentially increasing sampling strategy to obtain the data pairs for prediction. We implement this new service in the Stork Data Scheduler, where the prediction points can be obtained using Iperf and GridFTP sampling. The experimental results justify our improved models as well as the algorithms applied in the implementation. When used within the Stork Data Scheduler, the optimization service significantly decreases the total transfer time for a large number of data transfer jobs submitted to the scheduler, compared to non-optimized Stork transfers.




Issn 2250-3005(online)                                        September| 2012                              Page 1310

More Related Content

ODP
Chapter - 04 Basic Communication Operation
PPT
Chap4 slides
PPT
All-Reduce and Prefix-Sum Operations
PDF
Solution(1)
PDF
Todtree
PPT
Chap4 slides
PPT
Chap3 slides
PPTX
Communication costs in parallel machines
Chapter - 04 Basic Communication Operation
Chap4 slides
All-Reduce and Prefix-Sum Operations
Solution(1)
Todtree
Chap4 slides
Chap3 slides
Communication costs in parallel machines

What's hot (20)

PPT
Chap9 slides
PDF
A Comparison of Serial and Parallel Substring Matching Algorithms
PDF
Vol 16 No 2 - July-December 2016
PDF
Journal paper 1
PPT
Chap5 slides
PDF
Parallel Batch-Dynamic Graphs: Algorithms and Lower Bounds
PDF
Fairness in Transfer Control Protocol for Congestion Control in Multiplicativ...
PPTX
Broadcast in Hypercube
PDF
Basic communication operations - One to all Broadcast
PDF
Congestion Control through Load Balancing Technique for Mobile Networks: A Cl...
PPT
Chapter 4 pc
PPTX
A Tale of Data Pattern Discovery in Parallel
PDF
Eryk_Kulikowski_a4
PPT
Query optimization for_sensor_networks
PPT
Chap6 slides
PPT
Chap8 slides
PDF
A QUANTITATIVE ANALYSIS OF HANDOVER TIME AT MAC LAYER FOR WIRELESS MOBILE NET...
PDF
TOWARDS REDUCTION OF DATA FLOW IN A DISTRIBUTED NETWORK USING PRINCIPAL COMPO...
PPT
Chapter 3 pc
PPT
Chap2 slides
Chap9 slides
A Comparison of Serial and Parallel Substring Matching Algorithms
Vol 16 No 2 - July-December 2016
Journal paper 1
Chap5 slides
Parallel Batch-Dynamic Graphs: Algorithms and Lower Bounds
Fairness in Transfer Control Protocol for Congestion Control in Multiplicativ...
Broadcast in Hypercube
Basic communication operations - One to all Broadcast
Congestion Control through Load Balancing Technique for Mobile Networks: A Cl...
Chapter 4 pc
A Tale of Data Pattern Discovery in Parallel
Eryk_Kulikowski_a4
Query optimization for_sensor_networks
Chap6 slides
Chap8 slides
A QUANTITATIVE ANALYSIS OF HANDOVER TIME AT MAC LAYER FOR WIRELESS MOBILE NET...
TOWARDS REDUCTION OF DATA FLOW IN A DISTRIBUTED NETWORK USING PRINCIPAL COMPO...
Chapter 3 pc
Chap2 slides
Ad

Similar to IJCER (www.ijceronline.com) International Journal of computational Engineering research (20)

PDF
FrackingPaper
PDF
Comprehensive Performance Evaluation on Multiplication of Matrices using MPI
PDF
Pretzel: optimized Machine Learning framework for low-latency and high throug...
PDF
cis97003
PDF
Job Scheduling on the Grid Environment using Max-Min Firefly Algorithm
PDF
A Novel Approach in Scheduling Of the Real- Time Tasks In Heterogeneous Multi...
PDF
Computer Network Performance Evaluation Based on Different Data Packet Size U...
PDF
Gk3611601162
PDF
Adaptive check-pointing and replication strategy to tolerate faults in comput...
PDF
E01113138
PDF
DESIGN OF DELAY COMPUTATION METHOD FOR CYCLOTOMIC FAST FOURIER TRANSFORM
PDF
Bounded ant colony algorithm for task Allocation on a network of homogeneous ...
PDF
1844 1849
PDF
1844 1849
PDF
Achieving Portability and Efficiency in a HPC Code Using Standard Message-pas...
PDF
user_defined_functions_forinterpolation
PDF
Implementing Map Reduce Based Edmonds-Karp Algorithm to Determine Maximum Flo...
PDF
DOCX
Network Flow Pattern Extraction by Clustering Eugine Kang
ODP
work load characterization
FrackingPaper
Comprehensive Performance Evaluation on Multiplication of Matrices using MPI
Pretzel: optimized Machine Learning framework for low-latency and high throug...
cis97003
Job Scheduling on the Grid Environment using Max-Min Firefly Algorithm
A Novel Approach in Scheduling Of the Real- Time Tasks In Heterogeneous Multi...
Computer Network Performance Evaluation Based on Different Data Packet Size U...
Gk3611601162
Adaptive check-pointing and replication strategy to tolerate faults in comput...
E01113138
DESIGN OF DELAY COMPUTATION METHOD FOR CYCLOTOMIC FAST FOURIER TRANSFORM
Bounded ant colony algorithm for task Allocation on a network of homogeneous ...
1844 1849
1844 1849
Achieving Portability and Efficiency in a HPC Code Using Standard Message-pas...
user_defined_functions_forinterpolation
Implementing Map Reduce Based Edmonds-Karp Algorithm to Determine Maximum Flo...
Network Flow Pattern Extraction by Clustering Eugine Kang
work load characterization
Ad

Recently uploaded (20)

PPTX
cloud_computing_Infrastucture_as_cloud_p
PDF
A comparative analysis of optical character recognition models for extracting...
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PDF
Accuracy of neural networks in brain wave diagnosis of schizophrenia
PDF
Approach and Philosophy of On baking technology
PDF
Microsoft Solutions Partner Drive Digital Transformation with D365.pdf
PDF
Assigned Numbers - 2025 - Bluetooth® Document
PDF
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
PDF
From MVP to Full-Scale Product A Startup’s Software Journey.pdf
PDF
Enhancing emotion recognition model for a student engagement use case through...
PDF
NewMind AI Weekly Chronicles - August'25-Week II
PDF
DP Operators-handbook-extract for the Mautical Institute
PDF
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
PDF
Univ-Connecticut-ChatGPT-Presentaion.pdf
PDF
Getting Started with Data Integration: FME Form 101
PDF
DASA ADMISSION 2024_FirstRound_FirstRank_LastRank.pdf
PPTX
TLE Review Electricity (Electricity).pptx
PDF
A novel scalable deep ensemble learning framework for big data classification...
PDF
Zenith AI: Advanced Artificial Intelligence
PDF
Web App vs Mobile App What Should You Build First.pdf
cloud_computing_Infrastucture_as_cloud_p
A comparative analysis of optical character recognition models for extracting...
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
Accuracy of neural networks in brain wave diagnosis of schizophrenia
Approach and Philosophy of On baking technology
Microsoft Solutions Partner Drive Digital Transformation with D365.pdf
Assigned Numbers - 2025 - Bluetooth® Document
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
From MVP to Full-Scale Product A Startup’s Software Journey.pdf
Enhancing emotion recognition model for a student engagement use case through...
NewMind AI Weekly Chronicles - August'25-Week II
DP Operators-handbook-extract for the Mautical Institute
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
Univ-Connecticut-ChatGPT-Presentaion.pdf
Getting Started with Data Integration: FME Form 101
DASA ADMISSION 2024_FirstRound_FirstRank_LastRank.pdf
TLE Review Electricity (Electricity).pptx
A novel scalable deep ensemble learning framework for big data classification...
Zenith AI: Advanced Artificial Intelligence
Web App vs Mobile App What Should You Build First.pdf

IJCER (www.ijceronline.com) International Journal of computational Engineering research

  • 1. International Journal Of Computational Engineering Research (ijceronline.com) Vol. 2 Issue. 5 A Data Throughput Prediction Using Scheduling And Assignment Technique M.Rajarajeswari 1, P.R.Kandasamy2, T.Ravichandran3 1. Research Scholar, Dept. of Mathematics Karpagam University,Coimbatore. 2.Professor and Head, Dept. of M.CA Hindusthan Institute of Technology,coimbatore. 3.The Principal, Hindusthan Institute of Technology, coimbatore. Abstract: Task computing is a computation to fill the gap between tasks (what user wants to be done), and services (functionalities that are available to the user). Task computing seeks to redefine how users interact with and use computing environments. Wide distributed many-task computing (MTC) environment aims to bridge the gap between two computing paradigms, high throughput computing (HTC) and high-performance computing (HPC). In a wide distributed many-task computing environment, data sharing between participating clusters may become a major performance constriction. In this project, we present the design and implementation of an application-layer data throughput prediction and optimization service for many- task computing in widely distributed environments using Operation research. This service uses multiple parallel TCP streams which are used to find maximum data distribution stream through assignment model which is to improve the end-to-end throughput of data transfers in the network. A novel mathematical model (optimization model) is developed to determine the number of parallel streams, required to achieve the best network performance. This model can predict the optimal number of parallel streams with as few as three prediction points (i.e. three Switching points). We implement this new service in the Stork Data Scheduler model, where the prediction points can be obtained using Iperf and GridFTP samplings technique. 
Our results show that the prediction cost plus the optimized transfer time is much less than the non optimized transfer time in most cases. As a result, Stork data model evaluates and transfer jobs with optimization service based sampling rate and no. of task is given as input, so our system can be completed much earlier, compared to non optimized data transfer jobs. Key words: Optimization, Assignment Technique, Stork scheduling Data throughput. Modules: 1) Construction of Grid Computing Architecture. 2) Applying Optimization Service. 3) Integration with Stork Data cheduler. 4) Applying Quantity Control of Sampling Data. 5) Performance Comparison. Existing System: TCP is the most widely adopted transport protocol but it has major bottleneck. So we have gone for other different implementation techniques, in both at the transport and application layers, to overcome the inefficient network utilization of the TCP protocol. At the transport layer, different variations of TCP have been implemented to more efficiently utilize high- speed networks. At the application layer, other improvements are proposed on top of the regular TCP, such as opening multiple parallel streams or tuning the buffer size. Parallel TCP streams are able to achieve high network throughput by behaving like a single giant stream, which is the combination of n streams, and getting an unfair share of the available bandwidth. Disadvantage Of System:  In a widely distributed many-task computing ernvionment, data communication between participating clusters may become a major performance bottleneck.  TCP to fully utilize the available network bandwidth. This becomes a major bottleneck, especially in wide-area high speed networks, where both bandwidth and delay properties are too large, which, in turn, results in a large delay before the bandwidth is fully saturated.  Inefficient network utilization. Issn 2250-3005(online) September| 2012 Page 1306
  • 2. International Journal Of Computational Engineering Research (ijceronline.com) Vol. 2 Issue. 5 Proposed System: We present the design and implementation of a service that will provide the user with the optimal number of parallel TCP streams as well as a provision of the estimated time and throughput for a specific data transfer. A novel mathematical model (optimization model) is developed to determine the number of parallel streams, required to achieve the best network performance. This model can predict the optimal number of parallel streams with as few as three prediction points (i.e. three Switching points). We implement this new service in the Stork Data Scheduler model, where the prediction points can be obtained using Iperf and GridFTP samplings technique. Advantage of System:  The prediction models, the quantity control of sampling and the algorithms applied using the mathematical models.  We have improved an existing prediction model by using three prediction points and adapting a full second order equation or an equation where the order is determined dynamically. We have designed an exponentially increasing sampling strategy to get the data pairs for prediction  The algorithm to instantiate the throughput function with respect to the number of parallel streams can avoid the ineffectiveness of the prediction models due to some unexpected sampling data pairs.  We propose to find a solution to satisfy both the time limitation and the accuracy requirements. Our approach doubles the number of parallel streams for every iteration of sampling, and observes the corresponding throughput.  We implement this new service in the Stork Data Scheduler, where the prediction points can be obtained using Iperf and GridFTP samplings Implementation module: In this project we have implemented the optimization service, based on both Iperf and GridFTP. The structure of our design and presents two scenarios based on both, GridFTP and Iperf versions of the service. 
For the GridFTP version, these hosts run GridFTP servers. For the Iperf version, they run Iperf servers as well as a small remote module (TranServer) that makes requests to Iperf. The optimization server is the orchestrator host, designated to perform the optimization of TCP parameters and to store the resultant data. It also has to be recognized by the sites, since it performs the third-party sampling of throughput data. The User/Client represents the host that sends the optimization request to the server. All of these hosts are connected via a LAN. When a user wants to transfer data between two sites, the user first sends a request consisting of the source and destination addresses, the file size, and an optional buffer size parameter to the optimization server, which processes the request and responds to the user with the optimal number of parallel streams for the transfer. The buffer size parameter is optional; it is passed to the GridFTP protocol to set the buffer size to a value different from the system default. At the same time, the optimization server estimates the optimal throughput that can be achieved and the time needed to finish the specified transfer between the two sites. This information is then returned to the User/Client making the request. Stork is a batch scheduler specialized in data placement and movement. In this implementation, Stork is extended to support both estimation and optimization tasks. A task is categorized as an estimation task if only estimated information regarding the specific data movement is reported, without the actual transfer. On the other hand, a task is categorized as an optimization task if the specific data movement is performed according to the optimized estimation results.

Mathematical Model:
A novel mathematical model (optimization model) is developed to determine the number of parallel streams required to achieve the best network performance.
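The request/response exchange between the User/Client and the optimization server described above can be sketched as follows. The class and field names are hypothetical, chosen only to mirror the described message contents (source, destination, file size, optional buffer size; returned stream count, throughput, and time estimates), and are not part of the Stork implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OptimizationRequest:
    source: str                               # source site address
    destination: str                          # destination site address
    file_size_bytes: int
    buffer_size_bytes: Optional[int] = None   # optional GridFTP buffer override

@dataclass
class OptimizationResponse:
    optimal_streams: int          # optimal number of parallel streams
    est_throughput_mbps: float    # estimated achievable throughput
    est_transfer_time_s: float    # estimated time to finish the transfer

def build_response(req: OptimizationRequest,
                   optimal_streams: int,
                   est_throughput_mbps: float) -> OptimizationResponse:
    """Derive the estimated transfer time from the predicted throughput."""
    secs = req.file_size_bytes * 8 / (est_throughput_mbps * 1e6)
    return OptimizationResponse(optimal_streams, est_throughput_mbps, secs)
```

For example, a 125 MB transfer at a predicted 100 Mbps would be estimated to take 10 seconds.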
This model can predict the optimal number of parallel streams with as few as three prediction points (i.e., three switching points). We propose a solution that satisfies both the time limitation and the accuracy requirements. Our approach doubles the number of parallel streams at every sampling iteration and observes the corresponding throughput. While the throughput is increasing, if the slope of the curve between successive iterations falls below a threshold, the sampling stops. Another stopping condition is met if the throughput decreases compared to the previous iteration before reaching that threshold.

Assignment Problem:
Consider a matrix with n rows and n columns, where the rows represent grids (gridlets) and the columns represent jobs:
         Job 1    Job 2    ...    Job n
Grid 1   Task 1   Task 2   ...    Task n
Grid 2   Task 1   Task 2   ...    Task n
...
Grid n   Task 1   Task 2   ...    Task n

Since there can be more than one job for each grid, an assignment problem occurs, and we use a new mathematical model to solve it. Three conditions can occur:

1) Find the minimum value of each row and subtract it from every task value in that row, so that the least value becomes zero. If the resulting matrix has a diagonal of zeros (a zero in a distinct column of every row), the process stops and the jobs are assigned successfully.

         Job 1    Job 2    ...    Job n
Grid 1   10       3        ...    0
Grid 2   7        6        0      ...
...
Grid n   0        6        ...    7

2) Find the minimum value of each row and subtract it from every task value in that row, making the least value zero. If the zeros fall in the same column instead of along a diagonal, then find the minimum value of each column and subtract it from every value in that column, making the least value zero. If the resulting matrix then has a diagonal of zeros, the process stops and the jobs are assigned successfully.

         Job 1    Job 2    ...    Job n
Grid 1   10       0        ...    3
Grid 2   6        0        ...    7
...
Grid n   6        0        ...    7

3) Find the minimum value of each row and subtract it from every task value in that row, making the least value zero. If the zeros fall in the same column, find the minimum value of each column and subtract it from every value in that column, making the least value zero. If even then no diagonal of zeros can be obtained, the process stops and that job is discarded.
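The row and column reduction steps above can be sketched as follows. This is a simplified sketch of the described procedure: a greedy zero-matching stands in for the diagonal check, and it is not the authors' code.

```python
def reduce_rows(cost):
    """Step 1: subtract each row's minimum so its least value becomes 0."""
    return [[v - min(row) for v in row] for row in cost]

def reduce_cols(m):
    """Step 2: subtract each column's minimum so its least value becomes 0."""
    n = len(m)
    mins = [min(m[i][j] for i in range(n)) for j in range(n)]
    return [[m[i][j] - mins[j] for j in range(n)] for i in range(n)]

def assign(cost):
    """Try to match every grid (row) to a distinct zero-cost job (column).
    Returns a row->column mapping, or None if this greedy pass finds no
    complete assignment (the 'discard' case in condition 3)."""
    m = reduce_cols(reduce_rows(cost))
    used, result = set(), {}
    for i, row in enumerate(m):
        for j, v in enumerate(row):
            if v == 0 and j not in used:
                result[i] = j
                used.add(j)
                break
        else:
            return None  # row with no unclaimed zero
    return result
```

A full Hungarian-method implementation would additionally redistribute zeros when the greedy pass fails, but the reduction steps match the conditions described above.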
         Job 1    Job 2    ...    Job n
Grid 1   10       0        ...    3
Grid 2   6        0        ...    7
...
Grid n   6        0        ...    7

         Job 1    Job 2    ...    Job n
Grid 1   10       0        ...    4
Grid 2   6        0        ...    7
...
Grid n   0        6        ...    7

Experimental Results:

Test Scenario: Applying Optimization Service
Pre-Condition: The client sends a request to the EOS server; the request is then transferred to the GridFTP server.
Test Case: Check whether the client request is valid (only txt-format files can be downloaded/estimated from the GridFTP server).
Expected Output: The client receives a valid response with information such as the number of streams used for the transfer and the throughput achieved during the file transfer.
Actual Output: The file is received and successfully transferred to the GridFTP server.
Result: Pass

Test Scenario: Integration with Stork Data Scheduler
Pre-Condition: More than one client makes a request to the EOS server.
Test Case: The Stork data scheduler schedules the incoming requests and checks whether each request is an estimation or an optimization request.
Expected Output: A priority is assigned to each user request (first come, first served); if more than one request arrives, the request with the highest priority is processed first.
Actual Output: The requests are processed in the GridFTP server according to priority, and responses are given to the clients.
Result: Pass

Performance Comparison
Conclusion:
This study describes the design and implementation of a network throughput prediction and optimization service for many-task computing in widely distributed environments. This involves the selection of the prediction models and the mathematical models. We have improved an existing prediction model by using three prediction points and adapting a full second-order equation, or an equation whose order is determined dynamically. We have designed an exponentially increasing sampling strategy to obtain the data pairs for prediction. We implement this new service in the Stork Data Scheduler, where the prediction points can be obtained using Iperf and GridFTP sampling. The experimental results justify our improved models as well as the algorithms applied in the implementation. When used within the Stork Data Scheduler, the optimization service significantly decreases the total transfer time for a large number of data transfer jobs submitted to the scheduler, compared to non-optimized Stork transfers.