Communication and Synchronization in Distributed Systems
Johanna Ortiz, Diego Niño, Alejandro Velandia
Communication in distributed systems
In a distributed system there is no shared memory, so the whole nature of interprocess communication has to be reconsidered. To communicate, processes must follow agreed rules known as protocols. In wide-area distributed systems these protocols usually take the form of several layers, and each layer has its own goals and rules. Messages can be exchanged in various ways, and there are many design options in this regard; one important option is the remote procedure call (RPC). It is also important to consider communication between groups of processes, not only between pairs of processes.
Client-server model
Two roles in the interaction:
Client: requests a service. Request = operation + data.
Server: provides the service. Response = result.
RPC
The client-server model is a convenient way of structuring a distributed operating system, but it has a flaw: the paradigm it is built around is input/output, since the send/receive primitives are dedicated to performing I/O. Birrell and Nelson proposed a different option: allow programs to call procedures located on other machines. When a process on machine A calls a procedure on machine B:
- The calling process is suspended.
- The procedure executes on B.
- Information is carried to the callee in the parameters and comes back in the procedure's result.
- The programmer does not deal with message passing or I/O.
This method is called Remote Procedure Call (RPC). The calling procedure and the called procedure run on different machines, that is, in different address spaces.
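The stub mechanism behind RPC can be sketched in-process: a client stub marshals the call, a "network" carries the bytes, and a server stub unmarshals and dispatches. The names (`client_stub`, `server_stub`, `PROCEDURES`) are illustrative, not a real RPC library; a minimal sketch:

```python
import pickle

def server_add(a, b):          # procedure that "lives" on machine B
    return a + b

PROCEDURES = {"add": server_add}

def client_stub(name, *args):
    request = pickle.dumps((name, args))   # marshal procedure name + parameters
    reply = server_stub(request)           # stands in for the network hop
    return pickle.loads(reply)             # unmarshal the result

def server_stub(request):
    name, args = pickle.loads(request)     # unmarshal on machine B
    result = PROCEDURES[name](*args)       # perform the local call
    return pickle.dumps(result)            # marshal the result back

print(client_stub("add", 2, 3))  # the caller never sees send/receive
```

The caller writes an ordinary procedure call; all the messaging is hidden in the stubs.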
Proxy or cache model
Three roles in the interaction:
Client: requests a service.
Server: provides the service.
Proxy: intermediary agent.
Multilayer model
A server can itself be a client of another server. Typical web applications: presentation + business logic + data access.
Peer-to-peer model
Dialogue protocol: entities coordinate among themselves; at the end of each stage the entities synchronize and exchange information.
Communication characteristics
Blocking or non-blocking operation mode.
Sending:
- Blocking: the sender is blocked until the message has been successfully delivered to the destination.
- Non-blocking: the sender copies the data into a kernel buffer and resumes execution.
Reception:
- Non-blocking: if data is available the receiver reads it; otherwise it is told that no message was waiting.
- Blocking: if no data is available, the receiver blocks.
Reliability
Issues related to the reliability of communication:
- Ensuring that the message is received by the target node(s).
- Maintaining the order of message delivery.
- Flow control, to avoid "flooding" the receiving node.
- Fragmentation of messages, to remove limits on maximum message size.
If the communication system does not guarantee some of these aspects, the application must provide them itself.
Communication in groups
The destination of a message is a group of processes: multicast. Possible applications in distributed systems:
- Updating multiple replicas of data.
- Using replicated services.
- Collective operations in parallel computation.
The implementation depends on whether the network provides multicast; if it does not, multicast is implemented by sending N point-to-point messages. A process can belong to several groups, and each group has a group address. A group usually has a dynamic nature:
- Processes can be added to and removed from the group.
- Membership management must be coordinated with the communication.
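The fallback just described, multicast emulated by N unicast sends over a dynamic membership list, can be sketched as follows (the `Group` class and inbox lists are illustrative, not a real networking API):

```python
class Group:
    """Toy group with dynamic membership; multicast = N unicast deliveries."""

    def __init__(self):
        self.members = {}              # pid -> inbox (list of received messages)

    def join(self, pid):
        self.members[pid] = []         # add a process to the group

    def leave(self, pid):
        self.members.pop(pid, None)    # remove a process from the group

    def multicast(self, msg):
        for inbox in self.members.values():   # one send per current member
            inbox.append(msg)

g = Group()
g.join("p1"); g.join("p2"); g.join("p3")
g.leave("p3")                # membership changes before the send
g.multicast("update")        # only p1 and p2 receive it
```

Note why membership must be coordinated with communication: the set of recipients is whatever the group contained at send time.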
Group communication design aspects
Models of groups:
- Open group: external processes can send messages to the group. Typically used to replicate data or services.
- Closed group: only group members can send messages. Commonly used in parallel processing (peer-to-peer model).
Atomicity: either all processes get the message or none do.
Ordering of message reception
Three choices:
- FIFO ordering: messages from one source reach every receiver in the order they were sent. There are no guarantees across messages from different senders.
- Causal ordering: if messages from two senders have a possible cause-and-effect relationship, every process in the group receives the "cause" message before the "effect" message. If there is no such connection, no delivery order is guaranteed. The definition of "causality" is discussed under "Synchronization".
- Total ordering: all messages (from any source) sent to a group are received in the same order by all members.
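FIFO ordering, the first of the three choices, is commonly implemented with per-sender sequence numbers: the receiver delivers messages in sequence and holds back any that arrive early. A minimal sketch (the `FifoReceiver` class is illustrative):

```python
class FifoReceiver:
    def __init__(self):
        self.next_seq = {}    # sender -> next expected sequence number
        self.held = {}        # (sender, seq) -> message buffered out of order
        self.delivered = []   # messages in delivery order

    def receive(self, sender, seq, msg):
        self.held[(sender, seq)] = msg
        expected = self.next_seq.setdefault(sender, 0)
        # deliver every consecutive message we now have from this sender
        while (sender, expected) in self.held:
            self.delivered.append(self.held.pop((sender, expected)))
            expected += 1
        self.next_seq[sender] = expected

r = FifoReceiver()
r.receive("A", 1, "second")   # arrives early, held back
r.receive("A", 0, "first")    # releases both, in FIFO order
```

Note that this gives no guarantee across senders: messages from "B" are sequenced independently of "A", exactly as the slide states.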
SYNCHRONIZATION OF DISTRIBUTED SYSTEMS
Algorithms for clock synchronization: Cristian's algorithm, the Berkeley algorithm, the averaging algorithm.
Algorithms for mutual exclusion: centralized, distributed.
Cristian's algorithm
This algorithm relies on Coordinated Universal Time (UTC), which is received by one machine in the distributed system. That machine, called the UTC receiver, in turn receives periodic time requests from the other machines and answers each one as quickly as possible with the requested UTC time; the machines then update their clocks, keeping the whole system synchronized. The receiver can obtain UTC through various available means, including radio broadcasts and the Internet. A major problem in this algorithm is that time must never run backwards: the UTC time reported to a machine may not be earlier than that machine's current clock, so a fast clock has to be adjusted gradually rather than set back. The UTC server must also handle time requests promptly, since its processing delay affects accuracy. Finally, the transmission time of the request and its reply must be taken into account: the propagation time is added to the server's reported time so the requester synchronizes correctly when it receives the response.
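The propagation adjustment in the last sentence is usually done by measuring the round-trip time and assuming the reply took roughly half of it to travel back. A minimal sketch, with times in milliseconds and an illustrative function name:

```python
def cristian_adjust(t_request, t_reply, server_time):
    """Estimate the correct local time from a Cristian-style exchange.

    t_request / t_reply: local clock when the request was sent / reply arrived.
    server_time: UTC time reported in the server's reply.
    All values in integer milliseconds.
    """
    rtt = t_reply - t_request          # measured round-trip time
    return server_time + rtt // 2      # assume the reply took ~RTT/2

# client sent at t=100, got the reply at t=108, server reported 2050
print(cristian_adjust(100, 108, 2050))  # 2054
```

The estimate is only as good as the assumption that the two legs of the trip are symmetric; large or asymmetric delays degrade it.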
Berkeley algorithm
A distributed system based on the Berkeley algorithm has no Coordinated Universal Time (UTC) source; instead, the system manages its own time. There is again a time server for synchronizing the system, but unlike in Cristian's algorithm it behaves proactively: it periodically polls some of the machines in the system for their clock values, computes an average time, and sends it to all the machines in the system so they can synchronize.
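One round of this scheme can be sketched as follows. In practice the master sends each machine the offset it must apply rather than an absolute time (so clocks need not jump); the function name and values here are illustrative:

```python
def berkeley_round(clocks):
    """One Berkeley round: average all readings (master's included) and
    return, for each machine, the offset it should apply to its clock."""
    average = sum(clocks.values()) / len(clocks)
    return {name: average - t for name, t in clocks.items()}

# clock readings gathered by the master, in seconds
offsets = berkeley_round({"master": 3.00, "m1": 3.25, "m2": 2.75})
# each machine adds its offset; all clocks land on the average, 3.00
```

A refinement in the real algorithm, omitted here, is that the master corrects each polled reading for its message propagation delay before averaging.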
Averaging algorithm
This algorithm has no server to control, centralize, and maintain time synchronization in the system. Instead, time is divided into fixed-length intervals, and at the start of each interval every machine broadcasts its local time to the other machines. Each machine then averages its own local time with the times reported by the machines it hears from, and uses the result to set its clock.
Algorithms for mutual exclusion
These algorithms ensure mutual exclusion between processes that need access to a critical region of the system.
Centralized: this algorithm mimics how mutual exclusion works on a uniprocessor system. One machine in the distributed system, called the coordinator, is responsible for controlling access to the various critical sections. Any process that needs access to a critical section must request it from the coordinator, which grants it if the section is available and otherwise places the requesting process in a queue. When a process that was granted access finishes its work in the critical section, it likewise informs the coordinator, which can then grant access to the next requesting process waiting in the queue. This algorithm has a major limitation: the coordinator is a single point of control for all the critical sections of the distributed system, and so becomes a bottleneck that can hurt the efficiency of the processes running in the system. Likewise, any failure of the coordinator halts the processes that depend on it.
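The coordinator's bookkeeping is just one FIFO queue per critical section: the process at the head holds the section, later requesters wait. A minimal sketch (the `Coordinator` class and its method names are illustrative):

```python
from collections import deque

class Coordinator:
    def __init__(self):
        self.queues = {}                     # section name -> queue of pids

    def request(self, pid, section):
        q = self.queues.setdefault(section, deque())
        q.append(pid)
        return len(q) == 1                   # True -> access granted now

    def release(self, pid, section):
        q = self.queues[section]
        assert q[0] == pid                   # only the holder may release
        q.popleft()
        return q[0] if q else None           # next process to grant, if any

c = Coordinator()
c.request("p1", "db")    # granted immediately
c.request("p2", "db")    # queued behind p1
c.release("p1", "db")    # coordinator now grants the section to p2
```

The sketch also makes the weakness concrete: every request and release for every section flows through this one object.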
Distributed: this algorithm was developed to eliminate the weakness latent in the centralized algorithm, so its approach avoids having a single coordinator control access to the critical sections of the distributed system. Each process that needs access to a critical section sends its request to every process in the system, identifying the critical section it wants to enter. Each receiving process then answers the requester according to one of the following cases:
- The receiver is not using and does not want the critical section: reply OK.
- The receiver is currently inside the critical section: do not reply; queue the sender.
- The receiver wants the critical section but is not yet in it: reply OK if the incoming request is earlier than its own; otherwise do not reply and queue the sender.
However, this algorithm also has a problem: if a process fails, it cannot answer requests from other processes, and its silence is interpreted as a denial of access, blocking every process that requests access to any critical section.
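The three reply cases form a pure decision rule, as in the Ricart-Agrawala style of this algorithm, where "earlier" is decided by request timestamps. A minimal sketch (function and state names are illustrative):

```python
def on_request(my_state, my_timestamp, their_timestamp):
    """Decide how one receiver answers an incoming critical-section request.

    my_state: "RELEASED" (not using, not wanting), "HELD" (inside the
    section), or "WANTED" (requested but not yet inside).
    """
    if my_state == "RELEASED":
        return "OK"        # case 1: not in use and not wanted
    if my_state == "HELD":
        return "DEFER"     # case 2: in use; queue the sender, reply later
    # case 3, my_state == "WANTED": the earlier (lower) timestamp wins
    return "OK" if their_timestamp < my_timestamp else "DEFER"

print(on_request("WANTED", my_timestamp=7, their_timestamp=3))  # OK
```

A requester enters the critical section only after collecting OK from every other process, which is exactly why one silent (crashed) process blocks everyone.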
Ring (token ring)
This algorithm arranges the processes in a software-managed logical ring around which a token circulates from process to process. When a process receives the token, it may enter a critical section if it needs to, perform all its work there, leave the critical section, and hand the token to the next process in the ring; this repeats continuously around the ring. If a process receives the token and does not need to enter a critical section, it passes the token on immediately. This algorithm has a weakness associated with the possible loss of the token that controls access to the critical sections: if that happens, the processes in the system simply assume that the token is held by some process that is inside its critical section.
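One full circulation of the token can be sketched as a pass over the ring in order: a holder that wants the critical section uses it, everyone else forwards the token at once (function and parameter names are illustrative):

```python
def circulate(ring, wants_cs):
    """One full trip of the token around the logical ring.

    ring: process ids in ring order; wants_cs: set of ids that currently
    need a critical section. Returns the ids that entered, in token order.
    """
    entered = []
    for pid in ring:              # token arrives at each process in turn
        if pid in wants_cs:
            entered.append(pid)   # enter the CS, do the work, leave
        # in either case, pass the token to the next process in the ring
    return entered

print(circulate(["p0", "p1", "p2", "p3"], wants_cs={"p1", "p3"}))
```

Mutual exclusion holds because at any instant exactly one process holds the token; the sketch deliberately omits token-loss detection, which is the algorithm's hard part.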
Election
These algorithms are designed to elect a coordinator process, and they guarantee that once an election starts, it ends with all the processes in the system agreeing on the new coordinator.
The bully algorithm (Garcia-Molina): this algorithm starts when a process notices that the coordinator no longer answers its requests. That process then sends an election message to every process with a higher number than its own, which leads to one of the following scenarios:
- Some process with a higher number than the sender answers OK and takes over the election; ultimately the highest-numbered live process is elected coordinator of the system.
- No process answers the election message, and the sender is elected coordinator.
Ring: this algorithm operates similarly to the bully algorithm, with the following differences:
- The election message circulates to all the processes in the system, not only those with higher numbers than the initiator.
- Each process appends its identifier to the message.
- When the message completes the ring and returns to the process that sent it, that process declares the highest-numbered process in the list the new coordinator, and a second message circulates around the ring announcing who the coordinator is.
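The outcome of a bully election can be sketched directly from the two scenarios above: the initiator challenges all higher-numbered processes, and either the highest live one wins or, if none answers, the initiator wins (function and parameter names are illustrative; the message exchange itself is elided):

```python
def bully_election(initiator, processes, alive):
    """Who ends up coordinator after `initiator` starts a bully election.

    processes: all process numbers; alive: the set still responding.
    """
    higher = [p for p in processes if p > initiator and p in alive]
    if not higher:
        return initiator          # nobody bigger answered: initiator wins
    # each responder starts its own election; the largest live process wins
    return max(higher)

# process 3 notices the old coordinator (5) is down and starts an election
print(bully_election(3, processes=[1, 2, 3, 4, 5], alive={1, 2, 3, 4}))  # 4
```

The name fits: the biggest live process always "bullies" its way into the coordinator role.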
Atomic transactions
This is a higher-level synchronization method: unlike the methods reviewed so far, it does not burden the developer with mutual exclusion, deadlock prevention, and failure recovery. Instead, it directs the developer's effort to the real, substantive synchronization problems of the distributed system. The concept of an atomic transaction is to guarantee that all the operations that make up a transaction execute completely and successfully; if any one of them fails, the entire transaction fails, its effects are rolled back, and the transaction can be restarted.
Threads
Today's operating systems can support multiple threads of control within one process. Two notable features: the threads of a process share a single address space, and they simulate concurrency, as if they were separate processes running in parallel. Only on a multiprocessor machine can they actually run in parallel. A thread can be in one of four states:
- Running: the thread is executing.
- Blocked: the thread is waiting for a resource.
- Ready: the thread can be scheduled again.
- Terminated: the task has finished.
Implementing a threads package: there are two ways to implement threads.
In user space: with a user-level package the kernel need not know that the threads exist, so the kernel manages only a single thread per process. The threads run on top of a run-time system, a collection of procedures. When a thread must be suspended, the run-time system stores its registers in a thread table, looks for a ready thread, and reloads the machine registers with that thread's saved values. The main advantages are:
- Each process can have its own scheduling algorithm.
- Thread switching is faster, since it does not involve the kernel.
- The approach scales better as the number of threads grows.
In the kernel
Unlike the user-space implementation, the kernel implementation needs no run-time system: the kernel keeps a single table that manages all the threads, even though this means a higher cost in resources and machine time. One of its most important advantages is that threads can make blocking system calls, since the kernel can schedule another thread of the process while one is blocked.