Introduction to Docker
History of Docker
2004: Solaris Containers / Zones technology introduced
2008: Linux containers (LXC 1.0) introduced
2013: Solomon Hykes starts Docker as an internal project within dotCloud
Mar 2013: Docker released to open source
Feb 2016: Docker introduces its first commercial product, now called Docker Enterprise Edition
Today: the open source community includes 3,300+ contributors, 43,000+ stars, and 12,000+ forks
A History Lesson
In the Dark Ages: one application on one physical server.
Historical limitations of application deployment
• Slow deployment times
• Huge costs
• Wasted resources
• Difficult to scale
• Difficult to migrate
• Vendor lock-in
A History Lesson
Hypervisor-based Virtualization
• One physical server can contain multiple applications
• Each application runs in a virtual machine (VM)
Benefits of VMs
• Better resource pooling
– One physical machine divided into multiple virtual machines
• Easier to scale
• VMs in the cloud
– Rapid elasticity
– Pay as you go model
Limitations of VMs
• Each VM still requires
– CPU allocation
– Storage
– RAM
– An entire guest operating system
• The more VMs you run, the more resources you need
• Guest OS means wasted resources
• Application portability not guaranteed
What is a container?
• Standardized packaging for software and dependencies
• Isolates apps from each other
• Shares the same OS kernel
• Works with all major Linux distributions and Windows Server
Comparing Containers and VMs
Containers are an app-level construct.
VMs are an infrastructure-level construct that turns one machine into many servers.
Containers and VMs together
Containers and VMs together provide a tremendous amount of
flexibility for IT to optimally deploy and manage apps.
Key Benefits of Docker Containers
Speed
• No OS to boot = applications online in seconds
Portability
• Fewer dependencies between process layers = ability to move between infrastructures
Efficiency
• Less OS overhead
• Improved VM density
Docker Basics
Image
The basis of a Docker container. The content at rest.
Container
The image when it is ‘running.’ The standard unit for an app or service.
Engine
The software that executes commands for containers. Networking and volumes are part of
Engine. Can be clustered together.
Registry
Stores, distributes and manages Docker images
Control Plane
Management plane for container and cluster orchestration
Building a Software Supply Chain
(diagram: developers build traditional and microservices applications and push images to the image registry; IT operations manage deployments through the control plane)
Docker registry
A Docker registry is a storage and distribution system for named Docker images. The
same image might have multiple different versions, identified by their tags.
A Docker registry is organized into Docker repositories, where a repository holds all the
versions of a specific image.
The registry allows Docker users to pull images locally, as well as push new images to the
registry (given adequate access permissions where applicable).
By default, the Docker engine interacts with Docker Hub, Docker’s public registry instance.
However, it is possible to run the open-source Docker registry (distribution) on-premises, as
well as a commercially supported version called Docker Trusted Registry.
Run your first container
Docker run
One of the first and most important commands Docker users learn is the docker run
command. This comes as no surprise, since its primary function is to create and run
containers.
There are many different ways to run a container. By adding attributes to the basic syntax,
you can configure a container to run in detached mode, set a container name, mount a
volume, and perform many more tasks.
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
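For instance, the options mentioned above can be combined in one command. This is an illustrative sketch (the image, name, port, and volume are arbitrary choices, and it requires a running Docker daemon):

```shell
# Run nginx in detached mode (-d), with a name, a published port,
# and a named volume mounted into the container
docker run -d \
  --name my-web \
  -p 8080:80 \
  -v my-data:/usr/share/nginx/html \
  nginx:latest

# Check that it is running, then stop and remove it
docker ps
docker stop my-web
docker rm my-web
```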
Docker run
> docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
[...]
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
[...]
Build your own Docker image
Dockerfile basics
A Dockerfile is a simple text file that contains a list of commands that the Docker client calls
while creating an image.
It's a simple way to automate the image creation process.
The commands you write in a Dockerfile are almost identical to their equivalent Linux
commands: this means you don't really have to learn a new syntax to create your own
Dockerfiles.
Dockerfile directives
FROM
The FROM directive sets the base image for the subsequent instructions. A Dockerfile must have
a FROM directive with a valid image name as its first instruction.
FROM ubuntu:20.04
RUN
Using the RUN directive, you can run any command against the image at build time. For example, you can
install required packages during the build of the image.
RUN apt-get update
RUN apt-get install -y apache2 automake build-essential curl
Dockerfile directives
COPY
The COPY directive copies files and directories from the host system into the image during the build.
For example, the first command below copies all files from the host's html/ directory into the image's
/var/www/html/ directory.
The second command copies all files with the .conf extension into the /etc/apache2/sites-available/ directory.
COPY html/* /var/www/html/
COPY *.conf /etc/apache2/sites-available/
WORKDIR
The WORKDIR directive sets the working directory for any RUN, CMD, ENTRYPOINT, COPY
and ADD instructions that follow it during the build.
WORKDIR /opt
Dockerfile directives
CMD
The CMD directive specifies the command to run when a container is launched from the image
(typically the service or software the image contains), along with any arguments. CMD uses the following basic syntax:
CMD ["executable","param1","param2"]
For example, to start the Apache service when the container launches, use the following command.
CMD ["apachectl", "-D", "FOREGROUND"]
EXPOSE
The EXPOSE directive documents the ports on which a container will listen for connections. You can
then bind host system ports to these container ports and use them.
EXPOSE 80
EXPOSE 443
Dockerfile directives
ENV
The ENV directive sets an environment variable in the image, available to the services running in the container.
ENV PATH=$PATH:/usr/local/pgsql/bin/
ENV PG_MAJOR=9.6.0
VOLUME
The VOLUME directive creates a mount point with the specified name and marks it as holding externally
mounted volumes from the native host or from other containers.
VOLUME ["/data"]
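Put together, the directives above could form a complete Dockerfile for an Apache image. This is an illustrative sketch, not a production configuration; the noninteractive frontend setting is an assumption to keep apt from prompting during the build:

```dockerfile
FROM ubuntu:20.04

# Install Apache; DEBIAN_FRONTEND avoids interactive prompts during apt-get
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y apache2

# Copy the site content from the host into the image
COPY html/ /var/www/html/

WORKDIR /opt
ENV PATH=$PATH:/usr/local/apache2/bin
VOLUME ["/data"]
EXPOSE 80

# Run Apache in the foreground so the container stays alive
CMD ["apachectl", "-D", "FOREGROUND"]
```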
Sample Dockerfile
Given this Dockerfile:
FROM alpine
CMD ["echo", "Hello Tor Vergata!"]
Build and run it:
docker build -t hello .
docker run --rm hello
This will output:
Hello Tor Vergata!
Sample Dockerfile
FROM nginx:latest
RUN touch /testfile
COPY ./index.html /usr/share/nginx/html/index.html
Docker build / push
Use docker build to build your image locally:
docker build -t <registry>/<image name>:<tag> .
Then use docker push to publish your image to a registry:
docker push <registry>/<image name>:<tag>
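A concrete, hypothetical example of this workflow, assuming Docker Hub as the registry and myuser/hello as the image name (both placeholders):

```shell
# Build the image locally, tagged for the target registry
docker build -t myuser/hello:1.0 .

# Authenticate, then push the tag to the registry (Docker Hub here)
docker login
docker push myuser/hello:1.0
```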
Data persistence
Docker containers provide you with a writable layer on top to make changes to your running container.
But these changes are bound to the container’s lifecycle: if the container is deleted (not stopped), you
lose your changes.
Let’s take a hypothetical scenario where you are running a database in a container without any data
persistence configured.
You create some tables and add some rows to them: but if, for some reason, you need to delete this
container, all your tables and their corresponding data are lost as soon as the container is deleted.
Docker provides us with a couple of solutions to persist your data even if the container is deleted.
The two possible ways to persist your data are:
• Bind Mounts
• Volumes
Bind mounts
Bind mounts have been around since the early days of Docker.
When you use a bind mount, a file or directory on the host machine is mounted into a
container.
The file or directory is referenced by its absolute path on the host machine.
By contrast, when you use a volume, a new directory is created within Docker’s storage directory
on the host machine, and Docker manages that directory’s contents.
The file or directory does not need to exist on the Docker host already.
It is created on demand if it does not yet exist.
Bind mounts are very performant, but they rely on the host machine’s filesystem having a specific
directory structure available.
If you are developing new Docker applications, consider using named volumes instead.
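A bind mount is created with -v host-path:container-path, or with the more explicit --mount syntax. The host path and container names below are illustrative:

```shell
# Mount the host directory /srv/site into the container (read-only)
docker run -d --name web \
  -v /srv/site:/usr/share/nginx/html:ro \
  nginx:latest

# Equivalent, using the --mount syntax
docker run -d --name web2 \
  --mount type=bind,source=/srv/site,target=/usr/share/nginx/html,readonly \
  nginx:latest
```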
Docker volumes
Volumes are the preferred mechanism for persisting data generated by and used by Docker
containers.
While bind mounts are dependent on the directory structure and OS of the host machine,
volumes are completely managed by Docker.
Volumes have several advantages over bind mounts:
● Volumes are easier to back up or migrate than bind mounts.
● You can manage volumes using Docker CLI commands or the Docker API.
● Volumes work on both Linux and Windows containers.
● Volumes can be more safely shared among multiple containers.
● Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt the
contents of volumes, or to add other functionality.
● New volumes can have their content pre-populated by a container.
Bind mounts vs volumes
Docker volumes
> docker volume create my-volume
> docker run -d --name test -v my-volume:/app nginx:latest
Docker networking
Docker network basics
Docker networking is used to connect Docker containers to each other and to the
outside world.
Docker uses CNM (Container Network Model) for networking.
This model standardizes the steps required to provide networking for containers using
multiple network drivers.
Bridge networking
The bridge network is the default network a container is attached to
when you deploy it.
Bridge network uses a software bridge that allows containers
connected to the same bridge network to communicate.
Bridge networks are used on containers that are running on
the same Docker daemon host.
The bridge network creates a private internal network on the
host, so containers attached to it can communicate with each
other.
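In practice you usually create a user-defined bridge network, which also gives containers DNS resolution by name. A minimal sketch (network, container, and image names are illustrative):

```shell
# Create a user-defined bridge network
docker network create my-net

# Attach two containers to it; they can now reach each other by name
docker run -d --name db --network my-net \
  -e POSTGRES_PASSWORD=secret postgres:15
docker run -d --name app --network my-net my-app:latest

# Inspect which containers are attached
docker network inspect my-net
```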
Host networking
Host networking removes the network isolation between the
Docker host and the Docker containers.
Host mode networking can be useful to optimize
performance.
It does not require network address translation (NAT).
The host networking driver only works on Linux hosts, and
is not supported on Docker Desktop for Mac, Docker
Desktop for Windows, or Docker EE for Windows Server.
Overlay networking
Overlay networking is used when a container on node A needs
to talk to a container on node B: it connects containers
running on different Docker hosts.
Overlay networking uses VXLAN to create an Overlay
network.
This has the advantage of providing maximum portability
across various cloud and on-premises networks.
Overlay network traffic can be encrypted with the AES
algorithm by creating the network with the --opt encrypted flag.
Exposing ports
By default, when you create a container, Docker does not publish or
expose the application ports running inside it.
These applications can be accessed only from within the Docker host,
not from other systems on the network.
You can explicitly bind a port or group of ports from the container to
the host using the -p flag.
docker run [...] -p 8000:5000 docker.io/httpd
Docker cheatsheet
Getting Started With Docker: Simplifying DevOps
Docker ecosystem
Docker compose
Docker Compose is a tool that was developed to help define and share multi-container
applications.
With Compose, we can create a YAML file to define the services and, with a single
command, spin everything up or tear it all down.
Each of the containers runs in isolation but can interact with the others when required.
Docker Compose files are very easy to write in YAML, a human-readable data serialization
language whose name stands for "YAML Ain't Markup Language."
version: "3.7"
services:
  app:
    image: node:12-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos
  mysql:
    image: mysql:5.7
    volumes:
      - mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos
volumes:
  mysql-data:
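With a file like the one above saved as docker-compose.yml, the whole stack can be managed with a few commands (this assumes the Compose V2 plugin; older installs use the docker-compose binary instead):

```shell
# Start all services in the background
docker compose up -d

# List running services and follow their logs
docker compose ps
docker compose logs -f

# Tear everything down (add --volumes to also remove named volumes)
docker compose down
```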
Docker orchestration
Docker swarm
Docker swarm is a container orchestration tool, meaning that it allows the user to manage
multiple containers deployed across multiple host machines.
A Docker Swarm is a group of either physical or virtual machines that are running the
Docker application and that have been configured to join together in a cluster.
Once a group of machines have been clustered together, you can still run the Docker
commands that you're used to, but they will now be carried out by the machines in your
cluster.
The activities of the cluster are controlled by a swarm manager, and machines that have
joined the cluster are referred to as nodes.
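A minimal sketch of setting up a swarm; the advertise address and service name are illustrative:

```shell
# On the manager node: initialize the swarm
docker swarm init --advertise-addr 192.168.1.10

# On each worker node: join using the token printed by the command above
# docker swarm join --token <token> 192.168.1.10:2377

# Back on the manager: run a service with 3 replicas across the cluster
docker service create --name web --replicas 3 -p 80:80 nginx:latest
docker service ls
```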
Kubernetes
Kubernetes is an open source system to deploy, scale, and manage containerized
applications.
It automates operational tasks of container management and includes built-in commands for
deploying applications, rolling out changes to your applications, scaling your applications up
and down to fit changing needs, monitoring your applications, and more.
Application developers, IT system administrators and DevOps engineers use Kubernetes to
automatically deploy, scale, maintain, schedule and operate multiple application containers
across clusters of nodes.
Containers run on top of a common shared operating system (OS) on host machines but
are isolated from each other unless a user chooses to connect them.
Docker playground
https://www.docker.com/play-with-docker/