Posted in DevOps, Docker

What is Vagrant? How to use Vagrant.

When you use Windows and want to run Linux, the usual choice is a virtual machine (Oracle VirtualBox, for example). With VirtualBox you have to set the memory and disk capacity for the VM, then go online, download an Ubuntu image and mount it before it will run. It is not an overly long process, but it is tedious.

In this post I will introduce a new tool called Vagrant.

Definition: Vagrant is a cross-platform tool that lets you describe a virtual machine (in the form of a Vagrantfile) and deploy it to a hypervisor (such as VirtualBox on your laptop).

Vagrant works much like Docker: you just create a Vagrantfile that configures your Linux machine, for example the following file:

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-20.04"
end

This file specifies that you will create a Vagrant box running Ubuntu 20.04.
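If you need more than the defaults, the same Vagrantfile can also set resources and networking. This is only a sketch, assuming the VirtualBox provider; the port numbers and sizes below are just example values:

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-20.04"

  # Forward guest port 80 to host port 8080 (example values)
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # Resources for the VirtualBox provider (example values)
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
    vb.cpus = 2
  end
end

Note that vagrant up has to be run from the directory containing the Vagrantfile; otherwise Vagrant reports an error like the one below: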

C:\Users\Thanh\VirtualBox VMs> vagrant up
A Vagrant environment or target machine is required to run this
command. Run `vagrant init` to create a new Vagrant environment. Or,
get an ID of a target machine from `vagrant global-status` to run
this command on. A final option is to change to a directory with a
Vagrantfile and to try again.

To boot the machine, just run "vagrant up" from that directory:

C:\Users\Thanh\VirtualBox VMs> vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'bento/ubuntu-20.04'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'bento/ubuntu-20.04' version '202010.24.0' is up to date...
==> default: A newer version of the box 'bento/ubuntu-20.04' for provider 'virtualbox' is
==> default: available! You currently have version '202010.24.0'. The latest is version
==> default: '202012.23.0'. Run `vagrant box update` to update.
==> default: Setting the name of the VM: VirtualBoxVMs_default_1617373630300_91414
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
    default: /vagrant => C:/Users/Thanh/VirtualBox VMs

To get into the machine and start working, use the command "vagrant ssh".

C:\Users\Thanh\VirtualBox VMs> vagrant ssh
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-52-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

 System information disabled due to load higher than 1.0

Vagrant makes setting up a virtual machine on your laptop much simpler. A couple of commands like vagrant up and you have a virtual machine running Ubuntu or CentOS; to get into it, just run vagrant ssh.
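A few other everyday Vagrant commands are worth knowing (these are standard Vagrant CLI subcommands, listed here for reference):

vagrant status    # show the state of the machine
vagrant halt      # shut the machine down
vagrant destroy   # delete the machine entirely
vagrant box list  # list the boxes cached on your host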

The line between virtual machines and containers is blurring. Recent Vagrant releases have added a lot, including Vagrant Cloud. Vagrant Cloud is a hosting service for boxes, similar to Docker Hub, from which you can download boxes and use them the same way. For example, if you need a VM with PostgreSQL preinstalled, just grab the box from Vagrant Cloud, download the Vagrantfile, and bring it up.

In the next post I will cover setting up Docker. Docker is similar: you only need a Dockerfile, then use docker run to start the container.

Posted in Docker, Robotics

Installing and running ROS2 in a Docker container

A while back I studied robotics and had the chance to work with ROS (Robot Operating System). I find ROS a very nice framework for getting familiar with robot programming, and it also provides many APIs that make programming robots easier.

These days I work as a cloud engineer, so instead of working hands-on with robots I mostly deploy services. Robotics is still my hobby, though, so I keep running a few robot and self-driving-car projects, which is why this site exists. I think the two areas can be combined into some interesting projects.

In this post I will show you how to install ROS2 in a Docker container:

A quick word on what Docker is

You can think of Docker on your PC as a computer inside your computer. Docker is a service that provides isolated, preconfigured environments with the necessary software already installed. These environments make your life easier because they come with the right software versions for whatever application you are trying to run.

Why install ROS in a Docker container? You can install ROS directly on Windows or macOS, but it is more complicated. So you can either use a virtual machine (VM) running Linux, or use Docker, which you can loosely think of as a more modern take on the VM.

OK, on to the guide:

First, install Docker for Windows by following the official installation guide:

https://docs.docker.com/docker-for-windows/install/

If anyone runs into problems installing Docker on Windows, I will write a separate post about it later. After installing Docker, tick the "Use WSL 2" option.

Once Docker is installed on Windows, install the Ubuntu app:

Via this link: Get Ubuntu 20.04 LTS – Microsoft Store

Open the Ubuntu app and first check that Docker is working:

docker

If there is no error, continue by pulling and running the Docker container that ships ROS2 with VNC, using the following command:

docker run -p 6080:80 --shm-size=512m tiryoh/ros2-desktop-vnc:dashing
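Optionally (purely a convenience, not required by the image), give the container a fixed name so the stop and restart commands later in this post can refer to it by name:

docker run -p 6080:80 --shm-size=512m --name ros2-desktop-vnc tiryoh/ros2-desktop-vnc:dashing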

Next, open Google Chrome and go to the following URL:

http://127.0.0.1:6080/

At this point you have four virtual desktops to work with; you can switch between them at the bottom-left corner of the screen.

Now that we have the Docker environment running, let's continue with the ROS setup:

Installing TurtleBot3

Click the menu icon at the bottom left.

Go to System Tools -> LXTerminal.

Download TurtleBot3.

mkdir -p ~/turtlebot3_ws/src
cd ~/turtlebot3_ws
wget https://raw.githubusercontent.com/ROBOTIS-GIT/turtlebot3/ros2/turtlebot3.repos
vcs import src < turtlebot3.repos

Wait a couple of minutes for TurtleBot3 to download into the container.

Compile the code with the following command (still inside ~/turtlebot3_ws):

colcon build --symlink-install

Set the environment variables.

echo 'source ~/turtlebot3_ws/install/setup.bash' >> ~/.bashrc
echo 'export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:~/turtlebot3_ws/src/turtlebot3/turtlebot3_simulations/turtlebot3_gazebo/models' >> ~/.bashrc
echo 'export TURTLEBOT3_MODEL=waffle_pi' >> ~/.bashrc
source ~/.bashrc
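To confirm the variables were picked up, you can check one of them in the same terminal:

echo $TURTLEBOT3_MODEL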

Test-running Gazebo on ROS2:

Anyone who has studied or needs to study ROS will recognize these commands. Launch the simulation using the ros2 launch command.

ros2 launch turtlebot3_gazebo empty_world.launch.py

Wait a moment for Gazebo to start; it is slow under normal conditions, and running the simulator inside a Docker container takes a bit longer. Below is the result.
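To drive the simulated robot around, you can open another terminal and start the standard TurtleBot3 keyboard teleoperation node (assuming the turtlebot3_teleop package was built in the workspace above):

ros2 run turtlebot3_teleop teleop_keyboard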

By following the steps above you can simulate a virtual environment for a robot inside a virtual machine running on your Windows PC, with the help of Docker and the Ubuntu 20.04 app from the Microsoft Store. All of the software above is free.

How to stop the ROS2 Docker container:

Either go back to the Ubuntu app that is running the Docker container and close it, or use the following command (replace ros2-desktop-vnc with your container's name or ID from docker ps if you did not name it):

docker stop ros2-desktop-vnc

To start it again, the command is:

docker restart ros2-desktop-vnc
Posted in Docker

Docker & Kubernetes- Helm chart repository

Creating a Helm chart repository

In this post, we’ll learn how to create and work with Helm chart repositories.

A chart repository is an HTTP server that houses an index.yaml file and optionally some packaged charts. Because a chart repository can be any HTTP server that can serve YAML and tar files and can answer GET requests, we have a plethora of options when it comes down to hosting our own chart repository. For example, we can use a Google Cloud Storage (GCS) bucket, Amazon S3 bucket, GitHub Pages, or even create our own web server.

Once the charts are ready and we need to share them, the easiest way to do so is by uploading them to a chart repository. However, Helm does not ship with a hosted chart repository, although Helm 2 can serve a local repository via "helm serve".

Github Pages

We can create a chart repository using GitHub Pages, which allows us to serve static web pages.

All we need is to host a single index.yaml file along with a bunch of .tgz files.

Create a new GitHub repository

Though it's an empty repo, let's just clone it for now:

$ git clone https://github.com/Einsteinish/dummy-helm-charts.git    

Create a helm chart

We need to have the Helm CLI installed and initialized:

$ helm version
Client: &version.Version{SemVer:"v2.16.10", GitCommit:"bceca24a91639f045f22ab0f41e47589a932cf5e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.10", GitCommit:"bceca24a91639f045f22ab0f41e47589a932cf5e", GitTreeState:"clean"}

$ helm3 version
version.BuildInfo{Version:"v3.3.1", GitCommit:"249e5215cde0c3fa72e27eb7a30e8d55c9696144", GitTreeState:"dirty", GoVersion:"go1.15"}    

We're going to use the ./sources/ directory for the sources of our charts, so we need to create the charts under ./sources/:

$ cd dummy-helm-charts/
$ mkdir sources  

$ tree
.
└── dummy-helm-charts
    ├── README.md
    └── sources
    
$ helm create sources/dummy-chart
Creating sources/dummy-chart

$ tree
.
├── README.md
└── sources
    └── dummy-chart
        ├── Chart.yaml
        ├── charts
        ├── templates
        │   ├── NOTES.txt
        │   ├── _helpers.tpl
        │   ├── deployment.yaml
        │   ├── ingress.yaml
        │   ├── service.yaml
        │   ├── serviceaccount.yaml
        │   └── tests
        │       └── test-connection.yaml
        └── values.yaml
        
$ helm lint sources/*
==> Linting sources/dummy-chart
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures

Create the Helm chart package

$ helm package sources/*
Successfully packaged chart and saved it to: 
/Users/kihyuckhong/Documents/Minikube/Helm/DUMMY/dummy-helm-charts/dummy-chart-0.1.0.tgz

Create the repository index

A chart repository is an HTTP server that houses an index.yaml file.

The index file contains information about each chart and provides the download URL, for example, https://example.com/charts/alpine-0.1.2.tgz for that chart.

The index file is a yaml file called index.yaml. It contains some metadata about the package, including the contents of a chart’s Chart.yaml file. A valid chart repository must have an index file. The helm repo index command will generate an index file based on a given local directory that contains packaged charts.

$ helm repo index --url https://einsteinish.github.io/helm-chart/ . 

$ tree
.
├── README.md
├── dummy-chart-0.1.0.tgz
├── index.yaml
└── sources
    └── dummy-chart
        ├── Chart.yaml
        ├── charts
        ├── templates
        │   ├── NOTES.txt
        │   ├── _helpers.tpl
        │   ├── deployment.yaml
        │   ├── ingress.yaml
        │   ├── service.yaml
        │   ├── serviceaccount.yaml
        │   └── tests
        │       └── test-connection.yaml
        └── values.yaml


$ cat index.yaml
apiVersion: v1
entries:
  dummy-chart:
  - apiVersion: v1
    appVersion: "1.0"
    created: "2020-10-22T13:11:36.940863-07:00"
    description: A Helm chart for Kubernetes
    digest: c8d82f24fc29d40693a608a1fd8db1c2596a8325ecae62529502a1cbae8677a2
    name: dummy-chart
    urls:
    - https://einsteinish.github.io/helm-chart/dummy-chart-0.1.0.tgz
    version: 0.1.0
generated: "2020-10-22T13:11:36.935738-07:00"

Pushing to Github repository

$ git add .

$ git commit -m "initial commit"
[main 4006e22] initial commit
 12 files changed, 322 insertions(+)
 create mode 100644 dummy-chart-0.1.0.tgz
 create mode 100644 index.yaml
 create mode 100644 sources/dummy-chart/.helmignore
 create mode 100644 sources/dummy-chart/Chart.yaml
 create mode 100644 sources/dummy-chart/templates/NOTES.txt
 create mode 100644 sources/dummy-chart/templates/_helpers.tpl
 create mode 100644 sources/dummy-chart/templates/deployment.yaml
 create mode 100644 sources/dummy-chart/templates/ingress.yaml
 create mode 100644 sources/dummy-chart/templates/service.yaml
 create mode 100644 sources/dummy-chart/templates/serviceaccount.yaml
 create mode 100644 sources/dummy-chart/templates/tests/test-connection.yaml
 create mode 100644 sources/dummy-chart/values.yaml

$ git push origin main
...
To https://github.com/Einsteinish/dummy-helm-charts.git
   9d23dff..4006e22  main -> main    

Setting up Github Pages as a helm chart repository

From the "Settings" page of the GitHub repository, scroll down to the GitHub Pages section and configure it as follows.

Click "Save".

Helm client configuration

To use the charts from this repository, clients need to configure their own Helm client using the helm repo command:

$ helm repo add dummy https://einsteinish.github.io/dummy-helm-charts
"dummy" has been added to your repositories
~/Documents/Minikube/Helm/DUMMY/dummy-helm-charts $ helm repo list
NAME  	URL                                             
stable	https://kubernetes-charts.storage.googleapis.com
local 	http://127.0.0.1:8879/charts                    
dummy 	https://einsteinish.github.io/dummy-helm-charts

$ helm search dummy
NAME             	CHART VERSION	APP VERSION	DESCRIPTION                
dummy/dummy-chart	0.1.0        	1.0        	A Helm chart for Kubernetes
local/dummy-chart	0.1.0        	1.0        	A Helm chart for Kubernetes
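From here, installing the chart from the new repository works like any other repo. A minimal example with the Helm 2 client (the release name my-dummy is just an example):

$ helm repo update
$ helm install dummy/dummy-chart --name my-dummy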

Chart repository updates

Whenever we want to add a new chart to the Helm chart repository, we should regenerate the index.yaml file. The helm repo index command will rebuild the index.yaml file including only the charts that it finds locally. Note that we can use the --merge flag to incrementally add new charts to an existing index.yaml:

$ helm repo index --url https://einsteinish.github.io/dummy-helm-charts/ --merge index.yaml .
Posted in Docker

More about Docker run command

Docker run command

The basic syntax for the Docker run command looks like this:

k@laptop:~$ docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container

We're going to use a very tiny Linux distribution called BusyBox, which packs several stripped-down Unix tools into a single executable and runs in a variety of POSIX environments such as Linux, Android, FreeBSD, etc. We're going to execute a shell command in a newly created container, which will then put us on a shell prompt:

k@laptop:~$ docker run -it busybox sh
/ # echo 'bogotobogo'
bogotobogo
/ # exit
k@laptop:~$

The docker ps shows us containers:

k@laptop:~$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

But right now, we don't have any containers running. If we use docker ps -a, it will display all containers, even those that are not running:

k@laptop:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
4d673944ec59        busybox:latest      "sh"                13 minutes ago      Exited (0) 12 minutes ago                       trusting_mccarthy   

When we execute the run command, it creates a new container and executes a command within that container. The container will remain until it is explicitly deleted. We can restart the container:

k@laptop:~$ docker restart 4d673944ec59
4d673944ec59   

We can attach to that container. The usage looks like this:

docker attach [OPTIONS] CONTAINER

Let’s attach to the container. It will put us back on the shell where we were before:

k@laptop:~$ docker attach 4d673944ec59

/ # ls
bin      etc      lib      linuxrc  mnt      proc     run      sys      usr
dev      home     lib64    media    opt      root     sbin     tmp      var
/ # exit

The docker attach command allows us to attach to a running container using the container’s ID or name, either to view its ongoing output or to control it interactively.

Now, we know that a container can be restarted, attached, or even be killed!

k@laptop:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                          PORTS               NAMES
4d673944ec59        busybox:latest      "sh"                21 minutes ago      Exited (0) About a minute ago                       trusting_mccarthy  

But the one thing we can't do is change the command that's been executed. Once we have created a container and it has a command associated with it, that command will always be run when we restart the container. We could restart and attach to the previous one because its command was a shell. But what if we're running something else, like an echo command:

k@laptop:~$ docker run -it busybox echo 'bogotobogo'
bogotobogo

Now, we’ve created another container which does echo:

k@laptop:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS                          PORTS               NAMES
849975841b4e        busybox:latest      "echo bogotobogo"   About a minute ago   Exited (0) About a minute ago                       elegant_goldstine   
4d673944ec59        busybox:latest      "sh"                29 minutes ago       Exited (0) 9 minutes ago                            trusting_mccarthy

It executed "echo bogotobogo" and then exited, but it is still hanging around. Even if we restart it:

k@laptop:~$ docker restart 849975841b4e
849975841b4e

It runs in the background and we don't see any output; all it did was run "echo bogotobogo" again. We can remove this container:

k@laptop:~$ docker rm 849975841b4e
849975841b4e

Now it’s been removed from our list of containers:

k@laptop:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
4d673944ec59        busybox:latest      "sh"                35 minutes ago      Exited (0) 14 minutes ago                       trusting_mccarthy   

We want this one to be removed as well:

k@laptop:~$ docker rm 4d673944ec59
4d673944ec59
k@laptop:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
k@laptop:~$ 

docker run -it

Let’s look at the -it in docker run -it.

k@laptop:~$ docker run --help

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Run a command in a new container

  -i, --interactive=false    Keep STDIN open even if not attached

  -t, --tty=false            Allocate a pseudo-TTY
...

If we run with just -i:

k@laptop:~$ docker run -i busybox sh

ls
bin
dev
etc
home
lib
lib64
linuxrc
media
mnt
opt
proc
root
run
sbin
sys
tmp
usr
var

cd home

cd /home

ls
default
ftp

exit
k@laptop:~$

It is interactive mode, and we can do whatever we want: "ls", "cd /home", etc. However, it does not give us a console (tty).

If we issue docker run with just -t:

k@laptop:~$ docker run -t busybox sh
/ #

We get a terminal we can work on, but we can't do anything because the container cannot get any input from us. We can't execute anything since it's not receiving what we're passing in.

k@laptop:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS                        PORTS               NAMES
3bcaa1d57ac0        busybox:latest      "sh"                15 seconds ago       Up 14 seconds                                     dreamy_yonath           
ac53c2a81ebe        busybox:latest      "sh"                About a minute ago   Exited (127) 55 seconds ago                       condescending_galileo   
497be20f5d7e        busybox:latest      "sh"                3 minutes ago        Exited (-1) 2 minutes ago                         nostalgic_wilson  

Docker run –rm

We need to kill the container that was run with only "-t":

      
k@laptop:~$ docker kill 3bcaa1d57ac0
3bcaa1d57ac0

k@laptop:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                       PORTS               NAMES
3bcaa1d57ac0        busybox:latest      "sh"                3 minutes ago       Exited (-1) 2 minutes ago                        dreamy_yonath           
ac53c2a81ebe        busybox:latest      "sh"                3 minutes ago       Exited (127) 3 minutes ago                       condescending_galileo   
497be20f5d7e        busybox:latest      "sh"                6 minutes ago       Exited (-1) 4 minutes ago                        nostalgic_wilson        

Then, we want to remove all containers:

 
k@laptop:~$ docker rm $(docker ps -aq)
3bcaa1d57ac0
ac53c2a81ebe
497be20f5d7e

k@laptop:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
k@laptop:~$ 

The -q flag in docker ps -aq gives us the IDs of all containers.

The next thing to address: we don't want to have to remove containers by hand every time we execute a shell command:

 
k@laptop:~$ docker run -it --rm busybox sh
/ # echo Hello
Hello
/ # exit

k@laptop:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
k@laptop:~$ 

Notice that we don't have any containers. By passing in --rm, the container is automatically deleted once it exits.

 
k@laptop:~$ docker run -it --rm busybox echo "I will not staying around forever"
I will not staying around forever
k@laptop:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
k@laptop:~$ 

Docker rmi – deleting local images

To remove all images from our local machine:

k@laptop:~$ docker rmi $(docker images -q)
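On newer Docker versions there is also a prune subcommand, if we only want to clean up unused images rather than everything (shown here as an alternative, not required for the steps below):

$ docker image prune        # remove dangling images only
$ docker image prune -a     # remove all images not used by any container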

Saving container – docker commit

As we work with a container and continue to perform actions on it (installing software, configuring files, etc.), we need to commit to keep its state. Committing makes sure that everything continues from where it left off next time.

Exit docker container:

root@f510d7bb05af:/# exit
exit

Then, we can save the image:

# Usage: sudo docker commit [container ID] [image name]
$ sudo docker commit f510d7bb05af my_image
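After committing, the new image shows up in docker images and can be started like any other image (my_image here is just the name we chose above):

$ docker images
$ docker run -it my_image /bin/bash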

Running CentOS on Ubuntu 14.04

As an exercise, we now want to run CentOS on Ubuntu 14.04.

Before running it, check whether we have any Docker containers, including running ones:

k@laptop:~$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

OK. There are no running containers. How about exited ones:

k@laptop:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Nothing's there!

Let’s search Docker registry:

k@laptop:~$ docker search centos
NAME                            DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
centos                          The official build of CentOS.                   1223      [OK]       
ansible/centos7-ansible         Ansible on Centos7                              50                   [OK]
jdeathe/centos-ssh-apache-php   CentOS-6 6.6 x86_64 / Apache / PHP / PHP m...   11                   [OK]
blalor/centos                   Bare-bones base CentOS 6.5 image                9                    [OK]
...

We can download the image using the docker pull command. The command finds the image by name on Docker Hub and downloads it to a local image cache.

k@laptop:~$ docker pull centos
latest: Pulling from centos
c852f6d61e65: Pull complete 
7322fbe74aa5: Pull complete 
f1b10cd84249: Already exists 
Digest: sha256:90305c9112250c7e3746425477f1c4ef112b03b4abe78c612e092037bfecc3b7
Status: Downloaded newer image for centos:latest

Note that when the image is successfully downloaded, we see 12-character hashes, which are the short form of the image IDs. These short image IDs are the first 12 characters of the full image ID, which can be found using docker inspect or docker images --no-trunc=true:

k@laptop:~$ docker inspect centos:latest
[
{
    "Id": "7322fbe74aa5632b33a400959867c8ac4290e9c5112877a7754be70cfe5d66e9",
    "Parent": "c852f6d61e65cddf1e8af1f6cd7db78543bfb83cdcd36845541cf6d9dfef20a0",
    "Comment": "",
    "Created": "2015-06-18T17:28:29.311137972Z",
    "Container": "d6b8ab6e62ce7fa9516cff5e3c83db40287dc61b5c229e0c190749b1fbaeba3f",
    "ContainerConfig": {
        "Hostname": "545cb0ebeb25",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "PortSpecs": null,
        "ExposedPorts": null,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": null,
        "Cmd": [
            "/bin/sh",
            "-c",
            "#(nop) CMD [\"/bin/bash\"]"
        ],
        "Image": "c852f6d61e65cddf1e8af1f6cd7db78543bfb83cdcd36845541cf6d9dfef20a0",
        "Volumes": null,
        "VolumeDriver": "",
        "WorkingDir": "",
        "Entrypoint": null,
        "NetworkDisabled": false,
        "MacAddress": "",
        "OnBuild": null,
        "Labels": {}
    },
    "DockerVersion": "1.6.2",
    "Author": "The CentOS Project \u003ccloud-ops@centos.org\u003e - ami_creator",
    "Config": {
        "Hostname": "545cb0ebeb25",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "PortSpecs": null,
        "ExposedPorts": null,
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": null,
        "Cmd": [
            "/bin/bash"
        ],
        "Image": "c852f6d61e65cddf1e8af1f6cd7db78543bfb83cdcd36845541cf6d9dfef20a0",
        "Volumes": null,
        "VolumeDriver": "",
        "WorkingDir": "",
        "Entrypoint": null,
        "NetworkDisabled": false,
        "MacAddress": "",
        "OnBuild": null,
        "Labels": {}
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 0,
    "VirtualSize": 172237380
}
]

Or:

k@laptop:~$ docker images --no-trunc=true
REPOSITORY          TAG                 IMAGE ID                                                           CREATED             VIRTUAL SIZE
centos              latest              7322fbe74aa5632b33a400959867c8ac4290e9c5112877a7754be70cfe5d66e9   8 weeks ago         172.2 MB

k@laptop:~$ docker images --no-trunc=false

Or:

k@laptop:~$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
centos              latest              7322fbe74aa5        8 weeks ago         172.2 MB

Let’s run Docker with the image:

k@laptop:~$ docker run -it centos:latest /bin/bash
[root@33b55acb4814 /]# 

Now, we are on CentOS. Let’s make a file, and then just exit:

[root@33b55acb4814 /]# cd /home

[root@33b55acb4814 home]# touch bogo.txt

[root@33b55acb4814 home]# ls
bogo.txt

[root@33b55acb4814 home]# exit
exit

k@laptop:~$

Check if there is any running container:

k@laptop:~$ docker ps 
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Nothing's there. The docker ps command shows only running containers, so if we want to see all containers, we need to add the -a flag to the command:

k@laptop:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
33b55acb4814        centos:latest       "/bin/bash"         2 minutes ago       Exited (0) 51 seconds ago                       grave_jang          

We do not have any Docker process that’s actively running. But we can restart the one that we’ve just exited from.

$ docker restart 33b55acb4814
33b55acb4814

k@laptop:~$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
33b55acb4814        centos:latest       "/bin/bash"         4 minutes ago       Up 4 seconds                            grave_jang          

Note that the container is now running, and it actually executed the /bin/bash command.

We can go back to where we left by executing docker attach ContainerID command:

k@laptop:~$ docker attach 33b55acb4814

[root@33b55acb4814 /]# 

Now, we're back. To make sure, we can check that the file we created is still there:

[root@33b55acb4814 /]# cd /home
[root@33b55acb4814 home]# ls
bogo.txt
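If we want to keep this state as a reusable image, we can exit and commit the container the same way as in the earlier commit section (the image name centos-bogo is just an example):

$ docker commit 33b55acb4814 centos-bogo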
Posted in Docker

Docker Container

What is Docker? Docker fundamentals

I. Why use Docker?

Setting up and deploying an application onto one or more servers is hard work: you have to install the tools and environment the application needs, get the application running, and deal with inconsistencies between environments on different servers. Docker was created to solve exactly this problem.

II. What is Docker?

Docker is a platform for developers and sysadmins to develop, deploy and run applications with containers. It lets you create isolated, self-contained environments for launching and developing applications, and such an environment is called a container. When you need to deploy to any server, you just run the Docker container and your application starts immediately.

III. Benefits of Docker

  • Unlike virtual machines, Docker containers start and stop in just a few seconds.
  • You can launch a container on any system you want.
  • Containers can be built and discarded faster than virtual machines.
  • It is easy to set up a working environment: configure it once and you never have to reinstall the dependencies. If you change machines or someone new joins the project, you just hand that configuration over to them.
  • It keeps your workspace cleaner when you remove an environment, without affecting other parts.

IV. Installation

Download link: here

Choose the installer that matches your operating system. On Linux, follow the installation instructions; on Windows and macOS you just download the installer and install it like any other application.

After installation, how do you check whether it was successful?

  • Open a command line:
$ docker version
$ docker info
$ docker run hello-world

V. Some concepts

  • Docker Client: how you interact with Docker, via commands in a terminal. The Docker Client uses the API to send commands to the Docker Daemon.
  • Docker Daemon: the Docker server that handles requests from the Docker API. It manages images, containers, networks and volumes.
  • Docker Volumes: the preferred way to store persistent data for the apps you create and use.
  • Docker Registry: a place to store Docker images privately. Images are pushed to a registry, and clients pull images from it. You can run your own registry or use one from a provider such as AWS, Google Cloud or Microsoft Azure.
  • Docker Hub: the largest registry of Docker images (the default one). You can find images there and host your own images on Docker Hub for free.
  • Docker Repository: a collection of Docker images with the same name but different tags, e.g. golang:1.11-alpine.
  • Docker Networking: lets you connect containers together. The connection can be on a single host or across multiple hosts.
  • Docker Compose: a tool that makes it easier to run applications made up of multiple Docker containers. Docker Compose lets you put the configuration in a docker-compose.yml file so it can be reused (see the small sketch after this list). It is available as soon as Docker is installed.
  • Docker Swarm: for orchestrating container deployments.
  • Docker Services: containers in production. A service runs only one image, but it encodes how that image runs: which ports to use, how many container replicas to run so the service has the capacity it needs, and so on.
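As a minimal illustration of what a docker-compose.yml might look like (the nginx image and the port mapping are just placeholder values):

version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"

Running docker-compose up in the directory containing this file would start the web service with its port published on the host.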

VI. Dockerfile

– A Dockerfile is the configuration file Docker uses to build an image. It uses a base image to build the initial image layer. Some common base images: python, ubuntu and alpine. Additional layers are then stacked on top of the base layer, and finally a thin layer can be stacked on top of all the previous ones.

– The instructions:

  • FROM: specifies the base image: python, ubuntu, alpine...
  • LABEL: adds metadata to the image. Can be used to add maintainer information. To view an image's labels, use the docker inspect command.
  • ENV: sets an environment variable.
  • RUN: runs a command while the image is being built. Used to install packages into the container.
  • COPY: copies files and directories into the container.
  • ADD: also copies files and directories into the container (and can additionally unpack local archives).
  • CMD: provides a command and arguments for the container to execute. The parameters can be overridden, and there can be only one CMD.
  • WORKDIR: sets the working directory for other instructions such as RUN, CMD, ENTRYPOINT, COPY, ADD,...
  • ARG: defines a variable that can be used while building the image.
  • ENTRYPOINT: provides the command and arguments for an executing container.
  • EXPOSE: declares the port the image listens on.
  • VOLUME: creates a mount point for accessing and storing data.

VII. Creating a demo

Create a Dockerfile

FROM golang:1.11 AS builder
WORKDIR /go/src/docker-demo/
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o docker-demo .

FROM alpine:latest
WORKDIR /root/
COPY --from=builder /go/src/docker-demo .
CMD ["./docker-demo"]

Create a main.go file

package main

import (
	"fmt"
)

func main() {
	fmt.Println("Learning Docker")
}

Build the image from the Dockerfile

$ docker build .
$ docker run 4cc010d9d657

The result prints the line Learning Docker that was coded in main.go.
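A small optional convenience (the tag docker-demo is just an example): build the image with a tag so you can run it by name instead of by its hash:

$ docker build -t docker-demo .
$ docker run docker-demo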

VIII. Basic Docker commands

  • List images/containers:
$ docker image/container ls
  • Delete an image/container:
$ docker image/container rm <image/container name>
  • Delete all existing images:
$ docker image rm $(docker images -a -q)
  • List all existing containers:
$ docker ps -a
  • Stop a specific container:
$ docker stop <container name>
  • Run a container from an image and set the container name:
$ docker run --name <container name> <image name>
  • Stop all containers:
$ docker stop $(docker ps -a -q)
  • Delete all existing containers:
$ docker rm $(docker ps -a -q)
  • Show the logs of a container:
$ docker logs <container name>
  • Build an image from a Dockerfile:
$ docker build -t <image name> .
  • Run a container in the background (detached):
$ docker run -d <image name>
  • Pull an image from Docker Hub:
$ docker pull <image name>
  • Start a container:
$ docker start <container name>
Posted in Docker

Google Cloud Platform Tutorial

What is Cloud Computing?

Cloud computing is defined as the services offered through remote servers on the internet. These services might include database storage, applications, compute power and other IT resources over the pay-as-you-go pricing approach. The remote server allows users to save, modify, or process data on the internet or cloud-based platform instead of storing it on a local server or their devices.

Cloud computing is evolving due to fast performance, better manageability, and less maintenance. It helps organizations to minimize the number of resources and overall infrastructure costs. Additionally, it helps IT teams better focus on the important applications, services, and processes and achieve the company’s goals.

Typically, the cloud-computing providers offer their services according to the following three standard models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).


What is Google Cloud Platform?

Google Cloud Platform (GCP) is a suite of cloud computing services provided by Google. It is a public cloud computing platform consisting of a variety of services like compute, storage, networking, application development, Big Data, and more, which run on the same cloud infrastructure that Google uses internally for its end-user products, such as Google Search, Photos, Gmail and YouTube, etc.

The services of GCP can be accessed by software developers, cloud administrators and IT professionals over the Internet or through a dedicated network connection.

Why Google Cloud Platform?

Google Cloud Platform is known as one of the leading cloud providers in the IT field. Its services and features can be easily accessed and used by software developers and users with little technical knowledge. Google has stayed ahead of its competitors by offering a highly scalable and reliable platform for building, testing and deploying applications in a real-time environment.

Apart from this, GCP was named a leading cloud platform in Gartner's IaaS Magic Quadrant in 2018. Gartner is one of the leading research and advisory companies. In Gartner's evaluation, Google Cloud Platform was compared with other cloud providers and placed among the top three providers in the market.

Most companies use data centers because of the availability of cost forecasting, hardware certainty, and advanced control. However, they lack the necessary features to run and maintain resources in the data center. GCP, on the other hand, is a fully featured cloud platform that includes:

  • Capacity: Sufficient resources for easy scaling whenever required. Also, effective management of those resources for optimum performance.
  • Security: Multi-level security options to protect resources, such as assets, network and OS components.
  • Network Infrastructure: Number of physical, logistical, and human-resource-related components, such as wiring, routers, switches, firewalls, load balancers, etc.
  • Support: Skilled professionals for installation, maintenance, and support.
  • Bandwidth: Suitable amount of bandwidth for peak load.
  • Facilities: Other infrastructure components, including physical equipment and power resources.

Therefore, Google Cloud Platform is a viable option for businesses, especially when the businesses require an extensive catalog of services with global recognition.

Benefits of Google Cloud Platform

Some of the main benefits of Google Cloud Platform are explained below:

Best Pricing: Google enables users to get Google Cloud hosting at the cheapest rates. The hosting plans are not only cheaper than other hosting platforms but also offer better features than others. GCP provides a pay-as-you-go option to the users where users can pay separately only for the services and resources they want to use.

Work from Anywhere: Once the account is configured on GCP, it can be accessed from anywhere. That means that the user can use GCP across different devices from different places. It is possible because Google provides web-based applications that allow users to have complete access to GCP.

Private Network: Google has its own network that enables users to have more control over GCP functions. Due to this, users achieve smooth performance and increased efficiency over the network.

Scalable: Users are getting a more scalable platform over the private network. Because Google uses fiber-optic cables to extend its network range, it is likely to have more scalability. Google is always working to scale its network because there can be any amount of traffic at any time.

Security: There is a high number of security professionals working at Google. They always keep trying to secure the network and protect the data stored on servers. Additionally, Google uses an algorithm that encrypts all the data on the Cloud platform. This gives assurance to the users that their data is completely safe and secure from unauthorized sources.

Redundant Backup: Google always keeps backup of user’s data with built-in redundant backup integration. In case a user has lost the stored data, it’s not a big problem. Google always has a copy of the users’ data unless the data is deleted forcefully. This adds data integrity, reliability and durability with GCP.

Key Features of Google Cloud Platform

The following are some key features of Google Cloud Platform:

  • On-demand services: Automated environment with web-based tools. Therefore, no human intervention is required to access the resources.
  • Broad network access: The resources and the information can be accessed from anywhere.
  • Resource pooling: On-demand availability of a shared pool of computing resources to the users.
  • Rapid elasticity: The availability of more resources whenever required.
  • Measured service: Easy-to-pay feature enables users to pay only for consumed services.

Working of Google Cloud Platform

When a file is uploaded on the Google cloud, the unique metadata is inserted into a file. It helps identify the different files and track the changes made across all the copies of any particular file. All the changes made by individuals get synchronized automatically to the main file, also called a master file. GCP further updates all the downloaded files using metadata to maintain the correct records.

Let’s understand the working of GCP with a general example:

Suppose that MS Office is implemented on Cloud to enable several people to work together. The primary aim of using cloud technology is to work on the same project at the same time. We can create and save a file on the cloud once we install a plugin for the MS Office suite. This will allow several people to edit a document at the same time. The owner can assign access to specific people to allow them to download and start editing the document in MS Office.

Once users are assigned as an editor, they can use and edit the document’s cloud copy as desired. The combined, edited copy is generated that is known as the master document. GCP helps to assign a unique URL to each specific copy of the existing document given to different users. However, any of the authorized users’ changes will be visible on all the copies of documents shared over the cloud. In case multiple changes are made to the same document, then GCP allows the owner to select the appropriate changes to keep.

Google Cloud Platform Services

Google provides a considerable number of services with several unique features. That is the reason why Google Cloud Platform is continually expanding across the globe. Some of the significant services of GCP are:

  • Compute Services
  • Networking
  • Storage Services
  • Big Data
  • Security and Identity Management
  • Management Tools
  • Cloud AI
  • IoT (Internet of Things)

Let's understand each of these services in detail:

Compute Services

GCP offers a scalable range of computing services, such as:

  • Google App Engine: It is a cloud computing platform that follows the concept of Platform-as-a-Service to deploy PHP, Java and other software. It is also used to develop and deploy web-based software in Google-managed data centers. The most significant advantage of Google App Engine is its automatic scaling capability. This means that the App Engine automatically allocates more resources for the application when there is an increase in requests.
  • Compute Engine: It is a cloud computing platform that follows the concept of Infrastructure-as-a-Service to run Windows and Linux based virtual machines. It is an essential component of GCP. It is designed on the same infrastructure used by Google search engine, YouTube and other Google services.
  • Kubernetes Engines: This computing service is responsible for offering a platform for automatic deployment, scaling, and other operations of application containers across clusters of hosts. The engine supports several container tools like a docker, etc.

Networking

GCP includes the following network services:

  • VPC: VPC stands for Virtual Private Network. The primary function of VPC is to offer a private network with routing, IP allocation, and network firewall policies. This will help to create a secure environment for the application deployments.
  • Cloud Load Balancing: As its name states, cloud load balancing is used to distribute workload across different computing resources to balance the entire system's performance. This also results in cost reduction. The process also helps in maximizing the availability and capability of the resources.
  • Content Delivery Network: CDN is a geographically distributed network of proxy servers and their data centers. The primary aim of using CDN is to provide maximum performance to the users. Additionally, it also helps deliver high availability of resources by equally distributing the related services to the end-users.

Storage Services

GCP has the following storage services:

  • Google Cloud Storage: It is an online data storage web service that Google provides to its users to store and access data from anywhere. The service also includes a wide range of features like maximum performance, scalability, security and sharing.
  • Cloud SQL: It is a web-service that enables users to create, manage, and use relational databases stored on Google Cloud servers. The service itself maintains and protects the databases, which helps users focus on their applications and other operations.
  • Cloud Bigtable: It is known for its fast performance and highly manageable feature. It is a highly scalable NoSQL database service that allows collecting and retaining data from as low as 1 TB to hundreds of PB.

Big Data

GCP provides a variety of services related to big data; they are:

  • BigQuery: It is a fully managed data analysis service by Google. The primary aim of the Google BigQuery service is to help businesses analyze Big Data. It offers a highly scalable data management option. This means BigQuery allows users to perform ad-hoc queries and share data insights across the web.
  • Google Cloud Datastore: Google Cloud Datastore is a kind of datastore service that is fully managed, schema-less, and non-relational. This service enables businesses to perform automatic transactions and a rich set of queries. The main advantage of Google Cloud Datastore is the capability of automatic scaling. This means that the service can itself scale up and down, depending on the requirement of resources.
  • Google Cloud Dataproc: It is a very fast and easy to use big data service offered by Google. It mainly helps in managing Hadoop and Spark services for distributed data processing. The service allows users to create Hadoop or Spark clusters sized according to the overall workload and can be accessed whenever users want them.

Security and Identity Management

GCP includes the following services related to Security and Identity management:

  • Cloud Data Loss Prevention API: It is mainly designed to manage sensitive data. It helps users manage sensitive data elements like credit card details, debit card details, passport numbers, etc. It offers fast and scalable classification for sensitive data.
  • Cloud IAM: It stands for Cloud Identity and Access Management. It is a framework that contains rules and policies and validates the authentication of the users for accessing the technology resources. That is why it is also known as Identity Management (IdM).

Management Tools

GCP includes the following services related to management tools:

  • Google Stackdriver: Google Stackdriver service is primarily responsible for displaying the overall performance and diagnostics information. This may include insights of data monitoring, tracing, logging, error reporting, etc. The service also prompts an alert notification to the public cloud users.
  • Google Cloud Console App: It is a native mobile application powered by Google. The primary aim of this service is to enable users to manage the core features of Google Cloud services directly from their mobile devices anytime, anywhere. The primary functions of this service are alerting, monitoring, and performing critical actions on resources.

Cloud AI

When it comes to Cloud AI, GCP offers these services:

  • Cloud Machine Learning Engine: It is another fully managed service that allows users to create Machine Learning models. The service is mainly used for those ML models, which are based on mainstream frameworks.
  • Cloud AutoML: It is the type of service that is based on Machine Learning. It helps users to enter their data sets and gain access to quality trained pre-designed ML models. The service works by following Google’s transfer learning and Neural Architecture Search method.

IoT (Internet of Things)

GCP contains the following IoT services:

Cloud IoT Core: It is one of the fully managed core services. It allows users to connect, control, and ingest data from various devices that are securely connected to the Internet. This allows other Google cloud services to analyze, process, collect and visualize IoT data in real-time.

Cloud IoT Edge: The Edge computing service brings memory and other computing-power resources near to the location where it is required.

Advantages of Google Cloud Platform

There are several advantages of using Google Cloud Platform, such as:

  • Google Cloud Offers Quick and Easy Collaboration: Multiple users can access the data and simultaneously contribute their information. This is possible because the data is stored on the cloud servers, not on the user’s personal computers.
  • Higher Productivity with Continuous Development: Google is always working on adding new features and functionalities to provide higher productivity to the customers. Therefore, Google delivers frequent updates to its products and services.
  • Less Disruption with Adopting New Features: Instead of pushing huge disruptive updates of changes, Google provides small updates weekly. This helps users to understand and adopt new features easily.
  • Least or Minimal Data is stored on Vulnerable Devices: Google does not store data on local devices unless a user explicitly tries to do it. This is because the data stored on local devices may get compromised compared to the cloud’s data.
  • Users can access Google Cloud from Anywhere: The best thing is that a user can easily access the information stored on Google cloud from anywhere because it is operated through web-based applications.
  • Google provides Maximum Security with its Robust Structure: Google hires leading security professionals to protect user’s data. Users get process-based and physical security features made by Google.
  • Users have Full Control over their Data: Users gain full control over services and the data stored in Google Cloud. If a user does not want to use Google services any longer and wants to delete the cloud data, it can be easily performed.
  • Google provides Higher Uptime and Reliability: Google uses several resources to provide higher and reliable up-time servers. If a data center is not working for technical issues, the system will automatically communicate with the secondary center without interruption visible to users.

Creating a Free Tier Account on GCP

To start using Google Cloud Platform, we first need to create an account on GCP. Here, we will create a free tier account for the purposes of this tutorial. The best thing about the free account is that Google provides $300 worth of credit to spend over the 90 days after the date of account creation. Google offers all the core services of GCP with a free account for those 90 days.

However, users must have a credit card to start a free tier account. Google asks for the credit card details to make sure that it is a genuine human request. Google does not charge automatically, even after the 90 days or when we have exhausted the $300 free credit. The amount will only be charged when we manually upgrade our free account to a paid account.

Let’s start with the steps of creating a free tier account on Google Cloud Platform:

Step 1: First, we are required to navigate to the following link: https://cloud.google.com/gcp/

Step 2: On the next screen, we need to click on ‘Get started for free’, as shown below:

(screenshot)

Step 3: Next, we are required to login to the Google Account. We can use the ‘create an account’ button if we don’t have an existing Google account.

(screenshot)

Step 4: Once we have logged in, we will get to the following screen:

(screenshot)

Here, we must select the Country, agree to the Terms of Service, and then click on the ‘CONTINUE’ button.

Step 5: On the next screen, we have to enter some necessary details such as name and address details. Also, we have to enter payment details like the method of payments and credit card details. After filling all the details, we need to click on the button ‘START MY FREE TRIAL’ from the bottom of the page:

(screenshot)

Step 6: Google asks for the confirmation to use the credit card for the small deduction to ensure that the card information is correct. However, the amount is refunded back to the same account. Here, we need to click on the ‘CONTINUE’ button:

(screenshot)

Step 7: On the next screen, we must click on the ‘GO TO CONSOLE’ button:

(screenshot)

After clicking on the ‘GO TO CONSOLE’ button, we will be redirected to the Dashboard that includes a summary of GCP services along with projects and other insights. It looks like this:

(screenshot)

To be specific, the GCP Dashboard displays a summarized view of the following:

  • Project Info: contains project details such as project name, ID, and number.
  • Resources: contains a list of resources being used in the related project.
  • APIs: contains various API requests running with the project (in request/sec form).
  • Google Cloud Platform Status: displays an overall summary of services that are part of GCP.
  • Monitoring: displays alerts, performance stats, Uptime, etc. to ensure that systems are running reliably.
  • Error Reporting: displays errors occurring in the projects, but it needs to be configured first.
  • Trace: displays latency data of existing applications across a distributed tracing system.
  • Compute Engine: displays the insights of CPU usage in percentage (%).
  • Tutorials: contains Getting Started guides (basic guides) to explain how the GCP features work.
  • News: displays news and other important updates regarding Google Cloud Platform.
  • Documentation: contains in-depth guides to teach more about Compute Engine, Cloud Storage, and App Engine.

Google Cloud Platform Pricing

When it comes to pricing, Google Cloud Platform is the cheapest solution in the market. GCP is not only low on price but also offers more features and services than other providers.

When comparing GCP with other leading competitors, it has more benefits over them. Google offers its users savings of up to 60%, including:

  • 15% rightsizing recommendation
  • 21% list price differences
  • 24% of sustained usage discounts

Some of the main benefits of GCP pricing are:

No Hidden Charges: There are no hidden charges behind the GCP pricing. Google’s pricing structure is straightforward and can be easily understood.

Pay-as-you-go: Google offers its customers a 'use now, pay later' option. So, users only pay for the services they want to use or are already using.

No Termination Fee: Users are free to stop using Google services whenever they want, and they will not have to pay any termination fee. The moment users stop using Google services, they stop paying for them.

Difference between Google Cloud Platform, AWS and Azure

Like Google cloud platform, AWS and Azure are the other popular cloud-based platforms. However, there are differences amongst them. Some of the main differences between GCP, AWS and Azure are tabulated below:

  • Compute: Google Cloud uses GCE (Google Compute Engine) for computing purposes; AWS offers its core compute services through EC2; Azure uses virtual machines.
  • Storage: Google Cloud uses Google Cloud Storage; AWS uses Amazon S3 for storing data; Azure uses block blob storage, which comprises blocks for storing data.
  • Pricing: Google Cloud offers the lowest prices to customers to beat other cloud providers; AWS's granular pricing structure is fairly complex and hard to decipher; like AWS, the Azure pricing structure is also difficult to understand without considerable experience.
  • App testing: Google Cloud uses Cloud Test Lab; AWS uses Device Farm; Azure uses DevTest Labs.
  • Virtual network: Google Cloud uses Subnets; AWS uses VPC; Azure uses VNet.
  • Load balancing: Google Cloud follows the Cloud Load Balancing configuration; AWS follows Elastic Load Balancing; Azure follows the Load Balancer / Application Gateway configuration.

Job Opportunities with GCP

Having in-depth knowledge of the Google Cloud Platform is very useful for job purposes, and hands-on experience or deeper expertise with GCP will help a person stand out from the crowd. This will not only make a resume more effective but will also open up a variety of job opportunities.

There are many job-opportunities with GCP. Some popular job-roles are listed below:

  • Technical Lead Manager
  • Sales Engineer
  • Technical Solutions Engineer
  • Account Executive
  • Technical Program Manager
  • Cloud Software Engineer
  • Data Center Software Engineer
  • Solutions Architect
  • Strategic Customer Engineer

Prerequisite

There is no special prerequisite for this GCP Tutorial. All you need is continuous learning and practicing with the tools. However, if you want to extend functionalities to match your requirements, then a basic knowledge of working with cloud-based software and tools will be beneficial and put you at an advantage. We have designed this tutorial to help you learn all the concepts of Google Cloud Platform from scratch.

Audience

Our Google Cloud Platform Tutorial is designed to help beginners and professionals.

Problem

We assure you that you will not find any difficulty while learning through our Google Cloud Platform Tutorial. But if you find any mistake in this tutorial, we request you to kindly post the problem in the contact form so that we can improve it.