Most self-driving cars in the world work as follows; the pipeline is structured as in the diagram below:
Computer Vision (image processing) plus Sensor Fusion (combining the signals from several sensors to produce the most accurate information) determine exactly where the car is on the map. From there, the car uses Path Planning algorithms to find a route to its destination, and the final step is to send commands to the car's actuators: how fast to drive and how to steer.
That is the overview; the sections below go through each step in detail.
Computer vision is how we use images to figure out what the world around us looks like. Sensor fusion is how we incorporate data from other sensors, like lasers, lidar, and radar, to get a richer understanding of the environment. Once we have built this deep understanding of what the world looks like, we use path planning to chart a course through the world that gets us to where we would like to go. The final step is how we actually turn the steering wheel and hit the throttle and the brake in order to execute the trajectory that we built during path planning.
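The four stages above can be sketched as a toy pipeline. This is purely illustrative: every function, number, and threshold here is made up to show how the stages feed into each other, not how any real self-driving stack is written.

```python
# Toy sketch of the perception -> localization -> planning -> control pipeline.
# All names and values are illustrative placeholders, not real APIs.

def sense(world):
    """Computer vision + sensor fusion: how far is the obstacle ahead?"""
    return {"obstacle_ahead_m": world["lead_car_m"] - world["ego_m"]}

def localize(world):
    """Where are we on the map? (Stand-in for centimeter-level localization.)"""
    return world["ego_m"]

def plan(position_m, perception, target_m):
    """Path planning: a series of waypoints toward the goal,
    stopping short of the obstacle with a 5 m safety margin."""
    end = min(target_m, position_m + perception["obstacle_ahead_m"] - 5.0)
    waypoints, x = [], position_m + 5.0
    while x <= end:
        waypoints.append(x)
        x += 5.0
    return waypoints

def control(position_m, waypoints):
    """Control: turn the plan into an actuator command."""
    if not waypoints:
        return {"throttle": 0.0}
    return {"throttle": 0.5 if waypoints[0] > position_m else 0.0}

world = {"ego_m": 0.0, "lead_car_m": 30.0}
perception = sense(world)
position = localize(world)
path = plan(position, perception, target_m=100.0)
command = control(position, path)
```

Each stage only consumes the previous stage's output, which is the point of the diagram: perception and localization describe the world, planning decides what to do in it, and control executes the decision.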
II/ Computer Vision
Computer vision uses a camera mounted at the front of the car to detect lanes and other vehicles. It looks for colors, edges, and gradients to find the lane lines on the road, and then a deep neural network is trained to draw bounding boxes around the other vehicles on the road.
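To make the "edges and gradients" idea concrete, here is a minimal sketch on a single synthetic scanline. A real lane finder would run Sobel or Canny operators over whole images (e.g. with OpenCV); this toy version just flags large brightness jumps between neighboring pixels, which is the same underlying signal.

```python
def strong_edges(row, threshold=0.5):
    """Return column indices where the horizontal intensity gradient is strong.

    Lane lines are bright paint on dark asphalt, so their borders show up
    as big jumps in brightness between neighboring pixels. Toy stand-in
    for the gradient/edge stage of lane finding.
    """
    return [j for j in range(len(row) - 1)
            if abs(row[j + 1] - row[j]) > threshold]

# One scanline of a synthetic road image: dark asphalt (0.1)
# with a bright lane marking (0.9) at column 3.
scanline = [0.1, 0.1, 0.1, 0.9, 0.1, 0.1, 0.1, 0.1]
edges = strong_edges(scanline)  # the two borders of the bright marking
```

The two detected columns bracket the lane marking; stacking this over many scanlines (and fitting lines through the hits) is roughly what the classical lane-detection stage does.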
Deep neural networks and deep learning are an exciting new part of machine learning and artificial intelligence. They are how a computer can learn what cars and other objects look like simply by being shown lots and lots of data: it sees lots of cars, and this is pretty similar to what advanced driver-assistance systems on the road do today.
III/ Sensor Fusion
Once we know what the world looks like from images, the next step is to augment that understanding using other sensors, such as radar and lasers, to get measurements that are difficult for the camera alone to obtain: things like the distance between our car and other cars, and how fast other objects in the environment are moving.
A lidar has an array of lasers that performs a 360-degree scan of the world, revealing what the different objects in the environment look like and how they move.
So once we understand both what the world looks like and how to measure it, and we combine those understandings into a rich picture of our surrounding environment, the next step is to localize ourselves in that environment.
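One classic tool for combining noisy measurements is the Kalman filter, which sensor-fusion pipelines use heavily. Below is a minimal 1-D sketch that fuses a few noisy radar range readings to a lead car; the noise constants and initial guess are illustrative, not values from a real system.

```python
def kalman_1d(z_measurements, x0, p0=1.0, r=0.5, q=0.1):
    """Minimal 1-D Kalman filter: fuse noisy distance measurements.

    x: estimated distance, p: estimate variance,
    r: measurement noise, q: process noise (all illustrative values).
    """
    x, p = x0, p0
    for z in z_measurements:
        p += q                   # predict: uncertainty grows over time
        k = p / (p + r)          # Kalman gain: how much to trust the sensor
        x += k * (z - x)         # update the estimate toward the measurement
        p *= (1 - k)             # uncertainty shrinks after each update
    return x, p

# Radar reports a lead car roughly 20 m ahead, with noise on each reading.
est, var = kalman_1d([19.6, 20.3, 19.9, 20.1], x0=20.0)
```

The estimate settles near 20 m while the variance drops with every measurement, which is exactly the "richer understanding than any single sensor" effect the text describes. A real fusion stack runs multidimensional versions of this (position plus velocity) across camera, radar, and lidar.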
Many people assume that GPS, which localizes our cellphones today, tells us exactly where we are, but it is not really like that: GPS uncertainty is about 1 to 2 meters. In self-driving we need much higher accuracy, because a small localization error is very dangerous when the car is running on a highway at 150 to 200 km/h. So we have to use much more sophisticated mathematical algorithms, as well as high-definition maps, to localize the vehicle in its environment precisely, down to single-digit-centimeter accuracy.
A particle filter is a very good way to localize a vehicle in a known environment. The vehicle measures its distance to various landmarks, figuring out how far away each landmark is and where it sees it, then compares that to the map to work out precisely where it is in the world. Those landmarks might be things like streetlights, traffic signs, mailboxes, or even manhole covers.
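Here is a minimal sketch of that particle-filter idea on a 1-D road. Everything is simplified for clarity (noise-free measurements, no motion or resampling step, made-up landmark positions); a real filter works in 2-D with bearings, motion updates, and resampling.

```python
import math
import random

random.seed(42)

# Known landmark positions along a 1-D road (streetlights, signs, ...).
LANDMARKS = [10.0, 35.0, 60.0]
TRUE_POS = 20.0  # ground truth, used here only to simulate the measurements

# The car measures its distance to each landmark.
measured = [abs(lm - TRUE_POS) for lm in LANDMARKS]

# 1. Scatter particles (position guesses) along the road.
particles = [random.uniform(0.0, 70.0) for _ in range(500)]

# 2. Weight each particle by how well its predicted landmark
#    distances match what the car actually measured.
def weight(p, sigma=2.0):
    w = 1.0
    for lm, z in zip(LANDMARKS, measured):
        w *= math.exp(-((abs(lm - p) - z) ** 2) / (2 * sigma ** 2))
    return w

weights = [weight(p) for p in particles]

# 3. The weighted mean of the particles is the position estimate.
estimate = sum(p * w for p, w in zip(particles, weights)) / sum(weights)
```

Particles whose predicted distances disagree with the measurements get vanishingly small weights, so the estimate concentrates near the true position; iterating this with motion updates and resampling is what tracks the car at centimeter level.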
IV/ Path Planning
When we know what the world looks like and where we are in it, the next step is to actually chart a path through that world to figure out how to get where we want to go, and this is path planning. A path planner predicts where the other vehicles on the road are going, then figures out the maneuver our vehicle should take in response, and finally builds a series of waypoints (the green points in the video) for the car to drive through; that is the trajectory the car should follow. If the vehicle comes up on other traffic, it has to figure out whether it should slow down and stay in its lane, shift right, or shift left. These are the kinds of decisions that a real self-driving car has to make all the time, subject to constraints like speed limits and acceleration limits.
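That slow-down-or-change-lanes decision can be sketched as a tiny rule-based chooser plus waypoint builder. The thresholds, lane offsets, and spacing below are invented for illustration; real planners score many candidate trajectories against cost functions for safety, comfort, and the speed and acceleration limits mentioned above.

```python
def choose_maneuver(gap_ahead_m, left_lane_free, right_lane_free,
                    safe_gap_m=30.0):
    """Toy maneuver selection: keep lane if the gap ahead is safe,
    otherwise prefer an open adjacent lane, else slow down.
    All thresholds are illustrative, not real planner parameters."""
    if gap_ahead_m >= safe_gap_m:
        return "keep_lane"
    if left_lane_free:
        return "shift_left"
    if right_lane_free:
        return "shift_right"
    return "slow_down"

def waypoints(start_m, maneuver, n=5, spacing_m=5.0):
    """Build a short series of (distance, lane_offset) waypoints
    that gradually applies the chosen maneuver's lateral offset."""
    offset = {"keep_lane": 0.0, "slow_down": 0.0,
              "shift_left": -1.0, "shift_right": 1.0}[maneuver]
    return [(start_m + (i + 1) * spacing_m, offset * (i + 1) / n)
            for i in range(n)]

plan = waypoints(0.0, choose_maneuver(10.0, True, False))
```

With slow traffic 10 m ahead and a free left lane, the chooser picks `shift_left` and the waypoints ramp the lateral offset from 0 to -1 lane over five points, a crude version of the green-dot trajectory described above.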
V/ Control
The final step in the pipeline is control. Control is how we actually turn the steering wheel and hit the throttle and the brake in order to execute the trajectory that we built during path planning.
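A common way to steer along the planned trajectory is a PID controller on the cross-track error (the car's lateral distance from the trajectory). Below is a minimal sketch with a crude toy vehicle model; the gains and the 0.1 response factor are illustrative, not tuned values for any real car.

```python
def make_pid(kp=0.2, ki=0.001, kd=3.0):
    """Return a PID step function mapping cross-track error (CTE) to a
    steering command. Gains are illustrative, not tuned for a real vehicle."""
    state = {"prev": 0.0, "integral": 0.0}
    def step(cte, dt=1.0):
        state["integral"] += cte * dt
        derivative = (cte - state["prev"]) / dt
        state["prev"] = cte
        # Negative sign: steer against the error.
        return -(kp * cte + ki * state["integral"] + kd * derivative)
    return step

pid = make_pid()
# The car starts 1.0 m to the right of the trajectory; simulate a
# crude closed loop where steering slowly moves the car back.
cte = 1.0
for _ in range(50):
    steer = pid(cte)
    cte += 0.1 * steer   # toy plant model, not real vehicle dynamics
```

The proportional term steers toward the trajectory, the derivative term damps the oscillation that P alone would cause, and the integral term removes steady offsets; in this toy loop the cross-track error shrinks steadily toward zero.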
If you want to go into detail, you can apply to the Udacity Self-Driving Car program, where you will learn the theory and programming skills from some of the best professors and engineers from around the world.