Posted in Home

Where the wind creates cloud waterfalls

As the sun dipped lower and lower over the lush, volcanic Canary Islands, travelling steadily onward on its inexorable collision course with the sea, ripples of anticipation resonated across our small group of stargazers. Clad in warm coats (nights on the craggy flanks of the Spanish island’s giant volcano can be blustery), we listened as astrophysicist Agustin Nunez explained why La Palma is – no exaggeration – the best place on Earth to see the stars.

A view of the Milky Way (Credit: Enrique Mesa Photography/Getty)

First, he said, its position 100km off the coast of northern Africa means it is close to the equator, so you can see stars from both the northern and southern hemispheres – but in a temperate climate with placid weather patterns uncommon in the tropics.

Second, it’s very dark here, something that’s aided by an island-wide agreement to keep it that way, meaning all night-time lighting is either an orange hue (which doesn’t interfere with telescopes) or pointed down, at the ground.

But third, and most important, is the wind. “Our trade winds are created by a high-pressure system over the Azores, and travel more than 2,000km over the sea. By the time they reach our north shore, they’re crystal clear,” he said, noting that these smooth, slow winds create an atmosphere in which the stars are especially clear from the ground, both through a telescope and to the naked eye. “Here, we have the lowest turbulence on the planet.”

Where land and clouds collide (Credit: Tim Johnson)

And all this is justly recognised: in 2012, La Palma became the world’s first Unesco-recognised Starlight Reserve. The island is also home to one of the most important observatories on the planet: a place that houses 16 massive telescopes – including the largest in the world.

It’s only recently that visitors have been able to partake in these excellent stargazing opportunities. For years, the observatory was a closed research facility, except for a handful of open days that attracted thousands of curious people. But with the observatory normalizing regular visits in 2013, the infrastructure – including a recent increase in guided starlight tours – is now in place for earthbound visitors to touch distant galaxies.

Down on terra firma, I was shown around the island by Sheila Crosby, an affable Englishwoman with a touch of the mad scientist, who worked at the Observatorio del Roque de los Muchachos for years as a software engineer. She’s also a certified starlight guide, and as she drove us somewhat erratically up and down La Palma’s winding roads, she started to explain the connections between land and sky, geology and astronomy – and how the island’s unique structure has created a number of Earth-bound wonders: for one, a cloud waterfall.

Posted in Robotics

RoboCup Standard Platform League: Goal Detection

Abstract—This paper presents a new, fast and robust goal detection system for the Nao humanoid player in the RoboCup Standard Platform League. The proposed method is based entirely on artificial vision, with no additional sensors. First, the goals are detected by means of color-based segmentation and geometrical image-processing methods applied to the 2D images provided by the front camera mounted in the head of the Nao robot. Then, once the goals have been recognized, the position of the robot with respect to the goal is obtained by exploiting 3D geometric properties. The proposed system is validated on real images that emulate real RoboCup conditions.

Index Terms—RoboCup and soccer robots, Artificial Vision and Robotics, Nao humanoid, Goal Detection, Color Segmentation, 3D geometry.

A. Detection Based on Geometrical Relations
The first proposed method is intended to be robust and fast, in order to overcome some of the usual drawbacks of vision systems in the RoboCup: excessive dependence on illumination and play-field conditions; the difficulty of detecting the goal posts under geometrical transformations (rotations, scale, ...) of the images captured by the robots; and the excessive computational cost of robust solutions based on classical Artificial Vision techniques. The proposed approach can be decomposed into several stages, described in the next subsections.
1) Color calibration: The first stage of the proposed method consists of a color calibration process. Thus, a set of YUV images acquired from the front camera of the Nao robot is segmented into regions representing one color class each.
Fig.2 shows an example image captured by the Nao robot containing a blue goal.
The segmentation process is performed by using a k-means clustering algorithm, but considering all the available centroids as initial seeds. Thus, in fact, seven centroids are utilized, corresponding to the colors of the ball (orange), goals (yellow and blue), field (green), robots (red and blue) and lines (white).
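With every centroid seeded in advance, the k-means assignment step reduces to labelling each pixel with its nearest color class. A minimal sketch of that labelling step — using hypothetical RGB stand-in centroids rather than the paper's actual YUV calibration values:

```python
# Sketch of the seeded color classification described above.
# The centroid values below are hypothetical RGB stand-ins, not the
# paper's YUV calibration data.
CENTROIDS = {
    "orange":   (255, 140, 0),    # ball
    "yellow":   (255, 255, 0),    # goal
    "sky_blue": (80, 180, 230),   # goal
    "green":    (0, 160, 0),      # field
    "red":      (200, 0, 0),      # robot
    "blue":     (0, 0, 160),      # robot
    "white":    (255, 255, 255),  # lines
}

def classify(pixel):
    """Return the color class whose centroid is nearest (squared
    Euclidean distance) to the given pixel."""
    return min(
        CENTROIDS,
        key=lambda name: sum((p - c) ** 2
                             for p, c in zip(pixel, CENTROIDS[name])),
    )
```

In a full k-means pass the centroids would then be re-estimated from the assigned pixels and the labelling repeated until convergence; seeding all seven centroids simply gives that iteration a stable, meaningful starting point.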

The first problem addressed in this project is the segmentation of the image to separate the object (in this case the goal) from the background. As the RoboCup rules specify that goals are painted either sky blue or yellow, a simple color-based segmentation has been chosen. It can be assumed that normally no other objects of similar colors will be present in the picture taken by the camera. If a similar-colored object is present, however, it will corrupt the segmentation result and, depending on its size and shape, can render the method inoperative.

Therefore, proper tuning of the segmentation parameters is mandatory to avoid these situations. The following figure shows the result of an ideal segmentation of a camera image.
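The tuning referred to above amounts to choosing how far a pixel may stray from the goal's reference color before it is rejected. A minimal sketch of such a binary segmentation, with an assumed sky-blue reference and an illustrative per-channel tolerance (neither value comes from the paper):

```python
# Hypothetical sky-blue reference and tolerance; real values would
# come from the color calibration stage described earlier.
GOAL_REF = (80, 180, 230)
TOLERANCE = 60

def is_goal_color(pixel):
    """True if every channel lies within TOLERANCE of the reference."""
    return all(abs(p - r) <= TOLERANCE for p, r in zip(pixel, GOAL_REF))

def goal_mask(image):
    """Binary mask: 1 where a pixel matches the goal color, else 0."""
    return [[1 if is_goal_color(px) else 0 for px in row] for row in image]
```

Too loose a tolerance admits the similar-colored clutter discussed above; too tight a tolerance fragments the goal posts into disconnected blobs.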

Ideal segmentation (right) of the goal in the camera image (left).

Edge detection
Once the image is segmented to separate the goal from the background, one can extract the edges of the resulting binary image. This is done to reduce the number of lines detected in the following Hough transform stage, while retaining the structural information of the goal.
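On a binary mask, edge extraction can be as simple as keeping every foreground pixel that touches the background. This is a minimal stand-in for whatever edge operator the paper actually uses, but it illustrates the kind of thin-contour input the Hough transform stage receives:

```python
def edges(mask):
    """Mark foreground pixels with at least one 4-connected background
    (or out-of-image) neighbour as edge pixels."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue  # background pixels are never edges
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    out[y][x] = 1  # touches background: boundary pixel
                    break
    return out
```

The interior of the goal posts is discarded while their outline survives, which is exactly the reduction in candidate line pixels the text describes.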

Ideal edge detection (right) of the segmented image (left).