    Smartphone-Based System Brings Driverless Cars a Step Closer

    Dec 23, 2015

    Researchers from the University of Cambridge are developing systems that use deep learning to help machines recognise their location and surroundings. The work could be applied to the development of driverless cars and autonomous robotics.

    The systems can identify a user’s location and orientation in places where GPS does not function, and can identify the various components of a road scene in real time using a regular camera or smartphone, doing the same job as sensors costing tens of thousands of pounds.

    The first system, called SegNet, can take an image of a street scene it hasn’t seen before and classify it, sorting objects into 12 different categories – such as roads, street signs, pedestrians, buildings and cyclists – in real time. It can deal with light, shadow and night-time environments, and currently labels more than 90% of pixels correctly. Previous systems using expensive laser- or radar-based sensors have not been able to reach this level of accuracy while operating in real time.
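
    For readers curious how per-pixel classification works in practice, here is a minimal sketch in PyTorch of the published SegNet idea: an encoder-decoder network that records max-pooling indices on the way down and reuses them to upsample on the way up, ending in a score map over the 12 categories. The layer sizes and the name TinySegNet are invented for brevity; this is not the research code.

        # Minimal SegNet-style encoder-decoder (illustrative only; the
        # real network is much deeper). Layer widths are invented.
        import torch
        import torch.nn as nn

        class TinySegNet(nn.Module):
            def __init__(self, num_classes=12):  # 12 road-scene categories
                super().__init__()
                self.enc = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1),
                                         nn.BatchNorm2d(64), nn.ReLU())
                # return_indices=True keeps the argmax locations so the
                # decoder can put features back where they came from
                self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
                self.unpool = nn.MaxUnpool2d(2, stride=2)
                self.dec = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1),
                                         nn.BatchNorm2d(64), nn.ReLU(),
                                         nn.Conv2d(64, num_classes, 3, padding=1))

            def forward(self, x):
                x = self.enc(x)
                x, idx = self.pool(x)    # downsample, remember indices
                x = self.unpool(x, idx)  # upsample using those indices
                return self.dec(x)       # per-pixel class scores

        model = TinySegNet()
        image = torch.randn(1, 3, 360, 480)  # one RGB street-scene frame
        labels = model(image).argmax(dim=1)  # most likely class per pixel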

    For the driverless cars currently in development, radar- and laser-based sensors are expensive, often costing more than the car itself. Rather than recognising objects through a mixture of radar and LIDAR, SegNet learns by example: it was ‘trained’ by a group of undergraduate students, who manually labelled every pixel in each of 5,000 images, with each image taking about 30 minutes to complete. Once the labelling was finished, the researchers took two days to ‘train’ the system before it was put into action.
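
    The ‘training’ described above is standard supervised learning: the network’s per-pixel predictions are compared against the hand-labelled images and the error is propagated back through the network. A hedged sketch of such a loop, with a stand-in model and an assumed data loader, neither taken from the researchers’ pipeline:

        # Illustrative supervised training loop over hand-labelled images;
        # the model, loader and hyperparameters are assumptions.
        import torch
        import torch.nn as nn

        model = nn.Conv2d(3, 12, 3, padding=1)  # stand-in for the full net
        criterion = nn.CrossEntropyLoss()       # per-pixel classification loss
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

        def train_epoch(loader):
            # loader yields (image, label_map): image is (N, 3, H, W) floats,
            # label_map is (N, H, W) integer class ids from the annotators
            for image, label_map in loader:
                optimizer.zero_grad()
                scores = model(image)                # (N, 12, H, W)
                loss = criterion(scores, label_map)  # averaged over all pixels
                loss.backward()                      # back-propagate the error
                optimizer.step()                     # turn the "million knobs"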

    “It’s remarkably good at recognising things in an image, because it’s had so much practice,” said Alex Kendall, a PhD student in the Department of Engineering. “However, there are a million knobs that we can turn to fine-tune the system so that it keeps getting better.”

    The system is not yet at the point where it can be used to control a car or truck, but it could be used as a warning system, similar to the anti-collision technologies currently available on some passenger cars.

    “Vision is our most powerful sense and driverless cars will also need to see,” said Professor Roberto Cipolla, who led the research. “But teaching a machine to see is far more difficult than it sounds.”

    As children, we learn to recognise objects through example – if we’re shown a toy car several times, we learn to recognise both that specific car and other similar cars as the same type of object. A machine, however, cannot simply be shown a single car and then recognise every other type of car; machines today learn under supervision, sometimes through thousands of labelled examples.

    There are three key technological questions that must be answered to design autonomous vehicles: where am I, what’s around me, and what do I do next? SegNet addresses the second question, while a separate but complementary system answers the first by using images to determine both precise location and orientation.

    The localisation system designed by Kendall and Prof Cipolla runs on a similar architecture to SegNet, and is able to localise a user and determine their orientation from a single colour image of a busy urban scene. The system is said to be more accurate than GPS and works in places where GPS does not, such as indoors, in tunnels, or in cities where a reliable GPS signal is not available.
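
    One way to realise this, sketched below, is to regress a six-degree-of-freedom camera pose – a 3D position plus an orientation – directly from a single image. The backbone, layer sizes and the weighting factor beta here are placeholders for illustration, not the published architecture:

        # Sketch of single-image pose regression; everything here is an
        # invented stand-in for the real, much deeper network.
        import torch
        import torch.nn as nn

        class TinyPoseNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(nn.Conv2d(3, 32, 7, stride=4),
                                              nn.ReLU(),
                                              nn.AdaptiveAvgPool2d(1),
                                              nn.Flatten())
                self.xyz = nn.Linear(32, 3)   # position in metres
                self.quat = nn.Linear(32, 4)  # orientation as a quaternion

            def forward(self, image):
                h = self.features(image)
                return self.xyz(h), self.quat(h)

        def pose_loss(xyz, quat, xyz_gt, quat_gt, beta=500.0):
            # beta balances metres against quaternion units (hand-tuned)
            q = quat / quat.norm(dim=-1, keepdim=True)  # normalise prediction
            return ((xyz - xyz_gt).norm(dim=-1) +
                    beta * (q - quat_gt).norm(dim=-1)).mean()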

    It has been tested along a kilometre-long stretch of King’s Parade in central Cambridge, and it is able to determine both location and orientation within a few metres and a few degrees.
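
    “Within a few metres and a few degrees” corresponds to the two standard error measures for a pose estimate: the Euclidean distance between predicted and true positions, and the angle between predicted and true orientations. A small helper, assuming orientations are stored as unit quaternions:

        # Position error (metres) and orientation error (degrees) between
        # a predicted and a ground-truth pose; quaternions assumed unit-length.
        import torch

        def pose_errors(xyz, xyz_gt, quat, quat_gt):
            pos_err = (xyz - xyz_gt).norm()                    # metres
            dot = (quat * quat_gt).sum().abs().clamp(max=1.0)  # |q1 . q2|
            ang_err = 2.0 * torch.rad2deg(torch.acos(dot))     # degrees
            return pos_err.item(), ang_err.item()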

    The localisation system uses the geometry of a scene to learn its precise location, and is able to determine, for example, whether it is looking at the east or west side of a building, even if the two sides appear identical.

    “In the short term, we’re more likely to see this sort of system on a domestic robot – such as a robotic vacuum cleaner, for instance,” said Prof Cipolla. “It will take time before drivers can fully trust an autonomous car, but the more effective and accurate we can make these technologies, the closer we are to the widespread adoption of driverless cars and other types of autonomous robotics.”

    Source: newelectronics

