AI specialist Baidu aims to expand its autonomous ride-hailing platform, which already covers 10 Chinese cities, including Beijing, Chongqing and Shanghai.
Artificial intelligence company Baidu has announced plans to build what it claims will be the world’s largest autonomous ride-hailing service area in 2023.
The plans outline a goal to expand the operating area of its fully driverless robotaxis, allowing Baidu to reach more potential customers.
Since August 2022, Baidu has operated fully driverless ride-hailing services in the cities of Chongqing and Wuhan, covering hundreds of square kilometres of operational area, which it will continue to expand next year.
Additionally, Baidu revealed a series of new developments, including an AI model built for autonomous driving perception, high-definition autonomous driving maps, a closed-loop autonomous driving data system, and the successful end-to-end adaptation of AI chips for autonomous vehicles.
Currently, Baidu’s autonomous ride-hailing platform Apollo Go covers more than 10 cities in China, including all first-tier cities. In the third quarter of 2022, Apollo Go completed more than 474,000 rides, up 311 per cent year on year and 65 per cent on the previous quarter.
In first-tier cities such as Beijing and Shanghai, each robotaxi on Apollo Go provides 15 rides a day on average, nearly matching the daily ride average of typical online ride-hailing services. By the end of the third quarter of 2022, Apollo Go had provided 1.4 million cumulative rides to the public.
As Baidu continues to scale up the operating area of its robotaxi service, it moves one step closer to its goal of providing autonomous driving services to more people, while further strengthening its leading position in the global autonomous ride-hailing market.
Baidu said the autonomous vehicle (AV) industry has long grappled with the “long tail” problem, in which an autonomous vehicle runs into a scenario it has not seen or experienced before. To address this problem, it has announced what it calls the industry’s first “AI big model” for autonomous driving: a pre-trained visual-language model, backed by the Baidu WenXin Big Model, that recognises thousands of objects and helps to enlarge the scope of semantic recognition.
It claims the model will enable autonomous vehicles to quickly make sense of an unseen object or vehicle, such as a special-purpose vehicle like a fire truck or ambulance.
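Baidu has not published technical details of this model. As a rough illustration of how a pre-trained visual-language model supports open-vocabulary (“zero-shot”) recognition of long-tail objects, the minimal sketch below uses the public CLIP model via Hugging Face Transformers as a stand-in; the model name, candidate labels and image URL are placeholders and are not drawn from Baidu’s stack.

```python
# Minimal sketch of open-vocabulary recognition with a vision-language model.
# This is NOT Baidu's WenXin-backed model; a public CLIP checkpoint stands in.
from PIL import Image
import requests
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate labels can include rare "long tail" classes the vehicle may never
# have seen in training; the text encoder gives them meaning without retraining.
labels = ["a fire truck", "an ambulance", "a delivery tricycle", "a sedan"]

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores become probabilities over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2%}")
```

Because the labels are just text, the same model can be asked about new object categories simply by adding them to the list, which is the property that helps with unseen, long-tail scenarios.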
Baidu’s new closed-loop data system aims to address the exponential growth of data. It has introduced the concept of “fine purification, strong ingestion” to identify and use data effectively. To purify the data, the system leverages both small on-board AI models and a large cloud-based AI model to achieve high-efficiency data mining and automated labelling.
The data ingestion architecture achieves automated training, using its group-optimisation capability and understanding of data distribution to make effective use of data and further enhance the overall intelligence of autonomous driving.
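Baidu has not described this pipeline in detail; the following is a hypothetical sketch of how a “fine purification, strong ingestion” loop could be structured, with a small on-board model mining uncertain frames and a larger cloud-side model auto-labelling them for retraining. All names, thresholds and data are invented for illustration.

```python
# Hypothetical closed-loop data sketch: on-board mining + cloud auto-labelling.
# None of these names or thresholds come from Baidu's system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    frame_id: int
    onboard_confidence: float          # confidence of the small on-board model
    label: Optional[str] = None        # filled in later by the cloud model

def onboard_mine(frames: list[Frame], threshold: float = 0.6) -> list[Frame]:
    """'Fine purification': keep only frames the small model is unsure about."""
    return [f for f in frames if f.onboard_confidence < threshold]

def cloud_auto_label(frames: list[Frame]) -> list[Frame]:
    """'Strong ingestion': a big cloud model assigns labels automatically.
    A placeholder rule stands in for the real model here."""
    for f in frames:
        f.label = "rare_object" if f.onboard_confidence < 0.3 else "known_object"
    return frames

# Simulated day of driving: most frames are easy, a few are long-tail cases.
fleet_frames = [Frame(i, c) for i, c in enumerate([0.95, 0.88, 0.52, 0.27, 0.91, 0.41])]

training_set = cloud_auto_label(onboard_mine(fleet_frames))
print(f"{len(training_set)} of {len(fleet_frames)} frames mined for retraining")
for f in training_set:
    print(f.frame_id, f.label)
```

The design point the sketch tries to capture is that only the small, informative slice of fleet data is uploaded and labelled, which keeps the loop manageable as data volumes grow exponentially.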
Baidu is also helping to bring autonomous driving technology to advanced assisted driving products. Its current technology stack enables the unification of L4 and L2+ smart driving products in terms of visual perception scheme, technical architecture, maps, data interconnection and infrastructure sharing.
Baidu envisions a mutually beneficial relationship in which L4 will continue to provide advanced technology migration for L2+ smart driving products in urban use cases, while L2 data feedback will also help to improve L4 generalisation ability.
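Baidu has not published how this sharing is implemented; purely as an illustration of the idea, the sketch below imagines a single perception message type consumed by both an L4 planner and an L2+ assist feature, with L2+ fleet logs fed back to broaden L4 training data. Every name here is invented.

```python
# Purely illustrative: one shared perception schema for L4 and L2+ consumers,
# plus a feedback step pooling L2+ fleet logs into an L4 training corpus.
from dataclasses import dataclass

@dataclass
class PerceptionOutput:
    obstacles: list[str]
    source: str  # "L4" or "L2+"

def l4_plan(p: PerceptionOutput) -> str:
    # The L4 stack consumes the same perception schema as L2+ products.
    return "yield" if "ambulance" in p.obstacles else "proceed"

def l2_assist(p: PerceptionOutput) -> str:
    return "warn_driver" if p.obstacles else "idle"

def feedback_to_l4(logs: list[PerceptionOutput]) -> list[PerceptionOutput]:
    # L2+ fleet data flows back to widen the scenarios L4 models train on.
    return [p for p in logs if p.source == "L2+" and p.obstacles]

fleet_logs = [
    PerceptionOutput(["ambulance"], "L2+"),
    PerceptionOutput([], "L2+"),
    PerceptionOutput(["fire truck"], "L4"),
]
print(l4_plan(fleet_logs[0]), l2_assist(fleet_logs[0]))
print(f"{len(feedback_to_l4(fleet_logs))} L2+ clips added to L4 training data")
```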