May 23 2019
2:45 pm - 3:45 pm
Latest & What’s Ahead in Autonomy and Robotics
Track Name: SPAR 3D
Utilizing Artificial Intelligence in Construction Planning and Scheduling
Rene Morkos, CEO of ALICE Technologies, received his Ph.D. from Stanford in artificial intelligence applications for construction. In this session, he will outline the planning and scheduling challenges facing construction today and explain why artificial intelligence is well suited to solving some of the biggest of them. Rene will cover the shortcomings of the current critical path method (CPM) of scheduling, what AI is, and how AI is being applied to problems that have plagued the construction planning process for decades. He will also show how AI can dramatically expand the ability to explore millions of construction scenarios in just minutes, a feat that would take a human being decades by comparison. This session is appropriate for construction executives and professionals responsible for increasing the efficiency of their organizations, for heads of innovation, and for anyone responsible for project planning and scheduling.
Distributed Edge Processing: the Key to Robotic Intelligence
Automakers looking to achieve quick, accurate robotic perception in a moving vehicle are pinning their hopes on powerful processors. Housed in a computer farm in the vehicle’s trunk, these processors require massive amounts of computational power to fuse and decimate hundreds of billions of data points, three quarters of which are ultimately deemed useless and thrown out. In this session, artificial perception pioneer AEye argues that centralized computing does not solve the real-time perception problem. Instead, AEye believes reliable, timely robotic intelligence requires distributed edge processing. This intelligence must begin at the sensor layer, so that filtered information is passed along to the vehicle’s path-planning software, which can quickly translate it into action. AEye will explain how this radically different way of looking at autonomous perception enables self-driving cars to make faster decisions, with higher accuracy, using only a small fraction of the power.
Intersection Modeling for Connected and Autonomous Vehicles
The world is about to undergo a tectonic shift in how we move people and their goods. The protagonist, as many of us know, is the autonomous vehicle (AV). Like the transition from the horse-drawn carriage to the combustion-engine automobile, many analysts believe that AVs will fundamentally change transportation in the future. Major automobile companies are even projecting near-future timelines for commercial vehicles with advanced Level 3 and Level 4 autonomy.

With all of the excitement surrounding AVs, questions and concerns naturally arise. Consider the signalized intersection. Since their appearance at the end of the 19th century, traffic lights have been the primary means of granting access to road intersections. Yet traffic statistics show that although intersections occupy only a tiny percentage of road area, they are where 25% to 45% of all traffic collisions occur.

Here enters a Mandli Communications solution. For AVs to become a reality, intersection models must be created and maintained, and in this presentation we will explore how we collect intersection modeling data for connected and autonomous vehicles using our state-of-the-art X35 data collection vehicle. The X35 collects high-resolution videolog images, positional data, pavement information, and LiDAR data in a variety of weather, speed, and light conditions. Its high-resolution cameras allow for close inspection of roadway infrastructure in far greater detail than standard cameras. The X35 is also equipped with a Position Orientation System (POS) that records vehicle position, velocity, altitude, track, speed, and dynamics. This data can be used to create centerline maps and to calculate geometric measurements, such as curve and grade. The system can provide latitude, longitude, and elevation data that is accurate to within +/- 1 meter. Mandli monitors its collection vehicles while deployed to efficiently manage and maintain schedules.
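To illustrate the kind of geometric measurement mentioned above, road grade can be derived directly from successive surveyed positions once they are projected into a local metric frame. The sketch below is a minimal, hypothetical helper assuming projected (easting, northing, elevation) coordinates in meters; it is not Mandli's actual processing code.

```python
import math

def grade_percent(p1, p2):
    """Percent grade between two surveyed points.

    Each point is (easting_m, northing_m, elevation_m) in a projected
    metric frame. Illustrative helper only, assuming idealized input.
    """
    run = math.hypot(p2[0] - p1[0], p2[1] - p1[1])  # horizontal distance
    rise = p2[2] - p1[2]                            # elevation change
    return 100.0 * rise / run

# 100 m of horizontal travel with a 3 m climb is a 3% grade
print(grade_percent((0.0, 0.0, 100.0), (100.0, 0.0, 103.0)))  # 3.0
```

In practice such measurements would be computed over smoothed centerline geometry rather than raw point pairs, since the stated +/- 1 meter positional accuracy makes point-to-point grades noisy over short distances.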
Effective route planning can greatly improve collection efficiency, and every project has a route/structure checklist for route planning. The Project Manager organizes the routes and prepares the information for data collection; the goal of proper route planning is to minimize deadheading and navigation errors. Once data is collected, we are able to produce an intersection model in two formats. The first is the Digital Terrain Model (DTM), an accurately aligned point cloud of the intersection with transient objects, such as vehicles, pedestrians, and other moving objects, removed. The second is the Intersection Geometry, a map of the intersection that charts where vehicles can drive: lanes, road edges, and other allowable maneuvers. Thus far, the results from leveraging these map formats have been impressive. In one pilot, for example, a city bus running behind schedule could request a green light at the next intersection on the map, which improved on-time arrivals. And of course, Mandli has also used these maps to simulate the safe passage of autonomous vehicles through intersections, in order to drive down the percentage of collisions occurring in intersections.
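One common way to remove transient objects when building a static point cloud like the DTM is to compare repeated, aligned passes over the same intersection: points from moving vehicles and pedestrians rarely occupy the same location twice, while pavement, curbs, and poles do. The sketch below is an illustrative voxel-persistence filter under that assumption; it is not Mandli's production pipeline, and the parameter names are hypothetical.

```python
from collections import Counter

def static_points(scans, voxel=0.2, min_passes=2):
    """Filter a list of aligned scans (each a list of (x, y, z) tuples
    in meters) down to points likely belonging to static structure.

    A point is kept only if its voxel was occupied in at least
    `min_passes` separate passes. Illustrative sketch only.
    """
    def vox(p):
        # Quantize a point to a coarse voxel grid cell.
        return tuple(int(c // voxel) for c in p)

    counts = Counter()
    for scan in scans:
        # Count each voxel at most once per pass.
        for v in {vox(p) for p in scan}:
            counts[v] += 1

    keep = {v for v, n in counts.items() if n >= min_passes}
    return [p for scan in scans for p in scan if vox(p) in keep]
```

For example, a point near the origin seen on two passes survives the filter, while a point seen on only one pass (say, a passing car) is dropped. Real pipelines typically add registration error handling and ground-surface modeling on top of a persistence test like this.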