Next Generation E/E
Building a Future-Proof Architecture using RTI Connext Drive
Introduction
Automobile architectures are continually evolving, and the rate of change is accelerating with the introduction of advanced driver assistance systems (ADAS) and autonomous capabilities. Unlike the era of single-function ECUs with point-to-point hardwired interconnects (sometimes numbering in the hundreds per vehicle), modern vehicle architecture is software-defined and able to take on a much larger set of tasks using fewer resources, resulting in vehicles that are lighter, safer and easier to build and upgrade.
To illustrate this trend, RTI presents an expanding set of examples that demonstrate the ease and flexibility of using RTI Connext Drive to create modern vehicle architectures that are:
- Scalable from entry-level driver assistance up to fully autonomous driving
- Expandable to include teleoperations, V2X, simulation, hardware acceleration, etc.
- Interoperable with AUTOSAR, ROS 2, CAN, and other automotive standards
- Reusable to easily adapt to different configurations, processors, operating systems, transports and programming languages — without having to re-engineer the applications
Furthermore, these examples get the full benefit of Connext Drive to meet your performance, reliability and safety certification needs.
What this Example Does
Distributed architectures built from purpose-built ECUs have been migrating toward a more centralized architecture, and further toward a truly software-defined vehicle.
But how can a software-defined vehicle be made to be ‘future-proof’?
This example set is based on the open Data Distribution Service (DDS™) standard, as implemented in Connext Drive, which provides a high-performance, safe, secure and interoperable environment for the most demanding systems in the world. RTI Connext is trusted in critical systems across many industries and supports standards such as ROS 2, AUTOSAR Classic and AUTOSAR Adaptive.
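To give a sense of what this looks like in code, the following is a minimal publisher sketch using the Connext Modern C++ API. The EgoPose type, the generated header name and the topic name are illustrative assumptions for this sketch; in practice, data types are defined in IDL and the corresponding C++ code is generated with rtiddsgen.

```cpp
// Minimal Connext Modern C++ publisher sketch (illustrative only).
// Assumes an 'EgoPose' type defined in IDL and generated with rtiddsgen,
// which produces the EgoPose.hpp header included below.
#include <dds/dds.hpp>
#include "EgoPose.hpp"   // hypothetical generated type

int main()
{
    // Join DDS domain 0; all modules in the same domain discover each other.
    dds::domain::DomainParticipant participant(0);

    // A Topic associates a name with a data type on the domain.
    dds::topic::Topic<EgoPose> topic(participant, "EgoPoseTopic");

    // The DataWriter publishes EgoPose samples to any matching DataReaders.
    dds::pub::DataWriter<EgoPose> writer(dds::pub::Publisher(participant), topic);

    EgoPose sample;            // fields would be set by the producing module
    writer.write(sample);      // delivered according to the configured QoS policies

    return 0;
}
```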
This example set models the system architecture outlined in the AVCC TR-001 reference document, which has a scalable design supporting SAE International Levels 1 through 5 of driving automation. As such, this architectural example can scale from entry-level driver assistance up to fully autonomous operation.
This example set can be quickly reconfigured to fit a centralized or zonal architecture pattern, and can be extended to support teleoperations, telematics, hardware acceleration, hardware redundancy, and operation with simulation environments.
Connext Drive provides the standards-based portability to help create a future-proof architecture that meets the needs of performance, scalability, safety and security — today and in the future.
Note that this is an architectural example, built as a high-performance framework. The logic within the application modules (for perception, localization, path planning, etc.) is outside of the scope of this example.
Understanding the Example
This example aligns with a subset of the AVCC TR-001 Conceptual Architecture, as shown in the highlighted portion below.
This example architecture supports the creation of driver-assistance features such as lane departure warning and blind-spot warning, while providing for future expansion into higher levels of autonomy. The application modules communicate over a high-performance data network provided by the Connext framework, freeing the developer to focus on creating the value within each module. The purpose of each application module is described below, followed by a sketch of how the modules might exchange data:
Perception: The perception module is responsible for interpreting sensor data (Lidar, Radar, Cameras, etc.) to detect, classify, and track objects within its field of view. It produces a sequence of identified object descriptors which are provided to the Localization and Ego Motion modules for further processing.
Localization: The localization module is responsible for determining the precise location of the Ego Vehicle and its surrounding detected objects within a larger coordinate space, typically using GNSS/GPS as a global coordinate reference. It produces an ‘Ego Pose’ value indicating the position and orientation of the vehicle.
Ego Motion: The ego motion module is responsible for converting the sequence of Ego Pose values into a set of values describing the motion of the vehicle in all axes, including the velocity and acceleration of the vehicle.
Scene Evaluation: This module is responsible for interpreting the Ego Motion, Ego Pose, detected objects and other data to detect conditions such as lane departure, objects in blind spots, lead vehicle departure and others. This module may also predict the motions of surrounding objects to enable more advanced levels of autonomy.
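As a rough sketch of the publish/subscribe wiring referenced above, the following shows how a module such as Scene Evaluation might subscribe to the outputs of the other modules using the Connext Modern C++ API. The EgoMotion and DetectedObjects types, headers and topic names are assumptions for illustration, not the actual definitions used in the example code.

```cpp
// Scene Evaluation subscriber sketch (illustrative only).
// Assumes 'EgoMotion' and 'DetectedObjects' types generated from IDL with
// rtiddsgen; the topic names are assumptions, not the example's actual names.
#include <chrono>
#include <thread>
#include <dds/dds.hpp>
#include "EgoMotion.hpp"        // hypothetical generated types
#include "DetectedObjects.hpp"

int main()
{
    dds::domain::DomainParticipant participant(0);
    dds::sub::Subscriber subscriber(participant);

    // One Topic/DataReader pair per input stream consumed by Scene Evaluation.
    dds::topic::Topic<EgoMotion> motion_topic(participant, "EgoMotionTopic");
    dds::topic::Topic<DetectedObjects> objects_topic(participant, "DetectedObjectsTopic");
    dds::sub::DataReader<EgoMotion> motion_reader(subscriber, motion_topic);
    dds::sub::DataReader<DetectedObjects> objects_reader(subscriber, objects_topic);

    while (true) {
        // take() returns loaned samples and removes them from the reader cache.
        for (const auto& sample : motion_reader.take()) {
            if (sample.info().valid()) {
                // evaluate lane departure, blind spots, etc. using sample.data()
            }
        }
        for (const auto& sample : objects_reader.take()) {
            if (sample.info().valid()) {
                // correlate detected objects with the latest ego motion
            }
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}
```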
In this example application, the sensor data used by the Perception module is provided by an additional application that generates test data. In actual use, this data would be provided by physical sensors.
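A minimal sketch of such a test-data generator is shown below. The SensorScan type, header and topic name are assumptions for illustration; the actual example may structure its test data differently.

```cpp
// Test-data generator sketch (illustrative only).
// Publishes synthetic sensor samples for the Perception module to consume.
// The 'SensorScan' type and topic name are assumptions for this sketch.
#include <chrono>
#include <thread>
#include <dds/dds.hpp>
#include "SensorScan.hpp"   // hypothetical IDL-generated type

int main()
{
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<SensorScan> topic(participant, "SensorScanTopic");
    dds::pub::DataWriter<SensorScan> writer(dds::pub::Publisher(participant), topic);

    SensorScan scan;
    while (true) {
        // Fill 'scan' with synthetic values here; in a real deployment this
        // application is replaced by drivers for the physical sensors.
        writer.write(scan);
        std::this_thread::sleep_for(std::chrono::milliseconds(100));  // ~10 Hz
    }
}
```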
Download the Example
Example code is available on GitHub. See the README.md file for instructions on how to build and run this example.
Next Steps
Post questions on the RTI Community Forum.
Check out more of the Connext product suite and learn how Connext can help you build your distributed system. If you haven't already, download the free trial.