DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office Action is in response to the amendment filed on June 23, 2025. Claims 18-36 are pending. Claims 18, 35 and 36 are independent.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on June 28, 2025, has been considered. The submission is in compliance with the provisions of 37 CFR 1.97. The Forms PTO-1449 are signed and attached hereto.
Response to Arguments
Applicants’ arguments have been fully considered and are persuasive because they are directed to the newly added claim limitations. Accordingly, the previous rejection has been withdrawn.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 18-36 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by U.S. Patent Publication No. 2019/0258251 to Ditty et al. (hereinafter “Ditty”).
With respect to independent claims 18, 35 and 36, Ditty discloses an onboard compute unit comprising: an embedded computer to run an operating system and application layer (see paragraph [0121]: Each controller is essentially one or more onboard supercomputers that can operate in real-time to process sensor signals, and output autonomous operation commands to self-drive vehicle (50) and/or assist the human vehicle driver in driving.); and
to preprocess sensor data and prepare the sensor data for input to an Artificial Intelligence (AI) accelerator chip (see paragraph [0270]: deep-learning hardware accelerator (DLA) (401) may perform preprocessing (160), inferencing on a trained neural network (162), and post-processing (164) to provide any function (166) that can be performed by a trained neural network, such as collision detection, sign detection, object detection, lane detection, or other function.);
an AI accelerator chip to process the preprocessed sensor data using machine learning (ML) systems to perform machine learning computations for object detection and estimation (see paragraph [0269]: PVA (402) may perform preprocessing (150), a computer vision algorithm (152), post-processing, to provide a classic computer vision function (156) such as collision detection, sign detection, object detection, lane detection, or any other computer-vision function.);
sensors to capture real-time traffic data related to conditions within the traffic environment (see paragraphs [0033] and [0179]: Such visual data may include any combination of videos, images, real-time or near real-time data captured by any type of camera or video recording device. Computer vision applications implement computer vision algorithms to solve high-level problems. For example, an ADAS system can implement real-time object detection algorithms to detect pedestrians/bikes, recognize traffic signs, and/or issue lane departure warnings based on visual data captured by an in-vehicle camera or video recording device. An autonomous driving system must be able to process huge amounts of data from cameras, RADAR, LIDAR, ultrasonic, infrared, GPS, IMUs, and/or HD-Maps in real-time.);
at least one trained model utilized by the embedded computer and the AI accelerator chip to analyze the real-time traffic data and to generate a risk estimation, based on the analysis of the real-time traffic data, related to a mobility platform within the traffic environment (see paragraph [0286]: the GPU complex (300) in each Advanced SoC is preferably configured to execute any number of trained neural networks, including CNNs, DNNs, and any other type of network, to perform the necessary functions for autonomous driving, including (for example and without limitation) lane detection, object detection, and/or free space detection. GPU complex (300) is further configured to run trained neural networks to perform any AI function desired for vehicle control, vehicle management, or safety, including the functions of perception, planning and control.); and
an alert mechanism to generate an alert directed to an operator of the mobility platform based on the risk estimation exceeding a threshold likelihood of collision (see paragraph [0025]: When the AEB system detects a hazard, it typically first alerts the driver to take corrective action to avoid the collision, similar to an FCW system. If the driver does not take corrective action, the AEB system may automatically apply the brakes in an effort to prevent, or at least mitigate, the impact of the predicted collision. AEB systems may include techniques such as dynamic brake support (“DBS”) and/or crash imminent braking (“CIB”). A DBS system provides a driver-warning).
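For illustration only, and not as a characterization of the claims or of Ditty's disclosure, the following minimal Python sketch shows one way the recited elements (preprocessing on an embedded computer, trained-model inference on an accelerator, and an alert gated on a threshold likelihood of collision) could fit together. All identifiers and threshold values are hypothetical.

```python
# Illustrative sketch only; all names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class RiskEstimate:
    collision_probability: float  # trained-model output in [0, 1]
    actor_id: str                 # the detected actor the estimate concerns

ALERT_THRESHOLD = 0.7  # hypothetical threshold likelihood of collision

def preprocess(raw_frame: bytes) -> list[float]:
    """Stand-in for the embedded computer's sensor preprocessing step."""
    return [b / 255.0 for b in raw_frame]  # e.g., normalize pixel intensities

def estimate_risk(features: list[float]) -> RiskEstimate:
    """Stand-in for trained-model inference on the AI accelerator chip."""
    score = min(1.0, sum(features) / max(len(features), 1))
    return RiskEstimate(collision_probability=score, actor_id="actor-0")

def maybe_alert(estimate: RiskEstimate) -> None:
    """Alert the operator only when the estimation exceeds the threshold."""
    if estimate.collision_probability > ALERT_THRESHOLD:
        print(f"ALERT: collision risk {estimate.collision_probability:.2f} "
              f"with {estimate.actor_id} exceeds {ALERT_THRESHOLD}")

maybe_alert(estimate_risk(preprocess(bytes([200] * 64))))
```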
With respect to dependent claim 19, Ditty discloses wherein the embedded computer comprises at least one of a single-board computer (SBC) or a System-on-Module (SoM) attached to a carrier board (see paragraphs [0211], [0269] and [0316]: computer vision also plays a role, for example in lane detection as well as redundant object detection at moderate distances. Thus, the SoC design preferably includes Computer Vision Accelerators (402) (VA0, VA1). The Computer Vision Accelerators (402) preferably use NVIDIA Programmable Vision Accelerator (“PVA”). Hardware Acceleration Cluster (400) can perform redundant, diverse processing in both PVA (402) and DLA (401). PVA (402) may perform preprocessing (150), a computer vision algorithm (152), post-processing, to provide a classic computer vision function (156) such as collision detection, sign detection, object detection, lane detection, or any other computer-vision function. The Advanced SoC may offload some, or all, of these tasks to the CPU complex (200), preferably including the pre-processing function (150). The Primary Computer (On-Board Computer) (100) and the Backup Computer (On-Board Computer) (200) may each be configured according to FIG. 17, FIG. 18, or FIG. 19. Alternatively, the Primary Computer (On-Board Computer) (100) may include one or more Advanced SoCs as in FIG. 17, FIG. 18, or FIG. 19, and the Backup Computer (On-Board Computer) (200) may use an older generation of processor, or another type of processor altogether.).
With respect to dependent claim 20, Ditty discloses wherein the AI accelerator chip and the embedded computer operate together to share the processing of data, with the embedded computer handling general-purpose computing tasks and the AI accelerator chip handling specialized machine learning tasks (see paragraph [0040]: the GPUs described herein are domain specific, parallel processing accelerators. While a CPU typically consists of a few cores optimized for sequential serial processing, a GPU typically has a massively parallel architecture consisting of thousands of smaller, more efficient computing cores designed for handling multiple tasks simultaneously. GPUs are used for many purposes beyond graphics, including to accelerate high performance computing, deep learning and artificial intelligence, analytics, and other engineering applications.).
With respect to dependent claim 21, Ditty discloses wherein the AI accelerator chip includes multiple cores optimized for matrix multiplication and mathematical operations used in machine learning (see paragraph [0198]: each of the four processing blocks may be allocated 16 FP32 Cores, 8 FP64 Cores, 16 INT32 Cores, two mixed-precision Tensor Cores for deep learning matrix arithmetic, an L0 instruction cache, one warp scheduler, one dispatch unit, and a 64 KB Register File. In a preferred embodiment, SMs (301) include independent parallel integer and floating-point data paths, providing for efficient execution of workloads with a mix of computation and addressing calculations.).
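Purely as an illustrative sketch, and not a description of the Tensor Cores cited above, the following Python fragment emulates the mixed-precision matrix arithmetic pattern such cores implement (low-precision multiplies with higher-precision accumulation); the array sizes and values are arbitrary.

```python
# Illustrative sketch only; emulates the mixed-precision pattern of tensor
# cores (low-precision multiplies, higher-precision accumulation).
import numpy as np

a = np.random.rand(64, 64).astype(np.float16)  # low-precision operands
b = np.random.rand(64, 64).astype(np.float16)

# Upcast before the matmul so the long dot-product sums accumulate in
# float32, mirroring the FP16-multiply/FP32-accumulate tensor-core design.
c = a.astype(np.float32) @ b.astype(np.float32)
print(c.dtype, c.shape)  # float32 (64, 64)
```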
With respect to dependent claim 22, Ditty discloses data storage components for storing sensor data and sensor data derivatives (see paragraphs [0187] and [0549]: Platform (100) further includes Storage (500), which may comprise one or more storage elements including RAM, SRAM, DRAM, VRAM, Flash, hard disks, and other components and devices that can store at least one bit of data. Storage (500) preferably includes on-chip storage, and may comprise L2 or L3 caches for use with the CPU (200) and/or the GPU (300). The derivatives are accumulated into an expression such as, for example, a 6×6 3D pose update equation. The procedure is iterated. In some non-limiting embodiments, this function supports both advanced trajectory estimation 3030a and obstacle detection 3010 using LIDAR. LIDAR Ground Plane 3154.).
With respect to dependent claim 23, Ditty discloses wherein the alert mechanism is further configured to generate a second alert directed at an operator of a further actor within the traffic environment (see paragraph [0672]: For example, Map (212) identifies areas with overpasses, tree-lines, and open fields, which are associated with large gusts; as the truck approaches those areas, controller (100) anticipates the gusts and reduces speed. Additional actions include increasing following distance to vehicles, and engaging the self-driving truck's hazard lights, to alert other vehicles to the potentially hazardous condition.).
With respect to dependent claim 24, Ditty discloses wherein the mobility safety system selectively records real-time data related to traffic conditions based on the risk estimation (see paragraphs [0179] and [0265]: These examples are only a few of the possible sensors and systems that may be used to achieve full Level 3-5 performance at ASIL D safety levels. An autonomous driving system must be able to process huge amounts of data from cameras, RADAR, LIDAR, ultrasonic, infrared, GPS, IMUs, and/or HD-Maps in real-time, and generate commands to control the car safely, reliably, and comfortably. Performing the same object detection function in different and independent ways, in different and independent hardware components, enhances the ASIL safety rating of the system using the Advanced SoC and is called “ASIL Decomposition.” As shown in FIG. 14, ASIL Decomposition lowers the ASIL requirement for a specific element by providing for redundant elements in the architecture. Thus, a single function that requires ASIL D functional safety can be implemented by two redundant, independent components, for example one at ASIL C and the backup at ASIL A. The combination of the two redundant components, each with an ASIL level less than ASIL D, provides an overall functional safety level of ASIL D.).
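For illustration only, a minimal Python sketch of risk-gated selective recording of the kind claim 24 recites; the buffer size, threshold, and frame fields are hypothetical and do not come from Ditty or the claims.

```python
# Illustrative sketch only; all names and thresholds are hypothetical.
from collections import deque

RECORD_THRESHOLD = 0.5         # hypothetical risk level that triggers recording
pre_event = deque(maxlen=300)  # rolling buffer (~10 s of frames at 30 Hz)
recorded = []

def on_frame(frame: dict, risk: float) -> None:
    """Buffer every frame, but persist data only when the risk estimation
    warrants it, so routine driving is not recorded."""
    pre_event.append(frame)
    if risk >= RECORD_THRESHOLD:
        recorded.extend(pre_event)  # flush the pre-event context to storage
        pre_event.clear()

on_frame({"t": 0, "speed_mps": 12.0}, risk=0.2)  # buffered only
on_frame({"t": 1, "speed_mps": 12.4}, risk=0.8)  # triggers recording
print(len(recorded))  # 2: both frames persisted
```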
With respect to dependent claim 25, Ditty discloses wherein the onboard compute unit is to selectively perform computations based on the risk estimation to mitigate risk and ensure safety (see paragraph [0190]: the SoC is an AI supercomputer, designed for use in self-driving cars with specific features optimized for L3-5 functionality. The SoC preferably is designed to meet critical automotive standards, such as the ISO 26262 functional safety specification. In a preferred embodiment, the Advanced SoC has at least an ASIL C functional safety level.).
With respect to dependent claim 26, Ditty discloses wherein the mobility safety system records only sensor data related to a location and movement of an actor in the traffic environment when the risk estimation indicates a threshold probability of a collision with the actor (see paragraphs [0267] and [0268]: the accelerators (401), (402) together perform functions (e.g., detection of imminent collision, lane departure warning, pedestrian detection) that, when combined with the autonomous driving functions performed by GPU complex (300), together provide ASIL D level functional safety. The GPU complex (300) and DLA accelerator (401) may use deep neural networks and CNNs to process information from vehicle motion sensors such as the inertial sensing system and possibly other input from vehicular semi-autonomous systems (SAS) (82) or ADAS systems.).
With respect to dependent claim 27, Ditty discloses wherein the alert mechanism is to selectively provide feedback to the operator via a user interface based on the risk estimation (see paragraph [0125]: The HMI display may provide the vehicle occupants with information regarding maps and the vehicle's location, the location of other vehicles (including an occupancy grid), and even the Controller's identification of objects and status. For example, HMI display (86) may alert the passenger when the controller has identified the presence of a stop sign, caution sign, or changing traffic light and is taking appropriate action, giving the vehicle occupants peace of mind that the controller is functioning as intended.).
With respect to dependent claim 28, Ditty discloses wherein the mobility safety system is to reduce power consumption by selectively activating and deactivating components based on the risk estimation (see paragraphs [0445] and [0448]: The hardware independence of the safety MCU 5020 from the processor SOC(s) executing the application processes 5002 minimizes the risk that hardware faults affecting the processor SOC 5054 will also affect the safety MCU. For example, in some implementations the MCU 5020 receives separate power or battery backup so that power faults affecting the processor SOC executing the application processes 5002 will not affect the safety MCU 5020. L1SS safety supervisor 5014(1) also communicates with a boot and power management processor 5060 via a boot server 4010(A), and monitors the operation of the boot server using an associated watchdog 5010(A). The boot and power management processor 5060 controls resets and clocks. While the L2SS safety supervisor 5014(2) could in some embodiments communicate directly with the boot and power management processor 5060, in the embodiment shown the L2SS safety supervisor communicates with SOC 5054 only via the L1SS safety supervisor 5014(1) and the Boot server 4010(A).).
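For illustration only, a minimal Python sketch of reducing power by activating components only when the risk estimation justifies them; the component names and risk levels are hypothetical and do not correspond to Ditty's power-management hardware.

```python
# Illustrative sketch only; component names and risk levels are hypothetical.
class Component:
    def __init__(self, name: str, min_risk: float):
        self.name, self.min_risk, self.active = name, min_risk, False

COMPONENTS = [
    Component("primary_camera_stack", min_risk=0.0),  # always powered
    Component("secondary_gpu",        min_risk=0.4),  # elevated risk only
    Component("redundant_dla",        min_risk=0.7),  # imminent-hazard only
]

def apply_power_policy(risk: float) -> None:
    """Power on only the components the current risk estimation justifies."""
    for c in COMPONENTS:
        c.active = risk >= c.min_risk

apply_power_policy(0.5)
print([c.name for c in COMPONENTS if c.active])
# ['primary_camera_stack', 'secondary_gpu']
```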
With respect to dependent claim 29, Ditty discloses wherein the mobility safety system further comprises a user interface to display information about a current risk level and suggestions for actions to reduce that risk (see paragraphs [0125] and [0267]: Controller's identification of objects and status. For example, HMI display (86) may alert the passenger when the controller has identified the presence of a stop sign, caution sign, or changing traffic light and is taking appropriate action, giving the vehicle occupants peace of mind that the controller is functioning as intended. The accelerators (401), (402) together perform functions (e.g., detection of imminent collision, lane departure warning, pedestrian detection) that, when combined with the autonomous driving functions performed by GPU complex (300), together provide ASIL D level functional safety. Furthermore, in the event of a failure of one of the accelerators (401) or (402), the combination of the functioning accelerator and the GPU complex (300) together ensures safe operation. In the event of the failure of both accelerators (401) and (402), the system returns a fault message that service is required, notifies the driver, and executes a transition routine that returns control to the driver.).
With respect to dependent claim 30, Ditty discloses wherein the mobility safety system is configured to automatically detect a pedestrian on a shared use path and to emit a warning to alert the pedestrian (see paragraphs [0033] and [0048]: an ADAS system can implement real-time object detection algorithms to detect pedestrians/bikes, recognize traffic signs, and/or issue lane departure warnings based on visual data captured by an in-vehicle camera or video recording device. Autonomous vehicles must have networks trained and focused on specific tasks like pedestrian detection, lane detection, sign reading, collision avoidance and many more. Even if a single combination of neural networks could achieve Level 3-5 functionality, the “black box” nature of neural networks makes achieving ASIL D functionality impractical.).
With respect to dependent claim 31, Ditty discloses wherein the mobility safety system is to detect a non-human actor in the traffic environment and the alert mechanism is configured to emit warnings to discourage the non-human actor from engaging with the mobility platform (see paragraphs [0150], [0151] and [0267]: A gated active system uses a pulsed infrared light source and a synchronized infrared camera. Because an active system uses an infrared light source, it does not perform as well in detecting living objects such as pedestrians, bicyclists, and animals. Passive infrared systems perform well at detecting living objects. Typical infrared camera functional safety levels are ASIL B. The accelerators (401), (402) together perform functions (e.g., detection of imminent collision, lane departure warning, pedestrian detection) that, when combined with the autonomous driving functions performed by GPU complex (300), together provide ASIL D level functional safety.).
With respect to dependent claim 32, Ditty discloses wherein the mobility safety system includes software algorithms to determine that a sensor is obstructed due to obscurants and wherein the mobility safety system is to alert the operator of the obstruction (see paragraph [0346]: Autonomous driving generally involves multiple sensors, and it is crucial to detect sensor system blockage because the system requires reliable information. When a camera is blocked, the image generally contains a blurred region with a low amount of detail. Generally speaking, sensor blockage detection can be considered as a pattern recognition problem, and a neural network may be trained to detect sensor failure by seeking to identify a blurred region.).
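For illustration only, the following Python sketch detects a blocked camera with the variance-of-Laplacian blur heuristic, a classic alternative to the trained-network approach Ditty describes; the threshold is hypothetical and would be tuned per camera.

```python
# Illustrative sketch only. Variance-of-Laplacian blur heuristic; a blocked
# camera yields a low-detail frame, hence a low Laplacian response variance.
import numpy as np

BLUR_THRESHOLD = 100.0  # hypothetical; tuned per camera in practice

def laplacian_variance(gray: np.ndarray) -> float:
    """3x3 Laplacian response variance; low variance means low detail."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(gray[i:i + 3, j:j + 3] * k)
    return float(out.var())

def sensor_blocked(gray: np.ndarray) -> bool:
    return laplacian_variance(gray.astype(np.float64)) < BLUR_THRESHOLD

rng = np.random.default_rng(0)
print(sensor_blocked(rng.uniform(0, 255, (32, 32))))  # detailed frame: False
print(sensor_blocked(np.full((32, 32), 128.0)))       # uniform frame: True
```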
With respect to dependent claim 33, Ditty discloses wherein the mobility safety system provides a lane deviation warning by analyzing sensor data to determine a position of the mobility safety system with respect to lane markings and triggering a warning message using the alert mechanism when the mobility safety system deviates from a lane (see paragraphs [0335] and [0716]: A suitable ADAS SoC is designed to be used for Lane Departure Warning (“LDW”), alerting the driver of unintended/unindicated lane departure. The ADAS system may include (for example and without limitation) a lane departure warning unit, implemented with a field programmable gate array (FPGA) and a stereo video camera. The lane departure warning unit may be implemented in other suitable embodiments as an FPGA and a monocular camera unit, and/or a processor and a monocular camera unit.).
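For illustration only, a minimal Python sketch of the recited lane deviation check, warning when the platform's lateral offset approaches a lane boundary without a signaled lane change; the geometry and margin values are hypothetical.

```python
# Illustrative sketch only; lane geometry and margin are hypothetical.
LANE_HALF_WIDTH_M = 1.75  # half of a nominal 3.5 m lane
WARNING_MARGIN_M = 0.30   # warn this far before the boundary

def lane_departure_warning(lateral_offset_m: float, signaling: bool) -> bool:
    """Warn when the platform's offset from lane center approaches the lane
    boundary without a turn signal (i.e., an unintended departure)."""
    near_boundary = abs(lateral_offset_m) > LANE_HALF_WIDTH_M - WARNING_MARGIN_M
    return near_boundary and not signaling

print(lane_departure_warning(0.2, signaling=False))  # False: well centered
print(lane_departure_warning(1.6, signaling=False))  # True: drifting out
print(lane_departure_warning(1.6, signaling=True))   # False: intended change
```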
With respect to dependent claim 34, Ditty discloses wherein the mobility safety system includes an emergency stop capability that is triggered responsive to a threshold probability of a collision, and wherein the mobility safety system is to engage a brake of the mobility platform to perform an emergency stop (see paragraph [0776]: generating a first control signal responsive to the input; receiving a second signal from one or more of: 1) an Auto Emergency Braking unit, 2) a Forward Crash Warning unit, 3) a Lane Departure Warning, 4) a Collision Warning Unit, and 5) a blind spot warning unit; evaluating whether the first control signal conflicts with the second signal; and controlling one or more vehicle actuators responsive to the evaluation.).
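For illustration only, a minimal Python sketch of a warn-then-brake emergency stop gated on a threshold collision probability, paralleling the AEB sequence quoted in the rejection of claims 18, 35 and 36 above; all interface names and threshold values are hypothetical.

```python
# Illustrative sketch only; interface names and thresholds are hypothetical.
ESTOP_THRESHOLD = 0.9  # collision probability that triggers the stop
WARN_THRESHOLD = 0.7   # earlier FCW-style warning level

class BrakeActuator:
    def engage(self, force: float) -> None:
        print(f"brake engaged at {force:.0%} force")

def check_emergency_stop(collision_probability: float,
                         brake: BrakeActuator) -> None:
    """Warn first; escalate to full braking past the e-stop threshold."""
    if collision_probability >= ESTOP_THRESHOLD:
        brake.engage(1.0)  # crash-imminent braking: full brake force
    elif collision_probability >= WARN_THRESHOLD:
        print("warning: corrective action required")

check_emergency_stop(0.75, BrakeActuator())  # warning only
check_emergency_stop(0.95, BrakeActuator())  # engages the brake
```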
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEMETRA R SMITH-STEWART whose telephone number is (571)270-3965. The examiner can normally be reached 10am - 6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Nolan can be reached at 571-270-7016. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DEMETRA R SMITH-STEWART/Examiner, Art Unit 3661
/PETER D NOLAN/Supervisory Patent Examiner, Art Unit 3661