DETAILED ACTION
This office action is in response to the amendment filed on 11/24/2025. This action is made Final.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Claims 14 and 21-22 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected species, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 05/14/2025.
Response to Amendment
The amendment filed on 11/24/2025 has been entered. Claims 1-4, 7-8, 10-17, and 19-22 remain pending in the application. Claims 14 and 21-22 are withdrawn from consideration. The previous rejection under 35 U.S.C. 101 has been withdrawn in view of Applicant’s amendment.
Response to Arguments
Applicant’s arguments with respect to the rejection of claims 1 and 17 under 35 U.S.C. 103 have been considered but are moot in view of the new grounds of rejection necessitated by Applicant’s amendment.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 7-8, 10, 13, 15-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pollach et al. (Publication No. US 20180067488 A1; hereinafter Pollach) in view of Panigrahi et al. (Publication No. US 20230037228 A1; hereinafter Panigrahi), further in view of Jung et al. (Publication No. US 20190137294 A1; hereinafter Jung), and further in view of Deshpande et al. (Publication No. US 20180032605 A1; hereinafter Deshpande).
Regarding claim 1, Pollach teaches
A method for testing a vehicular driver assistance system, the method comprising:
obtaining sensor data captured by a sensor of a vehicle equipped with the vehicular driver assistance system ([Par. 0015-0016], “The autonomous driving system 100 can include a sensor system 110 having multiple sensors, each of which can measure different portions of the environment surrounding the vehicle and output the measurements as raw measurement data 115. The raw measurement data 115 can include characteristics of light, electromagnetic waves, or sound captured by the sensors, such as an intensity or a frequency of the light, electromagnetic waves, or the sound, an angle of reception by the sensors, a time delay between a transmission and the corresponding reception of the light, electromagnetic waves, or the sound, a time of capture of the light, electromagnetic waves, or sound, or the like. [0016] The sensor system 110 can include multiple different types of sensors, such as image capture devices 111, Radio Detection and Ranging (Radar) devices 112, Light Detection and Ranging (Lidar) devices 113, ultra-sonic devices 114, microphones, infrared or night-vision cameras, time-of-flight cameras, cameras capable of detecting and transmitting differences in pixel intensity, or the like.”), the vehicular driver assistance system comprising a processor for processing captured sensor data; ([Par. 0025], “The sensor fusion system 300, in some embodiments, can generate feedback signals 116 to provide to the sensor system 110. The feedback signals 116 can be configured to prompt the sensor system 110 to calibrate one or more of its sensors.”)
obtaining annotations of the captured sensor data, the annotations representing a predicted output of the processor when the processor is processing the captured sensor data for the vehicular driver assistance system; ([Par. 0027], “The situational awareness system can analyze portions of the environmental model 121 to determine a situational annotation of the vehicle. The situational awareness system may include a classifier, such as a neural network or other machine learning module. The situational annotation of the vehicle can include a classification of the vehicle surroundings (e.g., locations of other cars, landmarks, pedestrian, stoplights, intersections, highway exits, weather, visibility, etc.), the state of the vehicle (e.g., speed, acceleration, path of the vehicle, etc.), and/or any other data describing the situation in which the vehicle is driving or will be driving in the near future.” Wherein the “situational annotation” can be seen as a predicted output of the processor for ADAS functionality.)
storing the captured sensor data and the annotated sensor data at data storage; ([Par. 0064], “the annotated environmental model, parts of the annotated environmental model, situational annotation, and/or other data can be stored to a memory associated with the vehicle (such as memory system 330 of FIG. 3). The stored portions of the environmental model and situational annotation can be used to evaluate an incident associated with the vehicle.”)
generating analysis data based on statistical analysis of the captured sensor data and statistical analysis of the annotated sensor data; ([Par. 0064], “The stored portion of the environmental model and situational annotation can be passed to backbone infrastructure and/or other entity for analysis. The stored data associated with the event can be used to improve the functionality of an automated driving system (e.g., to avoid the incident in future). Updates to a situational awareness system, a driving functionality system, and/or other system can be made based on the data and provided to vehicles in, for example, an over-the-air update. In some cases, the stored portion of the environmental model and situational annotation associated with an incident may also be used by an insurance company, vehicle safety organization, the police, or other entity to analyze the incident.”)
storing the analysis data at a results database; ([Par. 0064], “The stored portion of the environmental model and situational annotation can be passed to backbone infrastructure and/or other entity for analysis. The stored data associated with the event can be used to improve the functionality of an automated driving system (e.g., to avoid the incident in future). Updates to a situational awareness system, a driving functionality system, and/or other system can be made based on the data and provided to vehicles in, for example, an over-the-air update.”; [Par. 0076], “the situational awareness system provides data associated with the uncertain situation to an external node. Data describing the uncertain or new situation, such as environmental model data, predicted environmental model data, localization information, region of interest information, situational annotation, and/or any other information related to the uncertain situation is provided to an external node. Depending on a level of uncertainty in a specific situation, an appropriate amount of situational context can be uploaded to the external node.” This is interpreted to mean that the stored data is used for further processing and storage, which implies that the analysis result is stored in a database in the backbone system.) and
generating, using the stored analysis data, a key performance indicator (KPI) report, ([Par. 0064], “The stored portions of the environmental model and situational annotation can be used to evaluate an incident associated with the vehicle. In one example, a vehicle may be involved in an incident, such as an accident, a near miss, or the like, and a stored portion of the environmental model and situational annotation can be used to analyze the incident.”)
Pollach teaches analyzing the sensor data and the annotated data and generating evaluation results as described above, but does not explicitly disclose wherein the KPI report comprises a dynamic graphic representation based on the analysis data.
However, Panigrahi teaches wherein the KPI report comprises a dynamic graphic representation based on the analysis data. ([Par. 0096], “While FIG. 7A, with reference to FIGS. 1 through 6B, depicts a graphical representation illustrating a resource balance profile for the RR scheme, FIG. 7B, with reference to FIGS. 1 through 7A, depicts a graphical representation illustrating a balance report for the KPI-aware allocation scheme, in accordance with an embodiment of the present disclosure. Graph for instantaneous balance resource is plotted against the maximum resource capacity (Cap) set against each of the application types. For instance, resource cap for the eMBB, URLLC, mMTC type are 60, 20 and 20 units, respectively. The balance resource values are always less than the Cap because of the lesser number of users in the system. Similarly, the average percentage of resource utilization for different sCATs have shown in FIGS. 7A and 7B for RR and KPI-aware schemes, respectively.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Pollach to incorporate the teaching of Panigrahi. The modification would have been obvious because integrating Panigrahi’s KPI-driven evaluation and visualization into Pollach’s ADAS framework enhances performance assessment and data efficiency, aligning with industry trends toward standardized, data-driven testing of safety-critical systems. A skilled artisan would find this combination feasible with a reasonable expectation of success, as both references share data-driven optimization goals, making Panigrahi’s tools adaptable to Pollach’s ADAS context.
The combination of Pollach and Panigrahi teaches generating the KPI report based on stored sensor data as described above, but does not explicitly disclose wherein the dynamic graphic representation comprises a three-dimensional model of the equipped vehicle and at least one other object generated from the captured sensor data to visualize a driving scene.
However, Jung teaches wherein the dynamic graphic representation comprises a three-dimensional model of the equipped vehicle and at least one other object generated from the captured sensor data to visualize a driving scene. ([Par. 0012], “The generating of the 3D virtual route may include generating a segmentation image based on image data acquired from a camera sensor among the sensors, detecting objects included in the segmentation image, generating the driving environment model based on depth values of the objects and a driving lane of the vehicle identified from the objects, and generating the 3D virtual route by registering the driving environment model and the position of the vehicle in the map information.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Pollach and Panigrahi to incorporate the teaching of Jung. The modification would have been obvious because generating a three-dimensional (3D) representation of the driving scene enables enhanced visualization of the surrounding environment for the driver and/or passenger, thereby improving situational awareness and facilitating recognition of the current driving conditions.
The combination of Pollach, Panigrahi and Jung teaches processing the sensor data as described above, but does not explicitly disclose using a MapReduce technique to generate analysis data.
However, Deshpande teaches using a MapReduce technique to generate analysis data. ([Par. 0078], “the intermediary computing device 105 (or component such as the analytics platform appliance 110) can include a Pig or MapReduce programming tool, with a Hadoop framework used to execute the data analysis pipeline (e.g., from source to display). The display data structure or other data that gets displayed by the end user computing device 125 (e.g., on dashboards) can be stored in PostgreSQL (or Postgres) or other relational database management system, and visualization can be accomplished using JavaScript or a third party tool such as HighCharts.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Pollach, Panigrahi, and Jung to incorporate the teaching of Deshpande. The modification would have been obvious because the use of a MapReduce framework allows large volumes of data to be processed in parallel across multiple computing nodes, improving computational efficiency, scalability, and overall data processing performance.
Regarding claim 2, the combination of Pollach, Panigrahi, Jung and Deshpande teaches the method of claim 1.
Pollach further teaches wherein a distributed computing system performs the statistical analysis. ([Par. 0064], “a stored portion of the environmental model and situational annotation can be used to analyze the incident. The stored portion of the environmental model and situational annotation can be passed to backbone infrastructure and/or other entity for analysis. The stored data associated with the event can be used to improve the functionality of an automated driving system (e.g., to avoid the incident in future).” Wherein the “backbone infrastructure” corresponds to the “distributed computing system”)
Regarding claim 3, the combination of Pollach, Panigrahi, Jung and Deshpande teaches the method of claim 1.
Pollach further teaches wherein the annotated sensor data and the results database are stored on distributed storage. ([Par. 0064], “a stored portion of the environmental model and situational annotation can be used to analyze the incident. The stored portion of the environmental model and situational annotation can be passed to backbone infrastructure and/or other entity for analysis. The stored data associated with the event can be used to improve the functionality of an automated driving system (e.g., to avoid the incident in future).” This should be understood as the data is passed and stored at the backbone infrastructure.)
Regarding claim 4, the combination of Pollach, Panigrahi, Jung and Deshpande teaches the method of claim 1.
Pollach further teaches wherein the sensor comprises a camera, and wherein the captured sensor data comprises image data captured by the camera. ([Par. 0016], “The sensor system 110 can include multiple different types of sensors, such as image capture devices 111, Radio Detection and Ranging (Radar) devices 112, Light Detection and Ranging (Lidar) devices 113, ultra-sonic devices 114, microphones, infrared or night-vision cameras, time-of-flight cameras, cameras capable of detecting and transmitting differences in pixel intensity, or the like.”)
Regarding claim 7, the combination of Pollach, Panigrahi, Jung and Deshpande teaches the method of claim 1.
Jung further teaches wherein the three-dimensional model of the at least one other object is located relative to the three-dimensional model of the equipped vehicle based on the captured sensor data. ([Par. 0012], “The generating of the 3D virtual route may include generating a segmentation image based on image data acquired from a camera sensor among the sensors, detecting objects included in the segmentation image, generating the driving environment model based on depth values of the objects and a driving lane of the vehicle identified from the objects, and generating the 3D virtual route by registering the driving environment model and the position of the vehicle in the map information.”)
Regarding claim 8, the combination of Pollach, Panigrahi, Jung and Deshpande teaches the method of claim 1.
Pollach further teaches wherein the at least one other object comprises at least one selected from the group consisting of (i) another vehicle, (ii) a pedestrian, and (iii) a lane marker. ([Par. 0027], “The situational awareness system can analyze portions of the environmental model 121 to determine a situational annotation of the vehicle. The situational awareness system may include a classifier, such as a neural network or other machine learning module. The situational annotation of the vehicle can include a classification of the vehicle surroundings (e.g., locations of other cars, landmarks, pedestrian, stoplights, intersections, highway exits, weather, visibility, etc.), the state of the vehicle (e.g., speed, acceleration, path of the vehicle, etc.), and/or any other data describing the situation in which the vehicle is driving or will be driving in the near future.”)
Regarding claim 10, the combination of Pollach, Panigrahi, Jung and Deshpande teaches the method of claim 1.
Pollach further teaches wherein generating the analysis data comprises executing a data analytics engine within a cloud container. ([Par. 0064], “a stored portion of the environmental model and situational annotation can be used to analyze the incident. The stored portion of the environmental model and situational annotation can be passed to backbone infrastructure and/or other entity for analysis. The stored data associated with the event can be used to improve the functionality of an automated driving system (e.g., to avoid the incident in future).”; [Par. 0076], “Depending on a level of uncertainty in a specific situation, an appropriate amount of situational context can be uploaded to the external node. The external node can include an external server, backbone system, and/or other node in an infrastructure associated with the situational awareness system and/or AD/ADAS systems.” Wherein the “external node” corresponds to the “cloud container.”)
Regarding claim 13, the combination of Pollach, Panigrahi, Jung and Deshpande teaches the method of claim 1.
Panigrahi further teaches wherein the KPI report comprises, for each respective test case of a plurality of test cases, a result of the respective test case. ([Par. 0095], “FIGS. 5A and 5B, with reference to FIGS. 1 through 4B, depict graphical representation illustrating throughput hit and miss percentage for Round Robin (RR) and KPI-aware allocation schemes, in accordance with an embodiment of the present disclosure. As it can be evident from FIGS. 4A through 5B, by applying the slice creation and allocation method as described in FIG. 3, miss percentage can be reduced significantly. Moreover, in several scenarios, KPI-aware allocation gives higher hit percentage than RR allocation scheme. This is because in KPI-aware, the system 100 schedules as per the KPI budgets”; [Par. 0096], “While 7A, with reference to FIGS. 1 through 6B, depicts a graphical representation illustrating a resource balance profile for the RR scheme, FIG. 7B, with reference to FIGS. 1 through 7A, depicts a graphical representation illustrating a balance report for the KPI-aware allocation scheme, in accordance with an embodiment of the present disclosure. Graph for instantaneous balance resource is plotted against the maximum resource capacity (Cap) set against each of the application types.”)
Regarding claim 15, the combination of Pollach, Panigrahi, Jung and Deshpande teaches the method of claim 1.
Pollach further teaches wherein the vehicular driver assistance system comprises one selected from the group consisting of (i) traffic sign recognition, (ii) headlamp control, (iii) pedestrian detection, (iv) collision avoidance, and (v) lane marker detection. ([Par. 0027], “The situational awareness system can analyze portions of the environmental model 121 to determine a situational annotation of the vehicle. The situational awareness system may include a classifier, such as a neural network or other machine learning module. The situational annotation of the vehicle can include a classification of the vehicle surroundings (e.g., locations of other cars, landmarks, pedestrian, stoplights, intersections, highway exits, weather, visibility, etc.), the state of the vehicle (e.g., speed, acceleration, path of the vehicle, etc.), and/or any other data describing the situation in which the vehicle is driving or will be driving in the near future.”)
Regarding claim 16, the combination of Pollach, Panigrahi, Jung and Deshpande teaches the method of claim 1.
Pollach further teaches wherein the method further comprises training, using the annotated sensor data, a machine learning model of the vehicular driver assistance system. ([Par. 0060], “The annotated environmental model or portions thereof can be provided as input to a machine learning module, such as a neural network or other machine learning module. The machine learning module can classify a situation of the vehicle based on the annotated environmental model. The machine learning module can be trained to classify various situations that the vehicle encounters.”)
Regarding claim 17, Pollach teaches
A method for testing a vehicular driver assistance system, the method comprising:
obtaining image data captured by a forward-viewing camera disposed at a windshield of a vehicle equipped with the vehicular driver assistance system, ([Par. 0016], “The sensor system 110 can include multiple different types of sensors, such as image capture devices 111, Radio Detection and Ranging (Radar) devices 112, Light Detection and Ranging (Lidar) devices 113, ultra-sonic devices 114, microphones, infrared or night-vision cameras, time-of-flight cameras, cameras capable of detecting and transmitting differences in pixel intensity, or the like. An image capture device 111, such as one or more cameras, can capture at least one image of at least a portion of the environment surrounding the vehicle. The image capture device 111 can output the captured image(s) as raw measurement data 115, which, in some embodiments, can be unprocessed and/or uncompressed pixel data corresponding to the captured image(s).”) the vehicular driver assistance system comprising an image processor for processing captured image data; ([Par. 0025], “The sensor fusion system 300, in some embodiments, can generate feedback signals 116 to provide to the sensor system 110. The feedback signals 116 can be configured to prompt the sensor system 110 to calibrate one or more of its sensors.”)
obtaining annotations of the captured image data, the annotations representing a predicted output of the image processor when processing the captured image data for the vehicular driver assistance system; ([Par. 0027], “The situational awareness system can analyze portions of the environmental model 121 to determine a situational annotation of the vehicle. The situational awareness system may include a classifier, such as a neural network or other machine learning module. The situational annotation of the vehicle can include a classification of the vehicle surroundings (e.g., locations of other cars, landmarks, pedestrian, stoplights, intersections, highway exits, weather, visibility, etc.), the state of the vehicle (e.g., speed, acceleration, path of the vehicle, etc.), and/or any other data describing the situation in which the vehicle is driving or will be driving in the near future.” Wherein the “situational annotation” can be seen as a predicted output of the image processor for ADAS functionality.)
storing the captured image data and the annotated image data at data storage; ([Par. 0064], “the annotated environmental model, parts of the annotated environmental model, situational annotation, and/or other data can be stored to a memory associated with the vehicle (such as memory system 330 of FIG. 3). The stored portions of the environmental model and situational annotation can be used to evaluate an incident associated with the vehicle.”)
generating analysis data based on statistical analysis of the captured image data and statistical analysis of the annotated image data; ([Par. 0064], “The stored portion of the environmental model and situational annotation can be passed to backbone infrastructure and/or other entity for analysis. The stored data associated with the event can be used to improve the functionality of an automated driving system (e.g., to avoid the incident in future). Updates to a situational awareness system, a driving functionality system, and/or other system can be made based on the data and provided to vehicles in, for example, an over-the-air update. In some cases, the stored portion of the environmental model and situational annotation associated with an incident may also be used by an insurance company, vehicle safety organization, the police, or other entity to analyze the incident.”)
storing the analysis data at a results database; ([Par. 0064], “The stored portion of the environmental model and situational annotation can be passed to backbone infrastructure and/or other entity for analysis. The stored data associated with the event can be used to improve the functionality of an automated driving system (e.g., to avoid the incident in future). Updates to a situational awareness system, a driving functionality system, and/or other system can be made based on the data and provided to vehicles in, for example, an over-the-air update.”; [Par. 0076], “the situational awareness system provides data associated with the uncertain situation to an external node. Data describing the uncertain or new situation, such as environmental model data, predicted environmental model data, localization information, region of interest information, situational annotation, and/or any other information related to the uncertain situation is provided to an external node. Depending on a level of uncertainty in a specific situation, an appropriate amount of situational context can be uploaded to the external node.” This is interpreted to mean that the stored data is used for further processing and storage, which implies that the analysis result is stored in a database in the backbone system.) and
generating, using the stored analysis data, a key performance indicator (KPI) report. ([Par. 0064], “The stored portions of the environmental model and situational annotation can be used to evaluate an incident associated with the vehicle. In one example, a vehicle may be involved in an incident, such as an accident, a near miss, or the like, and a stored portion of the environmental model and situational annotation can be used to analyze the incident.”; [Par. 0060], “a machine learning approach is used to classify the situational annotation of the vehicle. The annotated environmental model or portions thereof can be provided as input to a machine learning module, such as a neural network or other machine learning module. The machine learning module can classify a situation of the vehicle based on the annotated environmental model. The machine learning module can be trained to classify various situations that the vehicle encounters. For example, the machine learning module can be trained using any suitable machine learning training approach (e.g., supervised, unsupervised learning, etc.) on labeled data. The data used to train the machine learning module may include sensor data from multiple modalities, which can include data from multiple sensors, multiple levels of data (e.g., low level data, high level data),data from external sources, and the like.”)
Pollach teaches analyzing the sensor data and the annotated data and generating evaluation results as described above, but does not explicitly disclose wherein the KPI report comprises a dynamic graphic representation based on the analysis data.
However, Panigrahi teaches wherein the KPI report comprises a dynamic graphic representation based on the analysis data. ([Par. 0096], “While FIG. 7A, with reference to FIGS. 1 through 6B, depicts a graphical representation illustrating a resource balance profile for the RR scheme, FIG. 7B, with reference to FIGS. 1 through 7A, depicts a graphical representation illustrating a balance report for the KPI-aware allocation scheme, in accordance with an embodiment of the present disclosure. Graph for instantaneous balance resource is plotted against the maximum resource capacity (Cap) set against each of the application types. For instance, resource cap for the eMBB, URLLC, mMTC type are 60, 20 and 20 units, respectively. The balance resource values are always less than the Cap because of the lesser number of users in the system. Similarly, the average percentage of resource utilization for different sCATs have shown in FIGS. 7A and 7B for RR and KPI-aware schemes, respectively.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Pollach to incorporate the teaching of Panigrahi. The modification would have been obvious because integrating Panigrahi’s KPI-driven evaluation and visualization into Pollach’s ADAS framework enhances performance assessment and data efficiency, aligning with industry trends toward standardized, data-driven testing of safety-critical systems. A skilled artisan would find this combination feasible with a reasonable expectation of success, as both references share data-driven optimization goals, making Panigrahi’s tools adaptable to Pollach’s ADAS context.
The combination of Pollach and Panigrahi teaches generating the KPI report based on stored sensor data as described above, but does not explicitly disclose wherein the dynamic graphic representation comprises a three-dimensional model of the equipped vehicle and at least one other object generated from the captured sensor data to visualize a driving scene.
However, Jung teaches wherein the dynamic graphic representation comprises a three-dimensional model of the equipped vehicle and at least one other object generated from the captured sensor data to visualize a driving scene. ([Par. 0012], “The generating of the 3D virtual route may include generating a segmentation image based on image data acquired from a camera sensor among the sensors, detecting objects included in the segmentation image, generating the driving environment model based on depth values of the objects and a driving lane of the vehicle identified from the objects, and generating the 3D virtual route by registering the driving environment model and the position of the vehicle in the map information.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Pollach and Panigrahi to incorporate the teaching of Jung. The modification would have been obvious because generating a three-dimensional (3D) representation of the driving scene enables enhanced visualization of the surrounding environment for the driver and/or passenger, thereby improving situational awareness and facilitating recognition of the current driving conditions.
The combination of Pollach, Panigrahi and Jung teaches processing the sensor data as described above, but does not explicitly disclose using a MapReduce technique to generate analysis data.
However, Deshpande teaches using a MapReduce technique to generate analysis data. ([Par. 0078], “the intermediary computing device 105 (or component such as the analytics platform appliance 110) can include a Pig or MapReduce programming tool, with a Hadoop framework used to execute the data analysis pipeline (e.g., from source to display). The display data structure or other data that gets displayed by the end user computing device 125 (e.g., on dashboards) can be stored in PostgreSQL (or Postgres) or other relational database management system, and visualization can be accomplished using JavaScript or a third party tool such as HighCharts.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Pollach, Panigrahi, and Jung to incorporate the teaching of Deshpande. The modification would have been obvious because the use of a MapReduce framework allows large volumes of data to be processed in parallel across multiple computing nodes, improving computational efficiency, scalability, and overall data processing performance.
Claims 19-20 recite the method with substantially similar scope as claims 7-8, respectively, and are thus rejected on the same basis as claims 7-8 above.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Pollach, Panigrahi, Jung and Deshpande, further in view of Naphade et al. (Publication No. US 20220053171 A1; hereinafter Naphade).
Regarding claim 11, the combination of Pollach, Panigrahi, Jung and Deshpande teaches the method of claim 1.
The combination of Pollach, Panigrahi, Jung and Deshpande teaches annotating the sensor data as described in claim 1 above, but does not explicitly disclose further comprising receiving the annotations for annotating the captured sensor data via a representational state transfer (REST) application programming interface (API).
However, Naphade teaches further comprising receiving the annotations for annotating the captured sensor data via a representational state transfer (REST) application programming interface (API). ([Par. 0044], “a video annotator 120 may communicate with the streaming servers in order to receive image data and/or video data for annotation (e.g., using Representational State Transfer Application Program Interfaces). In various examples, communications may be provided between the devices over an Open Network Video Interface Forum (ONVIF) bridge which may define rules for how software should query devices for their names, settings, streams, and the like. ONVIF “calls” or messages may be sent to an ONVIF compliant device and the device may return an RTSP address to retrieve corresponding video and/or image data via the RTSP server. In some embodiments, the video manager(s) 112 may also provide video data to the streaming analyzer(s) 114 using RTSP communications.”)
A skilled artisan would be motivated to incorporate the teaching of Naphade into the Pollach, Panigrahi, Jung and Deshpande’s combination to facilitate robust, web-based reception of annotations in ADAS testing environments, as REST APIs are a conventional standard for scalable, stateless data transfer (e.g., emphasizing interoperability and ease of integration). This combination yields predictable results, improving the system's ability to handle annotations from diverse sources while maintaining the core functionality of data processing and performance evaluation.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Pollach, Panigrahi, Jung and Deshpande, further in view of Crabtree et al. (Publication No. US 2017012448 A1; hereinafter Crabtree).
Regarding claim 12, the combination of Pollach, Panigrahi, Jung and Deshpande teaches the method of claim 1.
The combination of Pollach, Panigrahi, Jung and Deshpande teaches analyzing the sensor data and annotated data as described in claim 1 above, but does not explicitly disclose wherein the results database comprises a document based database.
However, Crabtree teaches wherein the results database comprises a document based database. ([Par. 0050], “The invention offers pre-programmed algorithm toolsets for this purpose and also offers API hooks that allow the data to be passed to external processing algorithms prior to final output in a format pre-decided to be most appropriate for the needs of the scrape campaign authors. Result data may also be appropriately processed and formalized for persistent storage in a document based data store 990 such as MongoDB, although, depending on the needs of the authors and the type of data retrieved during the scrape, any NOSQL type data storage or even a relational database may be used.”)
A person of ordinary skill in the art would be motivated to incorporate the teaching of Crabtree into the Pollach, Panigrahi, Jung and Deshpande’s combination to leverage a proven, flexible storage solution for handling diverse ADAS analysis outputs (e.g., annotated sensor data and statistical results), as document-based databases are standard for non-relational data in distributed systems. This integration yields predictable improvements in data management, aligning with industry practices for robust, schema-flexible storage in performance evaluation contexts, without undue experimentation.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN V NGUYEN whose telephone number is (571) 272-7320. The examiner can normally be reached Monday - Friday, 11am - 7pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James J Lee can be reached at (571) 270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEVEN VU NGUYEN/
Examiner, Art Unit 3668