DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant(s) Response to Official Action
The response filed on 11/26/2025 has been entered and made of record.
Response to Arguments/Amendments
Applicant's arguments have been fully considered but are not persuasive. The examiner's response to the presented arguments follows below.
Claim Rejections - 35 USC § 103
Summary of Arguments:
Regarding claim 1, the Applicant argues that Li et al. (US 2021/0261148 A1) in view of Meng et al. (CN-115675503-A) fails to teach or suggest the following limitations:
“determining, by the one or more processors, a segmentation mask comprising one or more semantic features in the image that correspond to a selected terrain of interest by performing image segmentation on the image using a trained machine learning model; detecting, by the one or more processors, based on the segmentation mask, a vehicle path condition in the image;” [Remarks: Page 8]
“… disclosure of Li is system-driven and does not involve "determining... a segmentation mask comprising one or more semantic features in the image that correspond to a selected terrain of interest ..." or any indication of terrain interest, as recited in claim 1.” [Remarks: Page 8]
“Li also discusses applying I3D convolutions to image frames and extracting visual features, including using ROIAlign and MaskAlign to extract object features from semantic masks based on semantic segmentation. Li at paragraph 34. However, this segmentation of Li is directed to identifying irregular shaped objects and dynamic objects for scene analysis, not to "determining... a segmentation mask comprising one or more semantic features in the image that correspond to a selected terrain of interest" as recited in claim 1. There is no linkage between segmentation and "a selected terrain of interest", nor any indication that segmentation is conditioned on "a selected terrain of interest".” [Remarks: Page 8]
“Indeed, Li fails to disclose or suggest "determining...a segmentation mask ...that correspond[s] to a selected terrain of interest", or "detecting... a vehicle path condition" based on the "segmentation mask ...that correspond[s] to a selected terrain of interest".” [Remarks: Page 9]
“… cited portions of Meng do not remedy the above-noted deficiencies of the cited portions of Li. Furthermore, the proposed combination of the applied references is not understood to remedy the above-noted deficiencies.” [Remarks: Page 9]
Regarding dependent claims, the Applicant argues:
“The other claims currently under consideration in the application are dependent from their respective independent claims discussed above and therefore are believed to be allowable for at least similar reasons. Because each dependent claim is deemed to define an additional aspect of the invention, the individual consideration of each on its own merits is respectfully requested. Reconsideration and withdrawal of the rejections of the dependent claims are respectfully requested.” [Remarks: Page 9]
Examiner’s Response:
Regarding claim 1, the examiner contends:
Li discloses determining, by the one or more processors, a segmentation mask comprising one or more semantic features (roadway features/characteristics) in the image (Li: Paras. [0034], [0036]-[0037] disclose using a “neural network” to perform “semantic segmentation” on image frames to generate “semantic masks” and extract features of “driving scene characteristics” (ego-stuff).); detecting, by the one or more processors, based on the segmentation mask, a vehicle path condition in the image (Li: Paras. [0003], [0036] disclose “detect and identify driving scene characteristics” (path conditions) based on the processing of the semantic masks/graphs.).
Li does not explicitly disclose “semantic features in the image that correspond to a selected terrain of interest”.
However, Meng is in the same field of endeavor and teaches semantic features in the image that correspond to a selected terrain of interest (Meng: Paras. [0004], [0009], [0084] disclose an off-road auxiliary control method that involves obtaining a “terrain mode currently selected” (e.g., sand, mud, rock) and determining the terrain of the current location through “image analysis and classification methods” of a collected “terrain image.”).
In other words, Li discloses a sophisticated visual perception system using semantic segmentation masks to understand driving scenes, while Meng teaches an off-road assistance system that identifies specific terrains (corresponding to user-selected modes) and provides driver suggestions (e.g., shift gears, switch modes) to improve off-road passability. Therefore, it would have been obvious to a person of ordinary skill in the art to integrate the semantic segmentation mask generation of Li into the terrain detection framework of Meng. Doing so would enhance the accuracy of identifying complex “selected terrains of interest” (e.g., distinguishing sand from mud via semantic features) to reliably generate the safety suggestions and prompts taught by Meng.
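For illustration only, the following Python sketch shows one way the proposed combination could be realized: a Li-style per-pixel segmentation output is filtered against a Meng-style user-selected terrain mode to yield a terrain mask and a path-condition flag. All class names, IDs, thresholds, and function names are hypothetical assumptions for exposition and are not drawn from Li, Meng, or the claims.

```python
# Illustrative sketch only: extracting a mask for a user-selected terrain of
# interest from a generic semantic segmentation output. Class names, IDs, and
# the threshold are hypothetical, not taken from Li or Meng.
import numpy as np

# Hypothetical label map a segmentation network might emit (Li-style masks).
CLASS_IDS = {"road": 0, "sand": 1, "mud": 2, "rock": 3, "vegetation": 4}

def terrain_mask(seg_map: np.ndarray, selected_terrain: str) -> np.ndarray:
    """Return a binary mask of pixels matching the selected terrain mode
    (Meng-style user selection, e.g. 'sand', 'mud', 'rock')."""
    return seg_map == CLASS_IDS[selected_terrain]

def path_condition(seg_map: np.ndarray, selected_terrain: str,
                   threshold: float = 0.25) -> bool:
    """Flag a vehicle path condition when the selected terrain covers a
    meaningful fraction of the scene (the threshold is an assumption)."""
    return terrain_mask(seg_map, selected_terrain).mean() >= threshold

# Toy 4x4 'segmentation map' standing in for a per-pixel network output.
seg = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 4],
                [0, 0, 4, 4]])
print(path_condition(seg, "sand"))  # True: 'sand' covers 5/16 >= 0.25
```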
Regarding the Applicant's remaining arguments (ii.-v.), see the examiner's response to the first argument (i.) above.
Regarding the dependent claims, see examiner’s response for claim 1 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-11 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US 2021/0261148 A1), hereinafter referred to as Li, in view of Meng et al. (CN-115675503-A), hereinafter referred to as Meng.
As per claim 1, Li discloses a method (Li: Abstract), comprising:
obtaining, by one or more processors, an image from a camera of a vehicle (Li: Paras. [0003], [0024], [0041], [0053] disclose obtaining, by one or more processors, an image from a camera of a vehicle 102.);
determining, by the one or more processors, a segmentation mask comprising one or more semantic features (roadway features/characteristics) in the image (Li: Paras. [0034], [0036]-[0037] disclose using a “neural network” to perform “semantic segmentation” on image frames to generate “semantic masks” and extract features of “driving scene characteristics” (ego-stuff).);
detecting, by the one or more processors, based on the segmentation mask, a vehicle path condition (i.e., driving scene characteristics) in the image (Li: Paras. [0003], [0033], [0036], [0067] disclose detecting driving scene characteristics, which include road markings, traffic lights, traffic signs, and roadway configuration based on the processing of the semantic masks/graphs.); and
displaying, on a user interface, a notification indicating a plurality of suggestions for one or more actions to be executed by the vehicle (102) based on the vehicle path condition (Li: Fig. 5 & Paras. [0048], [0049], [0080]-[0081] disclose predicting driver actions and providing alerts based on the predicted driver stimulus action 232 and predicted driver intention action 234 to enable the driver to complete one or more driving maneuvers to avoid any potential overlap with dynamic objects and/or static objects within the driving scene.).
However, Li does not explicitly disclose “… semantic features in the image that correspond to a selected terrain of interest … displaying, on a user interface, a notification indicating a plurality of suggestions …”.
However, Meng is in the same field of endeavor and teaches semantic features in the image that correspond to a selected terrain of interest (Meng: Paras. [0004], [0009], [0084] disclose an off-road auxiliary control method that involves obtaining a “terrain mode currently selected” (e.g., sand, mud, rock) and determining the terrain of the current location through “image analysis and classification methods” of a collected “terrain image.”);
displaying, on a user interface (i.e., all-terrain driving control interface of the vehicle display screen), a notification indicating a plurality of suggestions (Meng: Paras. [0007], [0016], [0025], [0061], [0082], [0088] disclose teaching an off-road auxiliary control method to provide the user with an auxiliary reminder function when the vehicle is off-road. When a terrain condition is detected that does not match the current vehicle mode, the system prompts the user to switch to the terrain mode corresponding to the terrain. This terrain mode switch is an action executed by the vehicle. Further, the system provides gear shifting suggestions, prompting the user to “shift gears” based on vehicle speed and the current terrain mode. These are suggestions (e.g., shift gears, switch to sand mode) for actions to be executed by the vehicle based on the path condition.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Li and Meng before him or her, to modify the vehicle scene analysis system of Li to include the semantic features in the image that correspond to a selected terrain of interest and displaying user interface suggestion features as described in Meng. The motivation for doing so would have been to improve driving ability for inexperienced drivers by providing a convenient configuration that monitors important parameters in real time to reliably generate safety suggestions and prompts.
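By way of a hedged, end-to-end illustration of the method as mapped above (obtain image, segment, detect path condition, display suggestions), consider the following sketch; the data structures, suggestion rules, and function names are assumptions for exposition, not APIs or algorithms of Li, Meng, or the application.

```python
# Minimal end-to-end sketch of the claimed method as the rejection maps it:
# obtain image -> segment -> detect path condition -> display suggestions.
# All function names and the suggestion rules are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class PathCondition:
    terrain: str          # e.g., detected from the segmentation mask
    matches_mode: bool    # does it match the currently selected mode?

def detect_path_condition(segmentation_mask, selected_mode: str) -> PathCondition:
    # Stand-in for Li-style mask processing; here we just read a mask summary.
    detected = segmentation_mask["dominant_terrain"]
    return PathCondition(terrain=detected, matches_mode=(detected == selected_mode))

def suggestions_for(cond: PathCondition) -> list[str]:
    # Meng-style prompts: suggest a mode switch and/or a gear shift.
    out = []
    if not cond.matches_mode:
        out.append(f"switch to {cond.terrain} mode")
    out.append("shift to a lower gear")
    return out

# Toy usage: a mask summary produced upstream, user currently in 'mud' mode.
mask = {"dominant_terrain": "sand"}
cond = detect_path_condition(mask, selected_mode="mud")
print("Notification:", "; ".join(suggestions_for(cond)))
# -> Notification: switch to sand mode; shift to a lower gear
```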
As per claim 2, Li-Meng disclose the method of claim 1, wherein performing the image segmentation comprises:
dividing the image into a plurality of image segments (Li: Fig. 2 & Paras. [0034], [0036], [0054] disclose applying instance segmentation and semantic segmentation 210 on input image frames 202 to detect dynamic objects (fed into “ego-thing graph”) and static scene elements (fed into “ego-stuff graph”) located within the driving scene and the driving scene characteristics of the driving scene.); and
assigning, using the trained machine learning model, a semantic label (stuff-objects/thing-objects) to each of the plurality of image segments, wherein each of the one or more semantic features includes the semantic label of a corresponding image segment of the plurality of image segments (Li: Paras. [0034], [0067]-[0068] disclose assigning, using the trained machine learning model, a semantic label to each of the plurality of image segments, wherein each of the one or more semantic features includes the semantic label of a corresponding image segment of the plurality of image segments.).
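As a minimal sketch of the two steps mapped above, dividing an image into segments and assigning each segment a semantic label, the following toy example uses a fixed grid and a majority vote over per-pixel labels; both choices are illustrative assumptions, not the application's or Li's actual algorithm.

```python
# Hedged sketch of claim 2's two steps: divide the image into segments, then
# assign each segment a semantic label. The grid division and majority vote
# are illustrative choices only.
import numpy as np

def grid_segments(h, w, size):
    """Divide an h x w image into square segments of the given size."""
    for y in range(0, h, size):
        for x in range(0, w, size):
            yield (slice(y, y + size), slice(x, x + size))

def label_segments(per_pixel_labels: np.ndarray, size: int = 2):
    """Assign each segment the majority per-pixel label (a stand-in for a
    trained model's per-segment prediction)."""
    h, w = per_pixel_labels.shape
    out = {}
    for i, seg in enumerate(grid_segments(h, w, size)):
        vals, counts = np.unique(per_pixel_labels[seg], return_counts=True)
        out[i] = int(vals[np.argmax(counts)])
    return out

pixels = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [2, 2, 4, 4],
                   [2, 2, 4, 4]])
print(label_segments(pixels))  # {0: 1, 1: 0, 2: 2, 3: 4}
```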
As per claim 3, Li-Meng disclose the method of claim 2, wherein detecting the vehicle path condition comprises:
determining, using the trained machine learning model (108), one or more boundary regions (206, 406) in the plurality of image segments, wherein the vehicle path condition is detected based on the one or more semantic features and the one or more boundary regions (Li: Figs. 4A-4D & Paras. [0033]-[0034], [0056], [0067]-[0069] disclose determining, using the trained machine learning model, one or more boundary regions in the plurality of image segments, wherein the vehicle path condition is detected based on the one or more semantic features and the one or more boundary regions.).
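A short illustrative sketch of combining semantic features with boundary regions, as mapped above, follows; the box format, the assumed path corridor, and the overlap rule are all assumptions introduced for exposition.

```python
# Sketch of claim 3 as mapped: combine semantic features with boundary
# regions (bounding boxes) to detect a path condition. Box format and
# overlap rule are hypothetical.
def boxes_overlap_path(boxes, path_box):
    """Return boxes (x0, y0, x1, y1) that intersect the vehicle's path region."""
    px0, py0, px1, py1 = path_box
    return [b for b in boxes
            if b[0] < px1 and b[2] > px0 and b[1] < py1 and b[3] > py0]

# Hypothetical detections: (box, semantic label) pairs from the model.
detections = [((0, 0, 2, 2), "rock"), ((5, 5, 7, 7), "vegetation")]
path = (1, 1, 6, 3)  # assumed path corridor in image coordinates
hits = boxes_overlap_path([b for b, _ in detections], path)
print(hits)  # [(0, 0, 2, 2)] -> a 'rock' boundary region lies on the path
```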
As per claim 4, Li-Meng disclose the method of claim 1, wherein detecting the vehicle path condition comprises detecting a type of terrain along a path of the vehicle within a scene of the image (Meng: Para. [0082] discloses it is determined that the current terrain is one of sand, mud and snow (different terrains in different seasons and weather conditions).).
As per claim 5, Li-Meng disclose the method of claim 4, further comprising generating the plurality of suggestions based at least in part on the detected type of terrain along the path of the vehicle (Meng: Paras. [0082]-[0089] disclose generating the plurality of suggestions (e.g., shift gears, switch to sand mode) based at least in part on the detected type of terrain along the path of the vehicle.).
As per claim 6, Li-Meng disclose the method of claim 1, wherein detecting the vehicle path condition comprises detecting a type of hindrance (e.g., dynamic objects and/or static objects) along a path of the vehicle within a scene of the image (Li: Fig. 4 & Paras. [0048], [0049], [0054], [0080]-[0081] disclose predicting driver actions and providing alerts based on the predicted driver stimulus action 232 and predicted driver intention action 234 to enable the driver to complete one or more driving maneuvers to avoid any potential overlap with dynamic objects and/or static objects within the driving scene.).
As per claim 7, Li-Meng disclose the method of claim 6, further comprising generating the plurality of suggestions based at least in part on the detected type of hindrance along the path of the vehicle (Li: Fig. 4 & Para. [0054] disclose the I3D 204 is configured to apply instance segmentation and semantic segmentation 210 to detect dynamic objects located within the driving scene and the driving scene characteristics of the driving scene. As discussed above, the dynamic objects may be classified as ego-things and the dynamic scene characteristics may be classified as ego-stuff. Further, Meng: Paras. [0082]-[0089] disclose generating the plurality of suggestions (e.g., shift gears, switch to sand mode) based at least in part on the detected type of terrain along the path of the vehicle, which can also be considered a hindrance along the path of the vehicle.).
As per claim 8, Li-Meng disclose the method of claim 1, wherein detecting the vehicle path condition comprises detecting one or more of a type of terrain or a type of hindrance along a path of the vehicle within a scene of the image, and further comprising generating the plurality of suggestions based at least in part on the detected type of hindrance or the detected type of terrain along the path of the vehicle (Meng: Paras. [0082]-[0089] disclose generating the plurality of suggestions (e.g., shift gears, switch to sand mode) based at least in part on the detected type of terrain along the path of the vehicle.).
As per claim 9, Li-Meng disclose the method of claim 1, further comprising receiving, via the user interface, user input indicating a selection of at least one of the plurality of suggestions corresponding to an action to be executed by the vehicle (Meng: Paras. [0056], [0082], [0089], [0096]-[0097] disclose the user selecting one of the plurality of suggestions (e.g., shift gears, switch to sand mode) based on a prompt on the all-terrain driving control interface. When it is determined that the mode switching is successful, the HUT displays the mode information through the vehicle display screen.).
As per claim 10, Li-Meng disclose the method of claim 1, further comprising receiving vehicle data information (Meng: e.g., tire slip rate, current speed of the vehicle) associated with the vehicle; and generating the plurality of suggestions based at least in part on the vehicle path condition and the vehicle data information (Meng: Paras. [0082]-[0089] disclose generating the plurality of suggestions (e.g., shift gears, switch to sand mode) based at least in part on the vehicle path condition and the vehicle data information.).
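For exposition, the following sketch folds vehicle data (tire slip rate, current speed) into the suggestion logic, consistent with the mapping above; the thresholds and rules are assumed for illustration and are not taken from Meng.

```python
# Illustrative sketch of claim 10 as mapped: fold vehicle data (tire slip
# rate, current speed) into the suggestion logic. Thresholds are assumptions.
def suggest(terrain: str, slip_rate: float, speed_kmh: float) -> list[str]:
    tips = []
    if slip_rate > 0.2:                       # assumed slip threshold
        tips.append(f"switch to {terrain} mode")
    if terrain == "sand" and speed_kmh < 20:  # assumed low-speed rule
        tips.append("shift to a lower gear to maintain momentum")
    return tips

print(suggest("sand", slip_rate=0.3, speed_kmh=15))
# -> ['switch to sand mode', 'shift to a lower gear to maintain momentum']
```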
As per claim 11, Li-Meng disclose the method of claim 1, further comprising:
producing the trained machine learning model by training a neural network to predict (classify) semantic information (irregular shaped objects from semantic masks or stuff objects) and boundary region information (dynamic objects or pixels encapsulated by bounding boxes) for one or more pixels of the image (Li: Figs. 2, 4 & Paras. [0034], [0036], [0045], [0054]-[0056], [0069], [0071] disclose that the neural network 108 is configured to execute machine learning/deep learning processing to provide a one-channel binary mask on subsets of pixels of each of the image frames 202 that are encapsulated within each of the bounding boxes 206 that include the dynamic objects located within the driving scene 400, and to extract features of irregular shaped objects from semantic masks based on semantic segmentation 210 of the image frames 202 for classification. [i.e., it is axiomatic that the neural network is a trained machine learning model, since it classifies/predicts semantic and boundary region information for pixels of the image]); and
receiving, by the one or more processors, one or more user preferences indicating the selected terrain of interest (Meng: Paras. [0088], [0106], [0109]-[0111] disclose prompts configured by the user that correspond to different terrain modes, therefore, receiving the user’s preferences to shift gears is based on the selected terrain mode.).
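As a hedged illustration of the training step mapped above, the following PyTorch sketch trains a toy network with two heads, one predicting per-pixel semantic labels and one predicting a binary boundary channel; the architecture, losses, and data are illustrative assumptions, not Li's neural network 108 or the application's model.

```python
# Hedged sketch of claim 11's training step: a network predicting both
# per-pixel semantic labels and a binary boundary/box channel. Architecture
# and losses are illustrative only.
import torch
import torch.nn as nn

class MultiTaskSegNet(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.sem_head = nn.Conv2d(16, n_classes, kernel_size=1)  # semantic labels
        self.bnd_head = nn.Conv2d(16, 1, kernel_size=1)          # boundary mask

    def forward(self, x):
        feats = torch.relu(self.backbone(x))
        return self.sem_head(feats), self.bnd_head(feats)

net = MultiTaskSegNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
img = torch.randn(1, 3, 32, 32)                       # toy image batch
sem_gt = torch.randint(0, 5, (1, 32, 32))             # toy semantic labels
bnd_gt = torch.randint(0, 2, (1, 1, 32, 32)).float()  # toy boundary mask

sem_out, bnd_out = net(img)
loss = (nn.functional.cross_entropy(sem_out, sem_gt)
        + nn.functional.binary_cross_entropy_with_logits(bnd_out, bnd_gt))
opt.zero_grad()
loss.backward()
opt.step()
```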
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PEET DHILLON whose telephone number is (571)270-5647. The examiner can normally be reached M-F: 5am-1:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath V. Perungavoor can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PEET DHILLON/Primary Examiner
Art Unit: 2488
Date: 01-29-2026