DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s response to the Non-final Office Action dated 11/19/2025, filed with the office on 01/16/2026, has been entered and made of record.
Information Disclosure Statement
The information disclosure statement (“IDS”) filed on 03/02/2026 has been reviewed and the listed reference has been considered.
Response to Amendment
In light of Applicant’s amendment of claim 2, the rejection of record with respect to the claim under 35 U.S.C. 112(b) has been withdrawn.
Response to Arguments
Applicant's arguments filed on January 16, 2026 with respect to the rejection of claims under 35 U.S.C. 101 have been fully considered, but they are not persuasive. Specifically, on page 15 of the reply, third paragraph, Applicant argues that amended claims 1, 10, and 19 include significantly more than any abstract idea. Examiner respectfully disagrees. The claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as an abstract idea, and do not provide meaningful limitations that transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Therefore, Applicant's arguments are not found persuasive.
Applicant's amendments to independent claims 1, 10, and 19, which have altered the scope of the claims of the instant application, have necessitated the new ground(s) of rejection presented in this Office action. Accordingly, in response to Applicant's arguments that are directed merely to the amended portions of the claims, new analyses are presented below, which render Applicant's arguments moot.
Consequently, THIS ACTION IS MADE FINAL.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The independent claims 1, 10, and 19 respectively recite a method, a system and a processor for image analysis. With respect to analysis of independent claims 1, 10 and 19:
Step 1:
With regard to Step 1, the instant claims are directed to a method, a system, and a processor, respectively; therefore, the claims are directed to one of the statutory categories of invention.
Step 2A, Prong One:
With regard to Step 2A, Prong One, considering independent claim 1, the limitations of “determining, using one or more machine learning models and based at least on image data representative of an image, a first classification corresponding to a portion of the image; determining, based at least on map data representing a map associated with an environment, that the map indicates a second classification of driving surface for a point within the environment that corresponds to the portion of the image; determining, based at least on the first classification and the second classification of the driving surface, whether the driving surface is occluded at the portion of the image; generating first data indicating whether the driving surface is occluded at the portion of the image; and performing one or more operations using at least the first data,” as drafted, recite an abstract idea, i.e., a process that, under its broadest reasonable interpretation, covers performance of the limitations in the human mind (including an observation, evaluation, judgment, or opinion). That is, an analyst reviewing the images may classify objects represented in the image, analyze map data associated with the environment, recognize a driving surface, determine whether the driving surface is occluded at any portion of the image based on the image-data classification and the map-data classification, generate first data indicating the determination, and perform an operation using the first data. This concept falls under the grouping of abstract ideas of mental processes, i.e., concepts performed in the human mind, such as the evaluation, judgment, and/or observation of an analyst.
Step 2A, Prong Two:
With regard to Step 2A, Prong Two, the 2019 PEG directs the examiner to “evaluate whether the claim recites additional elements that integrate the exception into a practical application of the exception.” Therefore, additional elements, or a combination of additional elements in the claim, are required to apply, rely on, or use the judicial exception. In the instant case, the additional elements/limitations in the claims, i.e., the machine learning models and the one or more processing units, merely add insignificant extra-solution activity to the judicial exception, and do not apply, rely on, or use the judicial exception in a manner that indicates integration of the judicial exception into a practical application. Accordingly, the above-mentioned additional elements/limitations do not integrate the abstract idea into a practical application; therefore, the claims are directed to an abstract idea.
Step 2B:
Because the claims fail under Step 2A, they are further evaluated under Step 2B. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements/limitations amount to no more than insignificant extra-solution activity. Merely applying an exception using generic components cannot provide an inventive concept. Therefore, claims 1, 10, and 19 are not patent eligible.
Further, with regard to dependent claims 2-9, 11-18, and 20, viewed individually, the additional steps recited therein, under their broadest reasonable interpretation, cover performance of the limitations as an abstract idea, and do not provide meaningful limitations that transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Accordingly, claims 2-9, 11-18, and 20 are likewise not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 5-8, 10-12 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gummadi et al. (US 2021/0012120 A1) in view of Nayak et al. (US 11,551,548 B1).
Regarding claim 1, Gummadi teaches, A method comprising: (Gummadi, ¶0003: “the method can include”) determining, using one or more machine learning models (Gummadi, ¶0003: “a machine learning model forming point classifications”) and based at least on image data representative of an image, a first classification (Gummadi, ¶0003: “receiving camera image data and point cloud data into the autonomous vehicle, semantically classifying the image data”) corresponding to a portion of the image; (Gummadi, ¶0005: “semantically classify each pixel of the image data into a classification”). However, Gummadi does not explicitly teach, determining, based at least on map data representing a map associated with an environment, that the map indicates a second classification of driving surface for a point within the environment that corresponds to the portion of the image; determining, based at least on the first classification and the second classification of the driving surface, whether the driving surface is occluded at the portion of the image; generating first data indicating whether the driving surface is occluded at the portion of the image; and performing one or more operations using at least the first data.
In an analogous field of endeavor, Nayak teaches, determining, based at least on map data representing a map associated with an environment, (Nayak, col. 9, lines 36-38: “The map data may include one or more data points indicating attributes (e.g., geographical attributes) associated with the location”) that the map indicates a second classification of driving surface for a point within the environment that corresponds to the portion of the image; (Nayak, col. 10, lines 60-61: “one or more data points indicating the one or more road lane markings in map data”) determining, based at least on the first classification and the second classification of the driving surface, whether the driving surface is occluded at the portion of the image; (Nayak, col. 9, lines 48-49: “determine a difference of one or more objects as indicated by the map data and the sensor data”) generating first data indicating whether the driving surface is occluded at the portion of the image; (Nayak, col. 16, lines 64-66: “determine whether a route from the current location of the vehicle 105 to the designated area does not interfere with any physical object”) and performing one or more operations using at least the first data. (Nayak, col. 16, lines 61-63: “calculation module 303 may identify a designated area (e.g., a side of a road, an off-road area, etc.) in which the vehicle 105 can park”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gummadi using the teachings of Nayak to introduce comparing sensor data and map data. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of automatically detecting whether a parking spot in map data is available for parking in sensor data. Therefore, it would have been obvious to combine the analogous arts Gummadi and Nayak to obtain the invention in claim 1.
Regarding claim 2, Gummadi in view of Nayak teaches, The method of claim 1, wherein the determining whether the driving surface is occluded at the portion of the image comprises: determining, based at least on the second classification of the driving surface, that the first classification does not include one or more object classifications; (Gummadi, ¶0021: “classifying 1263 each of the second transformed points that represents a non-obstructed space”) and determining, based at least on the first classification not including the one or more object classifications, (Gummadi, ¶0021: “Non-obstructed space can be defined as any space that is not part of the obstructed space”) that the driving surface is not occluded at the portion of the image. (Gummadi, ¶0018: “detection of drivable surfaces, where the drivable surfaces can include, but are not limited to, road”).
Regarding claim 3, Gummadi in view of Nayak teaches, The method of claim 1, wherein the determining whether the driving surface is occluded at the portion of the image comprises: determining, based at least on the second classification of the driving surface, that the first classification includes one or more object classifications; (Gummadi, ¶0002: “object classifications of objects that could occupy the cells with cells in the navigation area”) and determining, based at least on the first classification including the one or more object classifications, (Gummadi, ¶0033: “nearby objects including, but not limited to, people, cars, and low walls”) that the driving surface is occluded by one or more objects corresponding to the one or more object classifications at the portion of the image, (Gummadi, ¶0026: “Obstructed space or non-drivable surfaces can include surfaces that are impassable by a wheelchair/bicycle/car sized vehicle”) wherein the first data indicates that the driving surface is occluded by the one or more objects (Gummadi, ¶0026: “Obstructed space or non-drivable surfaces can include surfaces that are impassable by a wheelchair/bicycle/car sized vehicle”) corresponding to the one or more object classifications. (Gummadi, ¶0033: “nearby objects including, but not limited to, people, cars, and low walls”).
Regarding claim 5, Gummadi in view of Nayak teaches, The method of claim 1, wherein the generating the first data comprises generating the first data representing a label associated with the portion of the image, (Gummadi, ¶0006: “The semantic segmentation output point (XRGB, YRGB) can optionally include values including 0=non-drivable, 1=road, 2=sidewalk, 3=terrain, 4=lane marking, >0=drivable, 0=obstructed”) the label indicating one of: the driving surface is not occluded at the portion of the image; (Gummadi, ¶0018: “detection of drivable surfaces, where the drivable surfaces can include, but are not limited to, road, sidewalk, ground, terrain surfaces, and lane markings”) the driving surface is occluded by a dynamic object at the portion of the image; (Nayak, col. 21, lines 44-45: “The sensor data may indicate geographical attributes and/or dynamic attributes”) or the driving surface is occluded by a static object at the portion of the image. (Nayak, col. 9, lines 60-61: “obstructing objects (e.g., another vehicle, a barrier, a cone”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gummadi in view of Nayak using the additional teachings of Nayak to introduce detecting dynamic or static object. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of automatically detecting moving or still objects of an environment for autonomous driving. Therefore, it would have been obvious to combine the analogous arts Gummadi and Nayak to obtain the invention in claim 5.
Regarding claim 6, Gummadi in view of Nayak teaches, The method of claim 1, further comprising: determining, using the one or more machine learning models and based at least on the image data, (Gummadi, ¶0003: “classifying the image data based on a machine learning model forming point classifications”) a third classification corresponding to a second portion of the image; (Gummadi, ¶0006: “classifying each of the second transformed points that represents a non-obstructed space and an obstructed space within a pre-selected area surrounding the autonomous vehicle”) determining, based at least on the map data, that the map indicates a fourth classification of the driving surface for a second point within the environment that corresponds to the second portion of the image; (Nayak, col. 9, lines 50-53: “the sensor data may indicate an existence of a traffic barrier within the location of the WWD event; whereas, the map data does not include a datapoint that defines the traffic barrier within the location”) determining, based at least on the third classification and the fourth classification, whether the driving surface is occluded at the second portion of the image; (Nayak, col. 15, lines 40-43: “In the second scenario 400B, a road work 413 having a double-lane closure is impacting the second portion, thereby forcing the vehicles 401 and 403 to drive through the lane 409C”) and generating second data indicating whether the driving surface is occluded at the second portion of the image. (Nayak, col. 9, lines 58-60: “determining whether a route from the current position of the vehicle 105 to the correct portion does not interfere with any obstructing objects”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gummadi in view of Nayak using the additional teachings of Nayak to introduce detection of occluding objects in different portions of an image. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of automatically determining whether a road is drivable at every portion. Therefore, it would have been obvious to combine the analogous arts Gummadi and Nayak to obtain the invention in claim 6.
Regarding claim 7, Gummadi in view of Nayak teaches, The method of claim 1, further comprising: determining, based at least on the map data, a first distance associated with the point within the environment; (Nayak, col. 8, lines 37-41: “detects that a relative distance between the front of the vehicle and another object (e.g., a vehicle, a barrier, etc.) is less than a threshold distance (e.g., the relative distance becomes less than 4.2 meters) at the location”) and determining, based at least on point cloud data, a second distance associated with the point within the environment, (Gummadi, ¶0031: “LIDAR 420 can provide data on the range or distance to surfaces around autonomous vehicle 121”) wherein the determining whether the driving surface is occluded at the portion of the image is further based at least on the first distance and the second distance. (Nayak, col. 8, lines 35-39: “detects a change with respect to one or more road objects at the location as indicated by map data; (5) detects that a relative distance between the front of the vehicle and another object (e.g., a vehicle, a barrier”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gummadi in view of Nayak using the additional teachings of Nayak to introduce detecting a change between map data and Lidar data. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of automatically operating an autonomous vehicle to avoid collisions with obstructions. Therefore, it would have been obvious to combine the analogous arts Gummadi and Nayak to obtain the invention in claim 7.
Regarding claim 8, Gummadi in view of Nayak teaches, The method of claim 7, further comprising: determining whether the second distance is within a threshold distance to the first distance, wherein the determining whether the driving surface is occluded at the portion of the image is further based at least on whether the first distance is within the threshold distance to the second distance. (Nayak, col. 8, lines 35-40: “detects a change with respect to one or more road objects at the location as indicated by map data; (5) detects that a relative distance between the front of the vehicle and another object (e.g., a vehicle, a barrier, etc.) is less than a threshold distance (e.g., the relative distance becomes less than 4.2 meters”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gummadi in view of Nayak using the additional teachings of Nayak to introduce determining whether a distance threshold is met. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of identifying an occluding object between an autonomous vehicle and map object. Therefore, it would have been obvious to combine the analogous arts Gummadi and Nayak to obtain the invention in claim 8.
Regarding claim 10, it recites a system with elements corresponding to the steps of the method recited in claim 1. Therefore, the recited elements of system claim 10 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 1. Additionally, the rationale and motivation to combine Gummadi and Nayak presented in the rejection of claim 1 apply to this claim. Gummadi additionally teaches, A system comprising: one or more processors (Gummadi, ¶0005: “the system can include, but is not limited to including, a pre-processor”) and the map indicates a second classification of a traffic object (Nayak, col. 10, lines 60-61: “one or more data points indicating the one or more road lane markings in map data”; the traffic object is interpreted as a road lane marking).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gummadi in view of Nayak using the additional teachings of Nayak to introduce a map representing lane markings. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of autonomously operating a vehicle within the lane markings based on the map data. Therefore, it would have been obvious to combine the analogous arts Gummadi and Nayak to obtain the invention in claim 10.
Regarding claim 11, it recites a system with elements corresponding to the steps of the method recited in claim 2. Therefore, the recited elements of system claim 11 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 2. Additionally, the rationale and motivation to combine Gummadi and Nayak presented in rejection of claim 1, apply to this claim.
Regarding claim 12, it recites a system with elements corresponding to the steps of the method recited in claim 3. Therefore, the recited elements of system claim 12 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 3. Additionally, the rationale and motivation to combine Gummadi and Nayak presented in rejection of claim 1, apply to this claim.
Regarding claim 14, it recites a system with elements corresponding to the steps of the method recited in claim 5. Therefore, the recited elements of system claim 14 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 5. Additionally, the rationale and motivation to combine Gummadi and Nayak presented in rejection of claim 5, apply to this claim.
Regarding claim 15, it recites a system with elements corresponding to the steps of the method recited in claim 6. Therefore, the recited elements of system claim 15 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 6. Additionally, the rationale and motivation to combine Gummadi and Nayak presented in rejection of claim 6, apply to this claim.
Regarding claim 16, it recites a system with elements corresponding to the steps of the method recited in claim 7. Therefore, the recited elements of system claim 16 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 7. Additionally, the rationale and motivation to combine Gummadi and Nayak presented in rejection of claim 7, apply to this claim.
Regarding claim 17, Gummadi in view of Nayak teaches, The system of claim 16, wherein the one or more processors are further to: generate a first determination of whether the traffic object is occluded at the portion of the image based at least on the first classification and the second classification; (Nayak, col. 9, lines 48-49: “determine a difference of one or more objects as indicated by the map data and the sensor data”) and generate a second determination of whether the traffic object is occluded at the portion of the image based at least on the first distance and the second distance, (Nayak, col. 8, lines 35-39: “detects a change with respect to one or more road objects at the location as indicated by map data; (5) detects that a relative distance between the front of the vehicle and another object (e.g., a vehicle, a barrier”) wherein the determination of whether the traffic object is occluded at the portion of the image is based at least on the first determination and the second determination. (Nayak, col. 19, lines 17-22: “identifying a correct portion of a road (as indicated in map data); (2) determining whether a route from the current position of the vehicle 105 to the correct portion does not interfere with any obstructing objects (e.g., another vehicle, a barrier, a cone, etc.); and (3) if such route exists, causing the vehicle 105 to move to the correct portion”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gummadi in view of Nayak using the additional teachings of Nayak to introduce detecting an occluding object in the environment. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of automatically operating an autonomous vehicle to avoid collisions with obstructions. Therefore, it would have been obvious to combine the analogous arts Gummadi and Nayak to obtain the invention in claim 17.
Regarding claim 18, Gummadi in view of Nayak teaches, The system of claim 10, wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine (Gummadi, ¶0005: “The system of the present teachings for estimating free space based on image data and point cloud data, where the free space can be used for navigating an autonomous vehicle”).
Regarding claim 19, Gummadi teaches, One or more processors comprising processing circuitry to: (Gummadi, ¶0005: “the system can include, but is not limited to including, a pre-processor”) determine, based at least on image data representative of an image, a first classification corresponding to a portion of the image; (Gummadi, ¶0003: “performing a first transform on the points in the point cloud data into an image coordinate system associated with the image data and classifying each of the first transformed points that represents an obstructed space and the non-obstructed space”). However, Gummadi does not explicitly teach, determine, based at least on map data representing a map associated with an environment, that the map indicates a second classification of a traffic object for a point within the environment that corresponds to the portion of the image; determine, based at least on the first classification being an object that is different than the traffic object of the second classification, that the traffic object is occluded at the portion of the image; generate first data indicating that the traffic object is occluded at the portion of the image; and perform one or more operations using at least the first data.
In an analogous field of endeavor, Nayak teaches, determine, based at least on map data representing a map associated with an environment, (Nayak, col. 9, lines 36-38: “The map data may include one or more data points indicating attributes (e.g., geographical attributes) associated with the location”) that the map indicates a second classification of a traffic object for a point within the environment that corresponds to the portion of the image; (Nayak, col. 10, lines 60-61: “one or more data points indicating the one or more road lane markings in map data”) determine, based at least on the first classification being an object that is different than the traffic object of the second classification, that the traffic object is occluded at the portion of the image; (Nayak, col. 8, lines 35-39: “detects a change with respect to one or more road objects at the location as indicated by map data; (5) detects that a relative distance between the front of the vehicle and another object (e.g., a vehicle, a barrier”) generate first data indicating that the traffic object is occluded at the portion of the image; (Nayak, col. 16, lines 64-66: “determine whether a route from the current location of the vehicle 105 to the designated area does not interfere with any physical object”) and perform one or more operations using at least the first data. (Nayak, col. 16, lines 61-63: “calculation module 303 may identify a designated area (e.g., a side of a road, an off-road area, etc.) in which the vehicle 105 can park”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gummadi using the teachings of Nayak to introduce comparing sensor data and map data. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of automatically detecting whether a traffic object is occluded. Therefore, it would have been obvious to combine the analogous arts Gummadi and Nayak to obtain the invention in claim 19.
Regarding claim 20, Gummadi in view of Nayak teaches, The one or more processors of claim 19, wherein the one or more processors are comprised in at least one of: a control system for an autonomous or semi-autonomous machine (Gummadi, ¶0005: “The system of the present teachings for estimating free space based on image data and point cloud data, where the free space can be used for navigating an autonomous vehicle”).
Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Gummadi et al. (US 2021/0012120 A1), in view of Nayak et al. (US 11,551,548 B1) and in further view of Smolyanskiy et al. (US 2021/0150230 A1).
Regarding claim 4, Gummadi in view of Nayak teaches, The method of claim 1, wherein the determining that the map indicates the second classification of the driving surface for the point within the environment that corresponds to the portion of the image (Nayak, col. 9, lines 57-58: “identifying a correct portion of a road (as indicated in map data)”) comprises: obtaining the map data associated with the environment, the map data representing (Nayak, col. 7, lines 61-62: “provide content or data (e.g., including geographic data, parametric representations of mapped features”) at least the second classification; (Nayak, col. 10, lines 60-61: “one or more data points indicating the one or more road lane markings in map data”) and determining, based at least on the map data, that the map indicates the second classification of the driving surface for the point within the environment. (Nayak, col. 10, lines 60-61: “one or more data points indicating the one or more road lane markings in map data”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gummadi in view of Nayak using the additional teachings of Nayak to introduce map data representing environmental features. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of detecting environmental features for operating an autonomous vehicle. Therefore, it would have been obvious to combine the analogous arts Gummadi and Nayak to obtain the above-described limitations of claim 4. However, the combination of Gummadi and Nayak does not explicitly teach, a three-dimensional location for the point within the environment; projecting the three-dimensional location to a two-dimensional location associated with the portion of the image.
In an analogous field of endeavor, Smolyanskiy teaches, a three-dimensional location for the point (Smolyanskiy, ¶0062: “identify 3D locations of objects in the world space corresponding to each pixel”) within the environment (Smolyanskiy, ¶0062: “location in a 3D representation of the environment (e.g., a 3D map or some other world space)”); projecting the three-dimensional location to a two-dimensional location associated with the portion of the image (Smolyanskiy, ¶0007: “projecting a LiDAR point cloud into one or more height maps in a top-down view” and “and/or images of the 3D space (e.g., by unprojecting an image into world space and projecting into a top-down view)”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gummadi in view of Nayak using the teachings of Smolyanskiy to introduce projecting a 3D map into a 2D view. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of autonomous navigation within an environment based on the top-down 2D map of the road. Therefore, it would have been obvious to combine the analogous arts Gummadi, Nayak and Smolyanskiy to obtain the invention of claim 4.
Regarding claim 13, it recites a system with elements corresponding to the steps of the method recited in claim 4. Therefore, the recited elements of system claim 13 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 4. Additionally, the rationale and motivation to combine Gummadi, Nayak and Smolyanskiy presented in the rejection of claim 4 apply to this claim.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Gummadi et al. (US 2021/0012120 A1), in view of Nayak et al. (US 11,551,548 B1) and in further view of Brouard et al. (US 10,528,812 B1).
Regarding claim 9, Gummadi in view of Nayak teaches, The method of claim 7, wherein the performing the one or more operations using the at least the first data comprises. However, the combination of Gummadi and Nayak does not explicitly teach, at least one of: generating training data representing the image with a label indicating that the road surface is occluded by one or more objects; or training, using at least one of the first data or the training data, one or more second machine learning models to update one or more parameters of the one or more second machine learning models.
In an analogous field of endeavor, Brouard teaches, at least one of: generating training data representing the image with a label indicating that the road surface is occluded by one or more objects (Brouard, col. 16, lines 1-3: “recognize and output labels for landmarks that are not in the land map blocks 404 used to generating the initial training data labels”); or training, using at least one of the first data or the training data, one or more second machine learning models to update one or more parameters of the one or more second machine learning models (Brouard, col. 16, lines 11-13: “a second training dataset 410 that is enhanced from the initial training dataset for training a second intermediate CNN model”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Gummadi in view of Nayak using the teachings of Brouard to introduce training a second neural network with a labeled dataset. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of improving the classification performance of the autonomous vehicle. Therefore, it would have been obvious to combine the analogous arts Gummadi, Nayak and Brouard to obtain the invention of claim 9.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAZUL ISLAM whose telephone number is (571)270-0489. The examiner can normally be reached Monday-Friday: 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Saini Amandeep, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MEHRAZUL ISLAM/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662