Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The response received on 10/2/2025 has been placed in the file and was considered by the examiner. An action on the merits follows.
Response to Amendment
The amendments filed on October 2, 2025 have been fully considered. A response to these amendments is provided below.
Summary of Amendments/Arguments and Examiner’s Response:
The applicant has amended the claims and has argued that the prior art does not teach the newly claimed limitations.
All arguments are moot in view of new grounds of rejection, below.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-10 and 12-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1, 13 and 18 recite the limitation “the at least one first point” in lines 14, 17 and 14, respectively. The applicant previously claims “at least first point” (claim 1) and “at least a first point” (claims 13 and 18). It appears that the applicant is referring to the same first point, but the language is inconsistent and indefinite because it could be interpreted as referring to different points. The examiner is interpreting the claims as reciting the same first point, but appropriate correction is required.
Claims 1, 13 and 18 recite the limitation “the at least one object classification” in the second to last line. It is unclear which at least one object classification the applicant is referring to, because “at least one first object classification” and “at least one second object classification” are previously claimed. The limitation appears to refer to “the at least one first object classification”. Appropriate correction is required, and claim terminology should be kept consistent.
Claim 12 recites the limitation “the first point” in lines 2 and 3. It is unclear which first point the applicant is referring to, since the applicant previously claims “the at least one first point”.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 7-10, 12-16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20190145768 (Sasamoto et al) in view of U.S. Patent Application Publication No. 20200327685 (Ren et al).
Regarding claim 1, Sasamoto et al discloses a method for operating an autonomous vehicle (fig. 6, method carried out for fig. 8), the method comprising: obtaining an image associated with a scene of an autonomous vehicle (page 4, paragraph 50); determining a first estimated depth for each of a plurality of points in the image, i.e. the points of the image that are evaluated by the process of fig. 4 and found as the boxes of fig. 3, as described on page 3, paragraph 39, fig. 5, item 5-1 and fig. 6, s603; generating a plurality of groups of points based on the first estimated depth for each of the plurality of points, by finding the group of points in each box of fig. 3 and their depths associated with fig. 4, wherein each group of points of the plurality of groups of points corresponds to a different depth range, i.e. the depth ranges of fig. 4 (fig. 6, items 603, 605); determining a second estimated depth for at least first point of a first group of points of the plurality of groups of points, by finding the accurate distance for the corresponding object of fig. 3, item 205 (page 4, paragraph 49, fig. 6, s606), using the range-specific depth estimation head of item 105 of figs. 1 and 8; determining a second estimated depth for at least one second point of a second group of points of the plurality of groups of points, i.e. the second estimated depth of the corresponding object of fig. 3, item 207 (page 4, paragraph 49, fig. 6, s606), using the range-specific depth estimation head (figs. 1 and 8, item 105); determining at least one first object classification for the at least one first point of the first group of points and at least one second object classification for the at least one second point of the second group of points, by recognizing the objects (fig. 6, s607, fig. 8, item 108, recognition results of all regions); and causing the autonomous vehicle to be navigated based on the second estimated depth for the at least one first point and the second estimated depth for the at least one second point, since the second estimated depths are used in the object classification (fig. 6, s607 utilizes s606, page 4, paragraph 46), and the at least one object classification for the at least one first point of the first group of points, since the recognition result of item 108 of fig. 8 is used in the vehicle control of fig. 8, item 801 (page 4, paragraph 50).
Sasamoto et al does not expressly disclose determining the second depths for points of different groups of points using different range-specific depth estimation heads, i.e. first and second range-specific depth estimation heads for different groups of points at different depth ranges.
Ren et al discloses determining second/refined depths for points of different groups of points using different range-specific depth estimation heads, i.e. first and second range-specific depth estimation heads for different groups of points at different depth ranges (fig. 3, depth ranges, item 306, have different depth range estimation heads, item 308).
Sasamoto et al and Ren et al are combinable because they are from the same field of endeavor, i.e. depth estimation.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use a different range-specific head for each of the depth ranges.
The suggestion/motivation for doing so would have been to provide a more accurate system by allowing each range to be processed according to its captured features.
Therefore, it would have been obvious to combine the method of Sasamoto et al with depth estimation heads of Ren et al to obtain the invention as specified in claim 1.
Claims 13 and 18 are rejected for the same reasons as claim 1. Thus, the arguments analogous to those presented above for claim 1 are equally applicable to claims 13 and 18. Claims 13 and 18 distinguish from claim 1 only in that claim 13 is a system claim comprising at least one processor and at least one non-transitory storage media storing instructions that, when executed by the at least one processor, cause the at least one processor to carry out the method of claim 1, and claim 18 is an at least one non-transitory storage media claim storing instructions that, when executed by at least one processor, cause the at least one processor to carry out the method of claim 1. Sasamoto et al further teaches a system comprising at least one processor (page 3, paragraph 35) and instructions that, when executed by the at least one processor, cause the at least one processor to carry out the method (fig. 6). Sasamoto et al does not expressly disclose that the system comprises at least one non-transitory storage media storing the instructions. Ren et al discloses systems that carry out methods including a non-transitory storage media storing instructions carried out by a processor (fig. 10, items 1020, 1030).
Regarding claim 2, Sasamoto et al discloses determining the first estimated depth for each of a plurality of points in the image comprises: identifying a first set of points in the image, i.e. the points of a first object such as a person (page 2, paragraph 23, fig. 3, item 205); identifying a second set of points in the image, i.e. the points of a second object such as the traffic signal (fig. 2, item 204), wherein the plurality of points includes the first set of points and the second set of points (fig. 3, points include the boxed areas); associating a set of semantic features with each of the plurality of points (page 2, paragraph 28); and determining the first estimated depth for each of the plurality of points in the image using the set of semantic features associated with the each of the plurality of points (page 2, paragraphs 29, 30, page 3, paragraph 39).
Regarding claim 3, Sasamoto et al discloses the first set of points is greater than the second set of points, since the pixels of box 205 exceed the pixels of box 207 in fig. 3.
Regarding claim 4, Sasamoto et al discloses the first set of points are equally distributed across the image relative to each other and the second set of points are equally distributed across the image relative to each other, when interpreting the first and second sets of points as any sets of points that are evenly distributed in image 201, fig. 3, since those points are all identified as being used in the process of fig. 6.
Regarding claim 7, Sasamoto et al discloses determining the first estimated depth for each of the plurality of points in the image comprises determining the first estimated depth for each of the plurality of points in the image using a depth estimation head, i.e. the depth estimation item 105 of fig. 1 and 8 and the head that carries out the processing of fig. 4.
Regarding claim 8, Sasamoto et al discloses each of the plurality of groups of points includes at least one point of the plurality of points, wherein the first estimated depth of the at least one point falls within the depth range of the respective group of points, i.e. the depth ranges shown in fig. 4 and shown in boxes in fig. 3.
Regarding claim 9, Sasamoto et al discloses the first group of points includes only points of the plurality of points with the first estimated depth that falls within the depth range of the first group of points (fig. 4).
Regarding claim 10, Sasamoto et al discloses generating the plurality of groups of points based on the first estimated depth for each of the plurality of points comprises assigning respective points of the plurality of points to the groups of points based on the first estimated depth of the respective points, one of the depths of fig. 4, wherein points of the plurality of points with the first estimated depth that falls within a first depth range are assigned to the first group of points, i.e. fig. 4, range near 5 m is associated with points of fig. 3, item 205, and points of the plurality of points with the first estimated depth that falls within a second depth range are assigned to a second group of points of the plurality of groups of points, i.e. fig. 4, range near 20 m is associated with pixels/ points of fig. 3, item 207.
Regarding claim 12, Sasamoto et al discloses the second estimated depth for the at least first point of the first group of points is more precise than the first estimated depth for the first point of the first group of points, because it is more accurate (page 4, paragraph 49).
Claims 14 and 19 are rejected for the same reasons as claim 2. Thus, the arguments analogous to those presented above for claim 2 are equally applicable to claims 14 and 19. Claims 14 and 19 distinguish from claim 2 only in that they have different dependencies, both of which have been previously rejected. Therefore, the prior art applies.
Claims 15 and 20 are rejected for the same reasons as claim 3. Thus, the arguments analogous to those presented above for claim 3 are equally applicable to claims 15 and 20. Claims 15 and 20 distinguish from claim 3 only in that they have different dependencies, both of which have been previously rejected. Therefore, the prior art applies.
Claim 16 is rejected for the same reasons as claim 4. Thus, the arguments analogous to those presented above for claim 4 are equally applicable to claim 16. Claim 16 distinguishes from claim 4 only in that it has a different dependency, which has been previously rejected. Therefore, the prior art applies.
Claims 5, 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Sasamoto et al in view of Ren et al, as applied to claims 2 and 14 above, and further in view of U.S. Patent Application Publication No. 20230177840 (Telpaz et al).
Regarding claim 5, Sasamoto et al (as modified by Ren et al) discloses all of the claimed elements as set forth above and incorporated herein by reference.
Sasamoto et al (as modified by Ren et al) does not expressly disclose that associating a set of semantic features with each of the plurality of points comprises generating a set of semantic features for each of the plurality of points using a neural network backbone.
Telpaz et al discloses associating a set of semantic features with each of the plurality of points that are being identified for segmentation comprises generating a set of semantic features for each of the plurality of points using a neural network backbone (page 8, paragraph 49).
Sasamoto et al (as modified by Ren et al) and Telpaz et al are combinable because they are from the same field of endeavor, i.e. semantic feature extraction for vehicle images.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use a neural network backbone to associate the semantic features.
The suggestion/motivation for doing so would have been to provide a more robust system by using a model that can account for history and data.
Therefore, it would have been obvious to combine the method of Sasamoto et al (as modified by Ren et al) with the neural backbone of Telpaz et al to obtain the invention as specified in claim 5.
Regarding claim 6, Telpaz et al discloses the neural network backbone is a deep residual network (ResNet) (page 8, paragraph 49).
Claim 17 is rejected for the same reasons as claim 5. Thus, the arguments analogous to those presented above for claim 5 are equally applicable to claim 17. Claim 17 distinguishes from claim 5 only in that it has a different dependency, which has been previously rejected. Therefore, the prior art applies.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
/KATHLEEN Y DULANEY/Primary Examiner, Art Unit 2666 10/14/2025