DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/03/2026 has been entered.
Response to Amendment
In light of Applicant’s amendment of claim 31, the objection of record with respect to claim 31 has been withdrawn.
Status of Claims
Claims 24-28, 31-38, and 41-45 are pending. Claims 24, 26, 28, 31, 34, and 36 are amended. Claims 1-23, 29, 30, 39, and 40 are cancelled. Claims 44 and 45 are new.
Response to Arguments
Applicant’s amendment of independent Claims 24 and 34 has altered the scope of the claims of the instant application and has necessitated the new ground(s) of rejection presented in this Office action. Accordingly, because Applicant’s arguments are directed to the amended portions of the claims, new analyses have been presented below, which render Applicant’s arguments moot.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 24-27, 31-37 and 41-45 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al. (US 2023/0082097 A1) in view of Wigington et al. (US 2022/0237444 A1) and in further view of Murray et al. (US 2015/0186742 A1).
Regarding independent claim 24, Choi et al. (US 2023/0082097 A1) teaches, A scene classification method for a RADAR or LIDAR (Choi, ¶0007: “a method and apparatus that may generate information helpful to recognize an object, such as a detection and a region segmentation, by fusing multi-sensor information of a camera, a LiDAR, and a radar”) vehicle sensor system, (Choi, ¶0059: “in the field of the autonomous vehicle, cognitive information on a surrounding environment and vehicle”) the method comprising: receiving feature maps generated from RADAR or LIDAR sensor data provided by the RADAR or LIDAR vehicle sensor system, (Choi, ¶0059: “sensors are mounted to an autonomous vehicle, the proposed multi-sensor information fusion technology may achieve stability of the autonomous driving”) wherein the feature maps represent a vehicle-centric (Choi, ¶0015: “object detection or a region segmentation is performed by reconstructing precision map information around an own vehicle as a 2D image, by acquiring the feature map”) coordinate system (Choi, ¶0016: “a coordinate system converter configured to convert the acquired feature map to an integrated 3D coordinate system”) with directions of rows and columns of the feature maps being parallel with respective longitudinal and lateral axes of the vehicle-centric coordinate system; (Choi, ¶0040: “a 2D feature map in a direction of a bird's eye view may be acquired”).
However, Choi does not explicitly teach, processing the feature maps using longitudinal feature pooling of each longitudinal column of the feature maps and lateral feature pooling of each lateral row of the feature maps to generate longitudinal column feature pool and lateral row feature pool outputs, the longitudinal and lateral feature pooling each being performed using one of maximum or mean feature pooling, wherein maximum feature pooling includes determining a maximum element for each longitudinal column or lateral row of the feature maps and generating the longitudinal column feature pool and lateral row feature pool outputs to each include the maximum elements from each longitudinal column or lateral row of the feature maps, respectively, and wherein the mean feature pooling includes determining a mean of elements within each longitudinal column or lateral row and generating the longitudinal column feature pool and lateral row feature pool outputs to each include the mean of the elements from each longitudinal column or lateral row of the feature maps, respectively; generating inner products from the longitudinal column feature pool and lateral row feature pool outputs; and classifying a scene based on the generated inner products.
In an analogous field of endeavor, Wigington teaches, processing the feature maps using longitudinal feature pooling of each longitudinal column of the feature maps (Wigington, ¶0029: “vertical maximum pooling 360 can be calculated for element (3,6) as the maximum feature value along the sixth column”) and lateral feature pooling of each lateral row of the feature maps to generate longitudinal column feature pool and lateral row feature pool outputs, (Wigington, ¶0028: “horizontal maximum pooling 350, with a pooling kernel length of 5, can be calculated for element (1,3) as the maximum feature value along the first row (i=1), of the 5 elements (j=1 . . . 5), which in this case is equal to 5”) the longitudinal and lateral feature pooling each being performed using one of maximum or mean feature pooling, (Wigington, ¶0018: “The pooling processes may include max pooling, min pooling, mean pooling, or any other desired type of pooling”) wherein maximum feature pooling includes determining a maximum element for each longitudinal column (Wigington, ¶0029: “vertical maximum pooling 360 can be calculated for element (3,6) as the maximum feature value along the sixth column”) or lateral row of the feature maps and generating the longitudinal column feature pool and lateral row feature pool (Wigington, ¶0028: “horizontal maximum pooling 350, with a pooling kernel length of 5, can be calculated for element (1,3) as the maximum feature value along the first row (i=1), of the 5 elements (j=1 . . . 5), which in this case is equal to 5”) outputs to each include the maximum elements from each longitudinal column (Wigington, ¶0029: “vertical maximum pooling 360 can be calculated for element (3,6) as the maximum feature”; also see Fig. 3) or lateral row of the feature maps, respectively, (Wigington, ¶0028: “horizontal maximum pooling 350, with a pooling kernel length of 5, can be calculated for element (1,3) as the maximum feature value”; also see Fig. 3) and wherein the mean feature pooling includes determining a mean of elements within each longitudinal column (Wigington, ¶0031: “vertical mean pooling 380 can be calculated for element (3,8) as the average or mean of the feature values along the eighth column”; also see Fig. 3) or lateral row and generating the longitudinal column feature pool and lateral row feature pool outputs (Wigington, ¶0030: “horizontal mean pooling 370 can be calculated for element (6,5) as the average or mean of the feature values along the sixth row (i=6), which in this case is equal to 3”; also see Fig. 3) to each include the mean of the elements from each longitudinal column (Wigington, ¶0031: “vertical mean pooling 380 can be calculated for element (3,8) as the average or mean of the feature values”; also see Fig. 3) or lateral row of the feature maps, respectively; (Wigington, ¶0030: “horizontal mean pooling 370 can be calculated for element (6,5) as the average or mean of the feature values”; also see Fig. 3).
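For illustration only (this sketch is not part of the record and is not drawn from any cited reference), the claimed full-column and full-row max/mean pooling can be expressed in Python; the feature-map values and all variable names are hypothetical:

```python
# Hypothetical H x W feature map in a vehicle-centric grid; row and column
# directions correspond to the longitudinal and lateral axes per claim 24.
feature_map = [
    [1.0, 5.0, 2.0],
    [4.0, 3.0, 6.0],
]

n_rows, n_cols = len(feature_map), len(feature_map[0])

# Maximum feature pooling: one maximum element per column and per row.
column_max_pool = [max(row[j] for row in feature_map) for j in range(n_cols)]
row_max_pool = [max(row) for row in feature_map]

# Mean feature pooling: mean of the elements within each column and each row.
column_mean_pool = [sum(row[j] for row in feature_map) / n_rows for j in range(n_cols)]
row_mean_pool = [sum(row) / n_cols for row in feature_map]
```

Each pooled output thus collapses an entire row or column to a single element, in contrast to Wigington's kernel-length pooling, which slides a fixed-length window along the row or column.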
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Choi using the teachings of Wigington to introduce vertical and horizontal max/mean pooling. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of downsampling a feature map to reduce the computation load. Therefore, it would have been obvious to combine the analogous arts Choi and Wigington to obtain the above-described limitations of claim 24. However, the combination of Choi and Wigington does not explicitly teach, generating inner products from the longitudinal column feature pool and lateral row feature pool outputs; and classifying a scene based on the generated inner products.
In another analogous field of endeavor, Murray teaches, generating inner products from the longitudinal column feature pool and lateral row feature pool outputs; and classifying a scene based on the generated inner products. (Murray, ¶0054: “a dot-product between pooled representations is advantageous because it enables efficient linear classifiers on these representations to be learned”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Choi in view of Wigington using the teachings of Murray to introduce dot product of pooled representations. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of efficiently classifying object representations of a scene. Therefore, it would have been obvious to combine the analogous arts Choi, Wigington and Murray to obtain the invention in claim 24.
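Purely as an illustrative sketch of the remaining limitation (inner products of the pooled outputs feeding a linear classifier, cf. Murray ¶0054), the following is hypothetical: the pooled values, class names, and weight vectors are invented for illustration and do not reflect any reference's disclosure:

```python
# Hypothetical pooled outputs (e.g., from the pooling sketch above).
column_pool = [4.0, 5.0, 6.0]   # longitudinal column feature pool output
row_pool = [5.0, 6.0]           # lateral row feature pool output

# Concatenate the two pooled outputs (cf. dependent claim 25).
pooled = column_pool + row_pool

# Hypothetical learned linear classifier: one weight vector per scene class.
class_weights = {
    "highway": [0.1, 0.2, 0.1, 0.3, 0.1],
    "urban":   [0.3, 0.1, 0.2, 0.1, 0.2],
}

# Classify the scene via the inner (dot) product of pooled features with
# each class's weight vector; the highest score wins.
scores = {
    scene: sum(w * x for w, x in zip(weights, pooled))
    for scene, weights in class_weights.items()
}
predicted_scene = max(scores, key=scores.get)
```

The inner product against a learned weight vector is what makes the pooled representation usable by an efficient linear classifier, which is the advantage Murray attributes to the dot product between pooled representations.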
Regarding claim 25, Choi in view of Wigington and in further view of Murray teaches, The method of claim 24, wherein generating the inner product further comprises: concatenating the longitudinal column feature pool and lateral row feature pool outputs. (Wigington, ¶0037: “the branch merging module 530 is configured to concatenate the pooled branches 520”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Choi in view of Wigington in further view of Murray using the additional teachings of Wigington to introduce concatenating pooled features. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of efficiently representing a feature map through concatenated pooled branches. Therefore, it would have been obvious to combine the analogous arts Choi, Wigington and Murray to obtain the invention in claim 25.
Regarding claim 26, Choi in view of Wigington and in further view of Murray teaches, The method of claim 24, wherein the longitudinal and lateral feature pooling are each performed using maximum feature pooling. (Wigington, ¶0028: “horizontal maximum pooling 350, with a pooling kernel length of 5, can be calculated for element (1,3) as the maximum feature value”; and ¶0029: “vertical maximum pooling 360 can be calculated for element (3,6) as the maximum feature”; also see Fig. 3).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Choi in view of Wigington in further view of Murray using the additional teachings of Wigington to introduce max feature pooling. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of efficiently detecting the most prominent features from a feature map. Therefore, it would have been obvious to combine the analogous arts Choi, Wigington and Murray to obtain the invention in claim 26.
Regarding claim 27, Choi in view of Wigington and in further view of Murray teaches, The method of claim 24, wherein classifying the scene (Choi, ¶0006: “a plurality of cameras for including all viewing angles needs to be used to recognize a surrounding 360-degree environment”) further comprises: generating one or more scene classification scores (Choi, ¶0003: “recognizing or detecting an object through a statistical classifier using the acquired feature values”) using the generated inner products. (Choi, ¶0006: “extract integrated surrounding environment awareness information from the fused feature value”).
Regarding claim 31, Choi in view of Wigington and in further view of Murray teaches, The method of claim 24, further comprising: generating the feature maps from the RADAR or LIDAR sensor data provided by the RADAR or LIDAR sensor system, wherein generating the feature maps comprises processing the RADAR or LIDAR sensor data (Choi, ¶0016: “information generation method for 360-degree detection and recognition of a surrounding object proposed herein includes acquiring a feature map from a multi-sensor signal”) through an object detection system. (Choi, ¶0053: “method may apply to various artificial intelligence technologies for recognizing an environment or an object”).
Regarding claim 32, Choi in view of Wigington and in further view of Murray teaches, The method of claim 31, wherein the object detection system comprises an artificial neural network architecture. (Choi, ¶0016: “recognition of a surrounding object proposed herein includes a sensor data collector configured to acquire a feature map from a multi-sensor signal using a DNN”).
Regarding claim 33, Choi in view of Wigington and in further view of Murray teaches, The method of claim 32, wherein the artificial neural network architecture is a Radar Deep Object Recognition network. (Choi, ¶0007: “information of a camera, a LiDAR, and a radar, based on a deep learning network in a situation in which object recognition information for autonomous driving”).
Regarding claim 34, it recites a system with elements corresponding to the steps of the method recited in claim 24. Therefore, the recited elements of system claim 34 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 24. Additionally, the rationale and motivation to combine Choi, Wigington and Murray presented in rejection of claim 24, apply to this claim. Additionally, Choi teaches, A scene classification system for processing data from a RADAR or LIDAR (Choi, ¶0007: “recognize an object, such as a detection and a region segmentation, by fusing multi-sensor information of a camera, a LiDAR, and a radar”) vehicle sensor system, (Choi, ¶0059: “in the field of the autonomous vehicle, cognitive information on a surrounding environment and vehicle”) the scene classification system comprising: one or more processors; (Choi, ¶0060: “system including a graphic processor unit”) and a non-transitory computer-readable medium coupled to the one or more processors, the non-transitory computer-readable medium storing instructions that, when executed by the one or more processors, (Choi, ¶0063: “computer storage medium or device, to be interpreted by the processing device or to provide an instruction or data to the processing device”) cause the one or more processors to: receive, (Choi, ¶0060: “multi-sensor information acquired from an embedded system including a graphic processor unit”) via an input, feature maps (Choi, ¶0058: “a LiDAR feature map as an input”).
Regarding claim 35, it recites a system with elements corresponding to the steps of the method recited in claim 25. Therefore, the recited elements of system claim 35 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 25. Additionally, the rationale and motivation to combine Choi, Wigington and Murray presented in rejection of claim 25, apply to this claim.
Regarding claim 36, it recites a system with elements corresponding to the steps of the method recited in claim 26. Therefore, the recited elements of system claim 36 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 26. Additionally, the rationale and motivation to combine Choi, Wigington and Murray presented in rejection of claim 26, apply to this claim.
Regarding claim 37, it recites a system with elements corresponding to the steps of the method recited in claim 27. Therefore, the recited elements of system claim 37 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 27. Additionally, the rationale and motivation to combine Choi, Wigington and Murray presented in rejection of claim 24, apply to this claim.
Regarding claim 41, it recites a system with elements corresponding to the steps of the method recited in claim 31. Therefore, the recited elements of system claim 41 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 31. Additionally, the rationale and motivation to combine Choi, Wigington and Murray presented in rejection of claim 24, apply to this claim.
Regarding claim 42, it recites a system with elements corresponding to the steps of the method recited in claim 32. Therefore, the recited elements of system claim 42 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 32. Additionally, the rationale and motivation to combine Choi, Wigington and Murray presented in rejection of claim 24, apply to this claim.
Regarding claim 43, it recites a system with elements corresponding to the steps of the method recited in claim 33. Therefore, the recited elements of system claim 43 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 33. Additionally, the rationale and motivation to combine Choi, Wigington and Murray presented in rejection of claim 24, apply to this claim.
Regarding claim 44, Choi in view of Wigington and in further view of Murray teaches, The method of claim 24, wherein the longitudinal and lateral feature pooling are each performed using mean feature pooling. (Wigington, ¶0031: “vertical mean pooling 380 can be calculated for element (3,8) as the average or mean of the feature values along the eighth column”; and ¶0030: “horizontal mean pooling 370 can be calculated for element (6,5) as the average or mean of the feature values”; also see Fig. 3).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Choi in view of Wigington in further view of Murray using the additional teachings of Wigington to introduce mean feature pooling. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of efficiently downsampling the feature map into a smoother, more generalized representation. Therefore, it would have been obvious to combine the analogous arts Choi, Wigington and Murray to obtain the invention in claim 44.
Regarding claim 45, it recites a system with elements corresponding to the steps of the method recited in claim 44. Therefore, the recited elements of system claim 45 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 44. Additionally, the rationale and motivation to combine Choi, Wigington and Murray presented in rejection of claim 44, apply to this claim.
Claims 28 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al. (US 2023/0082097 A1), in view of Wigington et al. (US 2022/0237444 A1), in further view of Murray et al. (US 2015/0186742 A1) and still in further view of Freeman et al. (US 2019/0220709 A1).
Regarding claim 28, Choi in view of Wigington and in further view of Murray teaches, The method of claim 27. However, the combination of Choi, Wigington and Murray does not explicitly teach wherein the one or more scene classification scores provide a probability value indicating a probability that an associated scene is detected.
In an analogous field of endeavor, Freeman teaches, wherein the one or more scene classification scores provide a probability value indicating a probability that an associated scene is detected. (Freeman, ¶0013: “if road scenes are considered, there can be a class for vehicles, another class for pedestrians, another class for roads and another class for buildings. Since there are four predetermined classes in this example, for each image four probability values, in particular pseudo probability values, are generated. The probability value for one of the classes then indicates the probability that the image shows an object from this particular class”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Choi in view of Wigington and in further view of Murray using the teachings of Freeman to introduce probability calculations. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of generating a value for indicating the probability of target detection. Therefore, it would have been obvious to combine the analogous arts Choi, Wigington, Murray and Freeman to obtain the invention in claim 28.
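As a hypothetical sketch of how per-class scores could be expressed as the probability values Freeman describes (one value per predetermined class), a softmax normalization is shown below; the scores and class names are invented for illustration and are not drawn from Freeman's disclosure:

```python
import math

# Hypothetical per-class scene scores (e.g., inner-product classifier outputs).
scores = {"highway": 4.1, "urban": 4.6, "parking": 1.0}

# Softmax converts raw scores into (pseudo) probability values that sum to 1;
# each value indicates the probability that the associated scene is detected.
exp_scores = {scene: math.exp(s) for scene, s in scores.items()}
total = sum(exp_scores.values())
probabilities = {scene: e / total for scene, e in exp_scores.items()}
```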
Regarding claim 38, it recites a system with elements corresponding to the steps of the method recited in claim 28. Therefore, the recited elements of system claim 38 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 28. Additionally, the rationale and motivation to combine Choi, Wigington, Murray and Freeman presented in rejection of claim 28, apply to this claim.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAZUL ISLAM whose telephone number is (571)270-0489. The examiner can normally be reached Monday-Friday: 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Saini Amandeep can be reached on (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MEHRAZUL ISLAM/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662