DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims.
Figs. 2 and 4 are functional block diagrams that merely show blank boxes/blocks for the claimed methodology and thus do not show the features of the invention as recited in the claims. In other words, raw reference numbers and otherwise blank boxes are simply not sufficient to show the claimed method features. Moreover, the other figures are basic diagrams that do not show these features either.
Therefore, the claimed methodology must be shown or the features canceled from the claims. No new matter should be entered. It is suggested that English-language labels be applied to the blank boxes of Figs. 2 and 4, which should resolve this objection.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claim 3 is objected to because of the following informalities: claim 3 is an incomplete sentence ending in the conjunction “and”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 3, and 5 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 2 recites “wherein the generating comprises, upon generating the rejection signal, generating a go-ahead signal that causes the second sensor data to be brought to further processing.” It is unclear to what “go-ahead signal” refers. Likewise, “to be brought to further processing” is wholly unclear. First, “brought to” implies physically bringing something to a location. Second, it is unclear what is meant by “further processing.”
Claims 3 and 5 are rejected due to their dependency upon claim 2.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1 and 4 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Ohgushi (US 2021/0117703 A1).
Claim 1
In regards to claim 1, Ohgushi discloses a method for avoiding or reducing false positives in a computer vision task {see below mapping, particularly for the last step. Note also that “avoiding false positives for a computer vision task” is mere intended use because the body of the claim fails to breathe life and meaning into, or in any way refer to, this intended use/generalized goal}, the method comprising:
receiving first sensor data and object recognition data generated based on the first sensor data, wherein the object recognition data is indicative at least of an object type and an object location of an object identified by the first sensor data, wherein the object type comprises at least one semantic descriptor {Fig. 1 semantic label estimation unit 14 inputs sensor data from in-vehicle camera 12 and generates object recognition data indicative of object type, location and including a semantic descriptor (label) for each pixel as per [0042]-[0043]};
generating comparative object data comprising the object defined by the object recognition data {the output of the semantic label estimation unit is comparative object data including the semantically labelled object on a pixel basis};
generating from the at least one semantic descriptor synthetic object data that correspond with the semantic descriptor {see original image estimation unit 16 which generates synthetic object data (reconstructed original image) from the semantic label image, Fig. 1, [0044]};
comparing the comparative object data with the synthetic object data {Fig. 1, difference calculation unit 18, [0045]-[0046]}; and
generating a confirmation signal that indicates confirmation of the validity of the object recognition data or a rejection signal that causes the object recognition data to be rejected from further processing based on a result of comparing the comparative object data with the synthetic object data {road obstacle detection unit 20 determines that, for portions having a difference over a threshold, the object recognition data (semantic label) is incorrect (a rejection signal, such that the object recognition data from unit 14 is not used for the road obstacle detection; instead, the reconstructed image (synthetic) data is used to detect the road obstacle as per [0047]) or otherwise generates a confirmation signal}.
Claim 4
In regards to claim 4, Ohgushi discloses wherein the first sensor data comprises image data gathered by an imaging sensor apparatus {see above cites for claim 1 including camera 12}.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 3, and 5-11 are rejected under 35 U.S.C. 103 as being unpatentable over Ohgushi and Zhu (US 9,555,740 B1).
Claim 2
In regards to claim 2, Ohgushi is not relied upon to disclose a second different sensor type or the processing steps of claim 2.
Zhu teaches receiving second sensor data from a sensor type different than a sensor type of the first sensor data, wherein the second sensor data comprises information about a same portion of an environment in which the object is located {column 8, line 41—column 10, line 39}, and
wherein the generating comprises, upon generating the rejection signal, generating a go-ahead signal that causes the second sensor data to be brought to further processing {see column 1, lines 44-51; column 8, lines 10-40; cross-validation algorithm column 12, lines 14-37; column 14, line 58—column 15, line 52; column 17, line 44—column 18, line 12; and column 19, line 18—column 21, line 67}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Ohgushi’s method to include receiving second sensor data from a sensor type different than a sensor type of the first sensor data, wherein the second sensor data comprises information about a same portion of an environment in which the object is located, and wherein the generating comprises, upon generating the rejection signal, generating a go-ahead signal that causes the second sensor data to be brought to further processing as taught by Zhu because cross-validation between sensor modalities improves the accuracy, reliability, and safety of object detection systems for vehicle applications, because there is a reasonable expectation of success, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claims 3 and 5
Ohgushi is not relied upon to disclose but Zhu teaches (claim 3) wherein the comparing comprises comparing the first sensor data and the second sensor data and (claim 5) wherein the second sensor data comprises non-image data gathered by a non-imaging sensor apparatus {see column 8, lines 10-40; column 14, line 58—column 15, line 52; column 12, lines 14-37 while noting that the sensors include image data and non-imaging sensor data (sonar and/or radar)}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Ohgushi’s method to include wherein the comparing comprises comparing the first sensor data and the second sensor data and wherein the second sensor data comprises non-image data gathered by a non-imaging sensor apparatus as taught by Zhu because cross-validation between sensor modalities improves the accuracy, reliability, and safety of object detection systems for vehicle applications, because there is a reasonable expectation of success, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claim 6
In regards to claim 6, Ohgushi discloses a method of controlling a motor vehicle, the method comprising:
obtaining first sensor data; determining object recognition data based on the first sensor data, wherein the object recognition data is indicative at least of an object type and an object location of an object identified by the first sensor data, wherein the object type comprises at least one semantic descriptor {Fig. 1 semantic label estimation unit 14 inputs sensor data from in-vehicle camera 12 and generates object recognition data indicative of object type, location and including a semantic descriptor (label) for each pixel as per [0042]-[0043]};
generating comparative object data that comprises the object defined by the object recognition data {the output of the semantic label estimation unit is comparative object data including the semantically labelled object on a pixel basis};
generating from the at least one semantic descriptor synthetic object data that correspond with the semantic descriptor {see original image estimation unit 16 which generates synthetic object data (reconstructed original image) from the semantic label image, Fig. 1, [0044]};
comparing the comparative object data with the synthetic object data {Fig. 1, difference calculation unit 18, [0045]-[0046]};
generating a confirmation signal that indicates confirmation of the validity of the object recognition data or otherwise a rejection signal that causes the object recognition data to be rejected from further processing {road obstacle detection unit 20 determines that, for portions having a difference over a threshold, the object recognition data (semantic label) is incorrect (a rejection signal, such that the object recognition data from unit 14 is not used for the road obstacle detection; instead, the reconstructed image (synthetic) data is used to detect the road obstacle as per [0047]) or otherwise generates a confirmation signal}; and
Zhu teaches generating a control signal for the motor vehicle in response to receiving the confirmation signal and the object recognition data, wherein the control signal is based on the object recognition data {column 12, lines 21-37; column 19, line 18—column 20, line 33}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Ohgushi’s method to include generating a control signal for the motor vehicle in response to receiving the confirmation signal and the object recognition data, wherein the control signal is based on the object recognition data
as taught by Zhu because cross-validation between sensor modalities improves the accuracy, reliability, and safety of object detection systems for vehicle applications and control signals related thereto, because there is a reasonable expectation of success, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claim 7
In regards to claim 7, Ohgushi is not relied upon to disclose but Zhu teaches wherein the generating comprises generating a control signal in response to receiving the rejection signal {column 12, lines 21-37; column 19, line 18—column 20, line 33}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Ohgushi’s method to include wherein the generating comprises generating a control signal in response to receiving the rejection signal as taught by Zhu because cross-validation between sensor modalities improves the accuracy, reliability, and safety of object detection systems for vehicle applications and control signals related thereto, because there is a reasonable expectation of success, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claim 8
In regards to claim 8, Ohgushi is not relied upon to disclose but Zhu teaches
receiving second sensor data that includes information about a same portion of an environment around the motor vehicle in which the object is located {column 8, line 41—column 10, line 39}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Ohgushi’s method to include receiving second sensor data that includes information about a same portion of an environment around the motor vehicle in which the object is located as taught by Zhu because cross-validation between sensor modalities improves the accuracy, reliability, and safety of object detection systems for vehicle applications and control signals related thereto, because there is a reasonable expectation of success, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claims 9-11
The rejection of method claims 3-5 above applies mutatis mutandis to the corresponding limitations of method claims 9-11.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Antonides US 20220405578 A1 discloses multi-modal fusion including a GAN to generate synthetic sensor data. See [0064] and Fig. 1.
Karjalainen, Antti Ilari, Roshenac Mitchell, and Jose Vazquez. "Training and validation of automatic target recognition systems using generative adversarial networks." 2019 Sensor Signal Processing for Defence Conference (SSPD). IEEE, 2019.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael R Cammarata whose telephone number is (571)272-0113. The examiner can normally be reached M-Th 7am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached at 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL ROBERT CAMMARATA/ Primary Examiner, Art Unit 2667