DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of claims 1-3, 5-11, 13-19 in the reply filed on 01/14/2026 is acknowledged.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-7, 9-11, 14-15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Nakamura (US2023/0390935) in view of Hiasa (US2020/0285901).
Regarding claim 1, Nakamura teaches an apparatus comprising:
at least one processor; and a memory coupled to the at least one processor, the memory storing instructions that, when executed by the at least one processor (Fig. 2), cause the at least one processor to:
identify a partial image corresponding to an area in an image (paragraph 0073, an estimated image is outputted from a camera image; obviously the estimated image corresponds to an area in the camera image, or alternatively an area feature targeted for analysis) in which performance of inference by a learning model that performs predetermined inference on an input image is less than or equal to a threshold (S13=>S16 of Fig. 6, S24=>S30 of Fig. 8; paragraphs 0079, 0102);
collect a similar image similar to the identified partial image (Fig. 5, paragraph 0060, 0070, collects learning data; paragraph 0087, camera images of plurality of pieces of learning data coincide with the estimated image for a certain work scene; paragraphs 0099-0100, similar); and
based on additional images including the collected similar image, improve a result of the inference by the learning model targeted at a test environment different from a training environment of the learning model (paragraphs 0005-0007, 0010, 0101, different environmental condition).
Further to the obviousness analysis, Hiasa teaches, in a machine learning neural network, an estimated image being output from processing of an input image that corresponds to an area of a captured image (paragraphs 0005-0008).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Hiasa into the apparatus of Nakamura, in order to further the implementation of recognition tasks.
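By way of illustration only (not evidence of record, and not the structure of any cited reference; all function names and the similarity metric are hypothetical), the claimed flow of identifying a low-performance partial image and collecting similar images could be sketched as:

```python
# Illustrative sketch of the claimed operations: locate the area whose
# inference performance falls at or below a threshold, then collect
# similar images from a candidate pool for use as additional training data.
import numpy as np

def identify_low_confidence_region(confidence_map: np.ndarray,
                                   threshold: float):
    """Return the (row, col) position whose inference performance is
    less than or equal to the threshold, or None if no such area exists."""
    idx = np.unravel_index(np.argmin(confidence_map), confidence_map.shape)
    return idx if confidence_map[idx] <= threshold else None

def collect_similar(partial: np.ndarray, pool: list, top_k: int = 5) -> list:
    """Collect the images from a candidate pool most similar to the
    identified partial image (L2 distance as a stand-in similarity metric)."""
    dists = [float(np.linalg.norm(partial - c)) for c in pool]
    order = np.argsort(dists)[:top_k]
    return [pool[i] for i in order]
```

The collected images would then join the training set used to improve the model in the different test environment.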
Regarding claim 9, Nakamura and Hiasa teach a method (as explained in response to claim 1 above).
Regarding claim 17, Nakamura and Hiasa teach a non-transitory computer-readable storage medium storing a program for causing a computer to execute a method (as explained in response to claim 1 above).
Regarding claims 2, 10, and 18, Nakamura and Hiasa teach the limitations of claims 1, 9, and 17 as set forth above.
Nakamura and Hiasa teach wherein in a training process of the learning model, the partial image in which the performance of the inference on the input image by the learning model is less than or equal to the threshold is identified (Nakamura, paragraphs 0077-0080).
Regarding claims 3, 11, and 19, Nakamura and Hiasa teach the limitations of claims 1, 9, and 17 as set forth above.
Nakamura and Hiasa teach wherein the additional images include the collected similar image and the identified partial image (Nakamura, Fig. 5, data collection).
Regarding claims 6 and 14, Nakamura and Hiasa teach the limitations of claims 1 and 9 as set forth above.
Nakamura and Hiasa teach wherein the learning model detects a predetermined detection target in the input image and outputs a result of detecting the detection target as the result of the inference (Nakamura, paragraphs 0096-0101), and wherein results of detecting the detection target from a plurality of images successive in a chronological direction are compared with each other, and a partial image according to a result of detecting the detection target from an image having a tendency different from a tendency of a result of detecting the detection target from another image is identified (Nakamura, Fig. 5, paragraph 0111, a camera image is used as the information indicating the environment at the time of learning).
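For illustration only (hypothetical names, not drawn from the cited references), the recited comparison of detection results across chronologically successive images, flagging the image whose result has a tendency different from the others, could be sketched as:

```python
# Sketch: given per-frame detection counts from successive images,
# flag the frame whose result deviates most from the sequence's
# median tendency (a simple stand-in for the claimed comparison).
import numpy as np

def outlier_frame(detection_counts: list) -> int:
    """Index of the frame whose detection count deviates most from
    the median tendency of the chronologically successive frames."""
    counts = np.asarray(detection_counts, dtype=float)
    deviation = np.abs(counts - np.median(counts))
    return int(np.argmax(deviation))
```

The partial image would then be identified from the flagged frame's detection result.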
Regarding claims 7 and 15, Nakamura and Hiasa teach the limitations of claims 1 and 9 as set forth above.
Nakamura and Hiasa teach wherein the predetermined inference is learned by updating a parameter of the learning model using the additional images (Nakamura, paragraphs 0072-0082).
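As a purely illustrative sketch (a toy linear model standing in for any actual learning model of the references), updating a parameter using the additional images could look like:

```python
# Minimal sketch of updating a learning model's parameters with
# additional images (flattened to row vectors): plain least-squares
# gradient descent on a linear model, as a stand-in for retraining.
import numpy as np

def update_parameters(w: np.ndarray, images: np.ndarray,
                      targets: np.ndarray, lr: float = 0.1,
                      steps: int = 100) -> np.ndarray:
    """One retraining pass: gradient steps on the mean squared error
    over the additional images (rows of `images`)."""
    for _ in range(steps):
        grad = images.T @ (images @ w - targets) / len(images)
        w = w - lr * grad
    return w
```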
Claims 5, 8, 13, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Nakamura (US2023/0390935) in view of Hiasa (US2020/0285901) and Arvidsson (US2009/0276148).
Regarding claims 5 and 13, Nakamura and Hiasa teach the limitations of claims 1 and 9 as set forth above.
Nakamura and Hiasa teach wherein the learning model restores the input image, thereby outputting a restoration image as the result of the inference, and wherein a cumulative average image of images of areas in still states in a plurality of images successive in a chronological direction and the restoration image output as the result of the inference by the learning model are compared with each other (Nakamura, Fig. 5, paragraph 0111, a camera image is used as the information indicating the environment at the time of learning, a camera image closest to an average of a plurality of camera images among the plurality of camera images or a camera image corresponding to a median value of the plurality of camera images can be used based on information of each of pixels of the plurality of camera images, obviously images collected in Fig. 5 are in a chronological direction, and the average would be a running mean or cumulative average), thereby identifying the partial image in which the performance of the inference is less than or equal to the threshold (Nakamura, Fig. 6).
Arvidsson teaches using a moving average for each spatial region of an image and for detecting differences (paragraph 0026). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this teaching into the apparatus of Nakamura and Hiasa, in order to further the analysis of time-sequenced images as a matter of design preference.
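By way of illustration only (hypothetical function names, not the structure of any cited reference), the recited cumulative average over chronologically successive images and its per-region comparison against a restoration image could be sketched as:

```python
# Sketch: running (cumulative) average of successive frames, then a
# per-pixel comparison against the model's restoration image to flag
# regions whose difference exceeds a bound.
import numpy as np

def cumulative_average(frames: list) -> np.ndarray:
    """Running mean of successive frames: avg_n = avg_{n-1} + (x_n - avg_{n-1}) / n."""
    avg = np.zeros_like(frames[0], dtype=float)
    for n, frame in enumerate(frames, start=1):
        avg += (frame - avg) / n
    return avg

def differing_regions(avg: np.ndarray, restoration: np.ndarray,
                      bound: float) -> np.ndarray:
    """Boolean mask of pixels where the restoration image deviates
    from the cumulative average by more than `bound`."""
    return np.abs(avg - restoration) > bound
```

The flagged regions would correspond to the partial image in which the inference performance is less than or equal to the threshold.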
Regarding claim 8, Nakamura, Hiasa, and Arvidsson teach an apparatus (as explained in response to claim 5 above).
Regarding claim 16, Nakamura, Hiasa, and Arvidsson teach a method (as explained in response to claim 5 above).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIYU LU, whose telephone number is (571) 272-2837. The examiner can normally be reached on weekdays, 8:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ZHIYU LU
Primary Examiner
Art Unit 2669
/ZHIYU LU/Primary Examiner, Art Unit 2665 March 5, 2026