DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
An Amendment was received on 09/29/2025 along with associated Remarks. Claims 1-20 are pending, of which claims 1-20 were amended.
Response to Arguments
Applicant’s arguments, see Remarks, p. 16, filed 09/29/2025, with respect to the interpretation of claims 1-18 under 35 U.S.C. § 112(f) have been fully considered and, in light of the associated amendments to claims 1-18, are persuasive. Therefore, the interpretation is withdrawn. Because the withdrawal of the interpretation under 35 U.S.C. § 112(f) changes the scope and interpretation of the claims, the previously indicated allowable subject matter is withdrawn. An updated search was performed based on the amended claims, and prior art was identified, as discussed below.
Applicant’s arguments, see Remarks, p. 16, filed 09/29/2025, with respect to the rejection of claim 20 under 35 U.S.C. § 101 have been fully considered and, in light of the associated amendment, are persuasive. Therefore, the rejection of claim 20 is withdrawn.
Applicant’s arguments, see Remarks, pp. 17-21, filed 09/29/2025, with respect to the rejections of claims 19-20 under 35 U.S.C. § 102 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
No further arguments are presented, and all arguments have been addressed.
Claim Objections
Claim 16 is objected to because of the following informality: the amendment recites "the processor is further configured to configured to", which duplicates the phrase "configured to". Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-9, 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yamaguchi et al (US 2020/0042803) in view of Borges Oliveira et al (US 2020/0353622).
Regarding Claim 1, Yamaguchi et al teach an information processing apparatus (information processing apparatus 100; Fig 1-3 and ¶ [0054], [0060], [0085]), comprising:
a processor (information processing apparatus 100 includes processor 101; Fig 2, 4 and ¶ [0060]) configured to:
control a timing to collect a first plurality of learning image candidates (the processor 101 executes a control program (¶ [0061]) to control a transmitter 230, via the communication interfaces 104, 204, to transmit a plurality of image data from detector 210 to the information processing apparatus 100 periodically (e.g., daily or weekly) and at given times, such as when the vehicle is parked, the image data being received via acquirer 110; Fig 1-4 and ¶ [0071], [0081], [0085], [0089]), wherein
each of the collected first plurality of learning image candidates corresponds to a candidate for a first learning image in a relearning operation of a first recognition model (the plurality of image data is used by an object detection model, and the selector 140 selects image data used as the learning data for the object detection model; Fig 4 and ¶ [0089], [0123]); and
select at least one first learning image candidate from the collected first plurality of learning image candidates as the first learning image (the selector 140 selects image data as the learning data for the object detection model according to the degree of agreement obtained by the determining of the determiner 130; Fig 4 and ¶ [0123]), wherein
the at least one first learning image candidate is selected based on at least one of a feature of each of the collected first plurality of learning image candidates or a similarity of each of the collected first plurality of learning image candidates to a second learning image (the image data is selected when the degree of agreement obtained between a first object detection result (image data) and a second object detection result (point cloud data) is lower than a predetermined agreement value; Fig 4 and ¶ [0123]), and
the second learning image corresponds to a learning image accumulated in the information processing apparatus (the LiDAR point cloud data (second learning image) is accumulated with the camera image data and transmitted to the information processing apparatus 100, where it is accumulated when acquired by acquirer 110; Fig 4 and ¶ [0089]).
Yamaguchi et al teach that the first learning image is used in a learning operation (¶ [0123]) but do not teach that the first learning image is used in a relearning operation of a first recognition model.
Borges Oliveira et al is analogous art pertinent to the technological problem addressed in this application and teaches the first learning image is used in a relearning operation of a first recognition model (a trained image classifier is retrained based on images selected as training data sets, step 208; Fig 2 and ¶ [0026]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Yamaguchi et al with those of Borges Oliveira et al, including that the first learning image is used in a relearning operation of a first recognition model. By retraining the classifier, objects are readily identified and datasets are more comprehensive, thereby effectively and efficiently improving the model’s ability to navigate in a physical environment, as recognized by Borges Oliveira et al (¶ [0002]).
Regarding Claim 2, Yamaguchi et al in view of Borges Oliveira et al teach the information processing apparatus according to claim 1 (as described above), wherein the first recognition model corresponds to a model to recognize a specific recognition target around a first vehicle (Yamaguchi et al, the object detection model of information processing apparatus 100 is used to recognize objects around a vehicle 200; Fig 1-6 and ¶ [0097]-[0098]), the collected first plurality of learning image candidates includes a third image of surroundings of the first vehicle (Yamaguchi et al, a plurality of images of the surroundings of the vehicle 200 are collected, which would include a third image that may be a training image; Fig 1-4 and ¶ [0056], [0089], [0123]), an image sensor captures the third image (Yamaguchi et al, the camera 205 is used to acquire a plurality of image data; Fig 2 and ¶ [0080]-[0081]), and the image sensor is in the first vehicle (Yamaguchi et al, the camera 205 is a sensor mounted to vehicle 200; Fig 2 and ¶ [0080]).
Regarding Claim 3, Yamaguchi et al in view of Borges Oliveira et al teach the information processing apparatus according to claim 2 (as described above), wherein the processor (Yamaguchi et al, information processing apparatus 100 includes processor 101; Fig 2, 4 and ¶ [0060]) is further configured to control the timing based on at least one of a first place associated with a travel path of the first vehicle or an environment associated with the travel path of the first vehicle (Yamaguchi et al, the processor 101 executes a control program (¶ [0061]) to control a transmitter 230, via the communication interfaces 104, 204, to transmit a plurality of image data from detector 210 to the information processing apparatus 100, which may occur when the vehicle 200 is parked at the home of the vehicle driver (interpreted as a place associated with a travel path, i.e., an origin or destination, as the specification describes the travel path to include start to goal, ¶ [0054]); Fig 4 and ¶ [0071], [0081], [0085]), and the first vehicle is on the travel path (Yamaguchi et al, the image data is transmitted when the vehicle 200 is parked at the home of the vehicle driver; Fig 4 and ¶ [0085]).
Regarding Claim 5, Yamaguchi et al in view of Borges Oliveira et al teach the information processing apparatus according to claim 2 (as described above), wherein the processor (Yamaguchi et al, information processing apparatus 100 includes processor 101; Fig 2, 4 and ¶ [0060]) is further configured to: determine a decrease in reliability of a recognition result of the first recognition model at a time the first vehicle is on a travel path (Yamaguchi et al, the image data is analyzed by the object detection model according to the degree of agreement (reliability) between the first and second object detection results; Fig 4 and ¶ [0123]); and collect the first plurality of learning image candidates based on the determination of the decrease in the reliability of the recognition result (Yamaguchi et al, the selector 140 selects image data as the learning data for the object detection model according to the degree of agreement obtained by the determining of the determiner 130; Fig 4 and ¶ [0085], [0123]).
Regarding Claim 6, Yamaguchi et al in view of Borges Oliveira et al teach the information processing apparatus according to claim 2 (as described above), wherein the processor (Yamaguchi et al, information processing apparatus 100 includes processor 101; Fig 2, 4 and ¶ [0060]) is further configured to collect one of the first plurality of learning image candidates based on at least one of a change of the image sensor or a change of a position of the image sensor (Yamaguchi et al, the image data selected as the learning data is based on a degree of agreement between the first object detection results and the second object detection results, where the first and second object detection results are based on results from different sensors (a change of the image sensor) detecting objects in the region of interest; ¶ [0116], [0123]).
Regarding Claim 7, Yamaguchi et al in view of Borges Oliveira et al teach the information processing apparatus according to claim 2 (as described above), wherein the first vehicle receives a fourth image from outside of the first vehicle (Yamaguchi et al, the LiDAR 206 is a sensor mounted on the outside of vehicle 200 and used to acquire a plurality of point cloud image data; Fig 2 and ¶ [0073], [0081]), and the processor (Yamaguchi et al, information processing apparatus 100 includes processor 101; Fig 2, 4 and ¶ [0060]) is further configured to collect the received fourth image as one of the collected first plurality of learning image candidates (Yamaguchi et al, the selector 140 includes point cloud data as the learning data for the object detection model according to the degree of agreement obtained by the determining of the determiner 130; Fig 4 and ¶ [0123]-[0124]).
Regarding Claim 8, Yamaguchi et al in view of Borges Oliveira et al teach the information processing apparatus according to claim 1 (as described above), wherein the collected first plurality of learning image candidates includes at least one of a backlight region, a shadow region, a reflector, a region in which a similar pattern is repeated, a construction site, an accident site, rain, snow, smog, haze, or a region including a plurality of feature patterns, and the plurality of feature patterns are the same (Yamaguchi et al, the selector 140 selects image data as the learning data for the object detection model according to the degree of agreement obtained by the determining of the determiner 130 and can be based on the features containing the same feature pattern (matching of the same object between the camera and LiDAR image data); Fig 4 and ¶ [0123]).
Regarding Claim 9, Yamaguchi et al in view of Borges Oliveira et al teach the information processing apparatus according to claim 1 (as described above), wherein the processor (Yamaguchi et al, information processing apparatus 100 includes processor 101; Fig 2, 4 and ¶ [0060]) is further configured to:
collect a plurality of verification image candidates (Yamaguchi et al, a plurality of LiDAR point cloud data is collected and transmitted to the information processing apparatus 100 with the camera image data (first learning image) and is acquired by acquirer 110; Fig 4 and ¶ [0089]); and
select at least one verification image candidate from the collected plurality of verification image candidates as a first verification image, wherein each of the collected plurality of verification image candidates corresponds to a candidate for the first verification image for verification of the first recognition model (Yamaguchi et al, the point cloud data is matched to the associated image data and used for comparative analysis for object detection in the image data; Fig 5A-5C and ¶ [0089]-[0091]),
the at least one verification image candidate is selected based on a similarity of each of the collected plurality of verification image candidates to a second verification image (Yamaguchi et al, the LiDAR point cloud data (second verification image) is transmitted to the information processing apparatus 100 with the camera image data (first learning image) and is acquired by acquirer 110; Fig 4 and ¶ [0089]), and
the second verification image corresponds to a verification image accumulated in the information processing apparatus (the LiDAR point cloud data is accumulated with the camera image data and transmitted to the information processing apparatus 100 and accumulated when acquired by acquirer 110; Fig 4 and ¶ [0089]).
Regarding Claim 16, Yamaguchi et al in view of Borges Oliveira et al teach the information processing apparatus according to claim 1 (as described above), wherein the processor (Yamaguchi et al, information processing apparatus 100 includes processor 101; Fig 2, 4 and ¶ [0060]) is further configured to: recognize a specific recognition target via the first recognition model (Yamaguchi et al, the objects detected in a region 512 may be of expected shapes, such as persons, sidewalks, crosswalks, or other vehicles; Fig 8-10 and ¶ [0104]-[0110]); and estimate reliability of a first recognition result of the first recognition model based on the recognized specific recognition target (Yamaguchi et al, the degree of agreement is determined for the object in the given region 440 based on the image 400; Fig 11A, 11B and ¶ [0118]-[0119]).
Regarding Claim 17, Yamaguchi et al in view of Borges Oliveira et al teach the information processing apparatus according to claim 16 (as described above), wherein the processor (Yamaguchi et al, information processing apparatus 100 includes processor 101; Fig 2, 4 and ¶ [0060]) is further configured to estimate reliability of a second recognition result of a second recognition model, and the reliability is estimated based on statistics of the second recognition result (Yamaguchi et al, the point cloud data forms second object detection results and is based on a predetermined value (statistics of the expected object); Fig 4 and ¶ [0089], [0123]-[0124]).
Regarding Claim 18, Yamaguchi et al in view of Borges Oliveira et al teach the information processing apparatus according to claim 1 (as described above), wherein the processor is further configured to relearn the first recognition model via the first learning image (Borges Oliveira et al, a trained image classifier is retrained based on images selected as training data sets, step 208, with the image identified to have a low confidence score for classification accuracy, steps 204, 206; Fig 2 and ¶ [0024]-[0026]).
Regarding Claim 19, Yamaguchi et al in view of Borges Oliveira et al teach an information processing method (method of using information processing apparatus 100; Fig 1-4 and ¶ [0135]-[0142]), comprising:
by an information processing apparatus (information processing apparatus 100 includes processor 101; Fig 2, 4 and ¶ [0060]): performing steps identical to those of claim 1 (as described above).
Regarding Claim 20, Yamaguchi et al in view of Borges Oliveira et al teach a non-transitory computer-readable medium having stored thereon computer-executable instructions that, when executed by a computer (information processing apparatus 100 includes processor 101 to execute a control program stored in storage 103; Fig 2, 4 and ¶ [0060]-[0061]), cause the computer to execute operations, the operations comprising steps identical to those of claim 1 (as described above).
Allowable Subject Matter
Claims 4, 10-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim 4 recites:
The information processing apparatus according to claim 3, wherein the processor is further configured to collect at least one of the first plurality of learning image candidates in at least one of a second place, a vicinity of a third place, or a vicinity of a fourth place, wherein the third place is a newly installed construction site, each of the collected at least one of the first plurality of learning image candidates is associated with a respective one of the second place, the third place, and the fourth place, a second learning image candidate associated with the second place is absent in a second plurality of learning image candidates, the second plurality of learning image candidates corresponds to learning image candidates collected prior to the collection of the first plurality of learning image candidates, the fourth place corresponds to a place of an accident of a second vehicle, the accident of the second vehicle is prior to the collection of the first plurality of learning image candidates, the first vehicle and the second vehicle include a first vehicle control system and a second vehicle control system, respectively, and the first vehicle control system is similar to the second vehicle control system.
Claim 10 recites:
The information processing apparatus according to claim 9, wherein the processor is further configured to: relearn the first recognition model via the first learning image that has been collected; and update the first recognition model based on a comparison between a first recognition accuracy and a second recognition accuracy, wherein the first recognition accuracy corresponds to a recognition accuracy of a second recognition model for the first verification image, the second recognition accuracy corresponds to a recognition accuracy of a third recognition model for the first verification image, the second recognition model corresponds to the recognition model prior to the relearn of the first recognition model, and the third recognition model corresponds to the recognition model subsequent to the relearn of the first recognition model.
Claim 11 is dependent on claim 10 and is therefore allowable for similar reasons.
Claim 12 recites:
The information processing apparatus according to claim 9, wherein the processor is further configured to: recognize a specific recognition target for each pixel of a plurality of pixels of an input image; and estimate reliability of a first recognition result for the plurality of pixels based on the recognized specific recognition target, wherein the first recognition result corresponds to a result of the first recognition model for the plurality of pixels; and extract a region for the first verification image, wherein the region is extracted based on a comparison between the estimated reliability of the first recognition result and a threshold value that is dynamically set.
Claims 13-15 are dependent on claim 12 and are therefore allowable for similar reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Yokoi et al (US 2015/0071529) teach a learning image collection apparatus that collects a plurality of candidate images of different areas and determines a degree of similarity between candidate areas in a predetermined area based on size and feature characteristics.
Liao et al (US 2022/0188674) teach a method and system for generating different confidence levels of classifiers from defined target regions of interest and using the classifier data to train a second classifier.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN M BROUGHTON whose telephone number is (571)270-7380. The examiner can normally be reached Monday-Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATHLEEN M BROUGHTON/Primary Examiner, Art Unit 2661