Prosecution Insights
Last updated: April 19, 2026
Application No. 18/096,905

APPARATUS FOR SELECTING A TRAINING IMAGE OF A DEEP LEARNING MODEL AND A METHOD THEREOF

Status: Final Rejection (§103)
Filed: Jan 13, 2023
Examiner: SHIN, SOO JUNG
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: Kia Corporation
OA Round: 2 (Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 2y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87%, above average (527 granted / 604 resolved; +25.3% vs TC average)
Interview Lift: strong, +16.0% across resolved cases with an interview
Typical Timeline: 2y 4m average prosecution; 28 applications currently pending
Career History: 632 total applications across all art units

Statute-Specific Performance

Statute | Rejection Rate | vs TC Avg
§101    | 7.6%           | -32.4%
§103    | 37.5%          | -2.5%
§102    | 19.9%          | -20.1%
§112    | 24.2%          | -15.8%
Tech Center averages are estimates. Based on career data from 604 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Response to Amendment

The amendment filed on 14 August 2025 has been entered. The amendment of claims 1 and 11, cancellation of claims 2, 3, 12, and 13, and addition of claim 21 have been acknowledged. In view of the amendment, the 35 U.S.C. 102(a)(2) rejections have been withdrawn.

Response to Arguments

Applicant's arguments filed on 14 August 2025, with respect to the pending claims, have been fully considered but they are not persuasive. Applicant's Representative submits that the prior art of record does not teach "determining validity of the training image based on the detected similarity" because Oshima fails to disclose determining whether a training image is valid, and Xu does not cure the deficiency of Oshima because Xu determines the similarity between certain regions of images. The examiner respectfully disagrees. The claim recites "determine a similarity between a structure of the object in the simulation image and a structure of an object in the training image" (emphasis added). The claim language does not require the entire images to be compared. Oshima teaches comparing to a similarity threshold to determine whether the feature extraction should be re-trained, i.e., the previous set of training images is deemed invalid for not meeting the similarity requirement (Oshima ¶¶0053, ¶¶0056), and Xu teaches that a training image is invalid when the similarity does not exceed a threshold value (Xu ¶¶0013, ¶¶0122).

Claim Rejections - 35 USC § 103

Claims 1, 4-11, and 14-21 are rejected under 35 U.S.C. 103 as being unpatentable over Oshima et al. (US 2022/0391698 A1) in view of Xu et al. (US 2019/0333219 A1), hereinafter referred to as Oshima and Xu, respectively.
Regarding claim 1, Oshima teaches an apparatus for selecting a training image of a deep learning model (Oshima Abstract: "a training recognition device that implements training of a DNN for article recognition"), the apparatus comprising:

an input device configured to receive a simulation image and information about an object in the simulation image from a simulation tool, and receive a training image corresponding to the simulation image from an image conversion device (Oshima ¶¶0009: "an image conversion unit that inputs a simulation image and an actual site image into a generative adversarial network and converts the simulation image into an artificial site image"); and

a controller configured to detect a similarity between a structure of the object in the simulation image and a structure of an object in the training image and determine validity of the training image based on the detected similarity (Oshima ¶¶0009: "re-trains a difference between the simulation image and the artificial site image, and outputs a feature point of the artificial site image"; Oshima ¶¶0035: "The re-training of the re-training feature extraction unit 12 is performed using a group of the feature point (ideal features). That is, a difference between a feature point output from the re-training feature extraction unit 12 and a feature point (ideal feature) output from the pre-trained feature extraction unit 15 is calculated by the error calculation unit for feature extraction unit 16 … a feature point similar to the feature point of the simulation image is obtained as the output of the re-training feature extraction unit 12"), determine that the recognition result is invalid when the detected similarity does not exceed a threshold value (Oshima ¶¶0053: "When the calculated current recognition accuracy is equal to or higher than a target value, the re-training of the DNN for article recognition (that is, the re-training feature extraction unit and the re-training identification unit) is completed. On the other hand, when the current recognition accuracy is less than the target value, the process shifts to the re-training of the second layer of the re-training feature extraction unit"; Oshima ¶¶0056: "when the current recognition accuracy is less than the target value, the feature point output by the re-training feature extraction unit is determined to be still deviated from the ideal feature (ideal feature (final layer)) only by the re-training of the first layer and second layer"), and store the training image in a storage (Oshima Fig. 1: 12, 15, 17; Oshima ¶¶0028: "Various data stored in the present device or system or used for processing can be implemented by being read and used by the CPU 1601 from the memory 1602 or the external storage device 1603"; Oshima ¶¶0034: "the DNN is trained in advance using the image generated by the simulation and the correct answer information, and a portion of the feature extraction unit in a preceding stage of a pre-trained DNN model is applied to the pre-trained feature extraction unit 15. The re-training feature extraction unit 12 also includes the same network structure as that of the pre-trained feature extraction unit 15").

However, Oshima does not appear to explicitly teach determining that the training image is invalid when the detected similarity does not exceed a threshold value, and storing the training image in a storage when the detected similarity exceeds the threshold value.
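To make the claimed flow concrete, here is a minimal sketch (not code from the application or from the cited art) of the validity check the claim recites, assuming scikit-image's structural_similarity as the similarity measure; the object region, threshold, and test data are hypothetical stand-ins.

```python
# Illustrative sketch of the claimed flow: compare the object's structure in
# the simulation image against the same region of the GAN-converted training
# image, and treat the training image as valid only if the similarity score
# exceeds a threshold. The region box, threshold, and data are hypothetical.
import numpy as np
from skimage.metrics import structural_similarity

def training_image_is_valid(sim_img: np.ndarray,
                            train_img: np.ndarray,
                            object_box: tuple[int, int, int, int],
                            threshold: float = 0.8) -> bool:
    """Return True when the object's structure in both images is similar."""
    x0, y0, x1, y1 = object_box            # object region from the simulation tool
    sim_obj = sim_img[y0:y1, x0:x1]
    train_obj = train_img[y0:y1, x0:x1]    # same region in the converted image
    score = structural_similarity(sim_obj, train_obj, data_range=1.0)
    return score > threshold               # invalid when it does not exceed

rng = np.random.default_rng(0)
sim = rng.random((128, 128))
train = np.clip(sim + rng.normal(0, 0.05, sim.shape), 0, 1)  # mild perturbation
valid = training_image_is_valid(sim, train, (32, 32, 96, 96))
print("store training image" if valid else "discard training image")
```

The storage step would follow a True result; the point of the sketch is the thresholded, region-level comparison, not any particular storage mechanism.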
Pertaining to the same field of endeavor, Xu teaches determining that the training image is invalid when the detected similarity does not exceed a threshold value, and storing the training image in a storage when the detected similarity exceeds the threshold value (Xu ¶¶0013: "the CycleGAN is trained to apply a threshold to the metric such that when the similarity level exceeds the threshold, the metric is applied to the first and second pixel based loss terms and otherwise a zero value is applied to the first and second pixel based loss terms"; Xu ¶¶0122: "when the weight values of SSIM(x,y) are less than a threshold α (a hyper-parameter), all those weights may be set to be zero (e.g., the weight may be ignored, thereby decreasing the likelihood that the loss term is reduced or minimized)"; Xu Fig. 1: 150).

Oshima and Xu are considered to be analogous art because both are directed to image processing using a DNN. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the training recognition device and method (as taught by Oshima) to determine that the training image is invalid when the similarity value does not exceed a threshold value (as taught by Xu) because the combination decreases the likelihood that the loss term is reduced or minimized (Xu ¶¶0122).

Regarding claim 4, Oshima, in view of Xu, teaches the apparatus of claim 1, wherein the controller is configured to determine that the training image is invalid when similarities are detected in a plurality of objects and at least one of the similarities of the plurality of objects does not exceed a threshold value (Xu ¶¶0013 & ¶¶0122 discussed above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the training recognition device and method (as taught by Oshima) to determine that the training image is invalid when the similarity value does not exceed a threshold value (as taught by Xu) because the combination decreases the likelihood that the loss term is reduced or minimized (Xu ¶¶0122).

Regarding claim 5, Oshima, in view of Xu, teaches the apparatus of claim 1, wherein the controller is configured to determine that the training image is valid (Oshima ¶¶0009 & ¶¶0035 discussed above), and store the training image in a storage when similarities are detected in a plurality of objects (Oshima ¶¶0028 & ¶¶0034 discussed above), and all the similarities of the plurality of objects exceed a threshold value (Xu ¶¶0013 & ¶¶0122 discussed above teach that the values not exceeding the threshold value are set to zero, resulting in all of the retained similarity values exceeding the threshold value). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the training recognition device and method (as taught by Oshima) to determine that the training image is invalid when the similarity value does not exceed a threshold value (as taught by Xu) because the combination decreases the likelihood that the loss term is reduced or minimized (Xu ¶¶0122).
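For reference, a minimal sketch of the thresholding behavior Xu's ¶¶0122 describes: per-pixel SSIM weights below a threshold α are zeroed before they scale a pixelwise L1 loss, so low-similarity regions cannot shrink the loss. The value of α and the test images are hypothetical.

```python
# Sketch of Xu-style thresholded SSIM weighting (cf. ¶¶0013, ¶¶0122):
# weights below alpha are set to zero, then the remaining weights scale a
# pixelwise L1 term. alpha and the inputs are illustrative stand-ins.
import numpy as np
from skimage.metrics import structural_similarity

def ssim_weighted_l1(x: np.ndarray, y: np.ndarray, alpha: float = 0.5) -> float:
    # full=True returns the per-pixel SSIM map alongside the mean score
    _, ssim_map = structural_similarity(x, y, data_range=1.0, full=True)
    weights = np.where(ssim_map < alpha, 0.0, ssim_map)  # zero sub-threshold weights
    return float(np.mean(weights * np.abs(x - y)))       # weighted pixelwise L1

rng = np.random.default_rng(1)
x = rng.random((64, 64))
y = np.clip(x + rng.normal(0, 0.1, x.shape), 0, 1)
print(f"SSIM-weighted L1 loss: {ssim_weighted_l1(x, y):.4f}")
```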
Regarding claim 6, Oshima, in view of Xu, teaches the apparatus of claim 1, wherein the controller is configured to determine a region of a first object in the simulation image and a region of a second object in the training image based on information on the object in the simulation image (Xu ¶¶0053: "computing system 110 may instruct a CBCT device to obtain an image of a target region of a subject (e.g., a brain region)"), and detect a structural similarity between the first object and the second object (Xu ¶¶0105: "The training input is provided to the GAN model training 430 to produce a trained generator model 460 used in the GAN model usage 450. Mappings of anatomical areas 424 and 425 provide the metric used to compare similarities between two images (e.g., using SSIM weights)"; Xu ¶¶0119: "the metric used to measure relationship between the two images is an SSIM weight … SSIM weighted sCT-CT L1 term: E_{x~p(CBCT), y~p(CT)} SSIM(x,y)·||G′_1(x) − y||_1, and SSIM weighted sCBCT-CBCT L1 term: E_{x~p(CBCT), y~p(CT)} SSIM(x,y)·||G′_2(y) − x||_1"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the training recognition device and method (as taught by Oshima) to calculate SSIM (as taught by Xu) because the SSIM metric can compare similarities between two images (Xu ¶¶0105) and compensate for imperfect matching of paired images (Xu ¶¶0119).

Regarding claim 7, Oshima, in view of Xu, teaches the apparatus of claim 1, wherein the controller is configured to detect a similarity between the structure of the object in the simulation image and the structure of the object in the training image based on a structural similarity index measure (SSIM) (Xu ¶¶0105 & ¶¶0119 discussed above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the training recognition device and method (as taught by Oshima) to calculate SSIM (as taught by Xu) because the SSIM metric can compare similarities between two images (Xu ¶¶0105) and compensate for imperfect matching of paired images (Xu ¶¶0119).

Regarding claim 8, Oshima, in view of Xu, teaches the apparatus of claim 7, wherein the controller is configured to assign a weight to a structural comparison term of the SSIM (Xu ¶¶0105 & ¶¶0119 discussed above).

Regarding claim 9, Oshima, in view of Xu, teaches the apparatus of claim 1, wherein the simulation tool is configured to generate the simulation image based on various scenarios (Oshima ¶¶0021: "The invention can be implemented in various other forms"), and generate information about objects in the simulation image (Oshima ¶¶0032: "An image for re-training (simulation image) generated by simulation and correct answer information (annotation data) accompanying the image are supplied at the time of re-training").
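For context on the SSIM measure cited for claims 7 and 8 above, the standard definition (textbook SSIM per Wang et al., not a formula from the application) factors into luminance, contrast, and structure comparison terms:

\[
\mathrm{SSIM}(x,y) = l(x,y)^{\alpha}\, c(x,y)^{\beta}\, s(x,y)^{\gamma}
\]
\[
l(x,y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \qquad
c(x,y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \qquad
s(x,y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}
\]

With \(\alpha = \beta = \gamma = 1\) and \(C_3 = C_2/2\) this reduces to the familiar single-fraction SSIM. "Assigning a weight to a structural comparison term" (claim 8) plausibly corresponds to adjusting the exponent \(\gamma\), or an equivalent multiplier, on \(s(x,y)\).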
Regarding claim 10, Oshima, in view of Xu, teaches the apparatus of claim 1, wherein the image conversion device is configured to convert the simulation image into the training image based on a generative adversarial network (GAN) (Oshima Abstract: "The training recognition device includes: an image conversion unit that inputs a simulation image and an actual site image into a generative adversarial network and converts the simulation image into an artificial site image").

Regarding claim 11, Oshima, in view of Xu, teaches that the apparatus performs a method (Oshima Abstract: "re-trains a method for identifying an article"). Therefore, claim 11 is rejected using the same rationale as applied to claim 1 discussed above.

Claim 14 is rejected using the same rationale as applied to claim 4 discussed above. Claim 15 is rejected using the same rationale as applied to claim 5 discussed above. Claim 16 is rejected using the same rationale as applied to claim 6 discussed above. Claim 17 is rejected using the same rationale as applied to claim 7 discussed above. Claim 18 is rejected using the same rationale as applied to claim 8 discussed above. Claim 19 is rejected using the same rationale as applied to claim 9 discussed above. Claim 20 is rejected using the same rationale as applied to claim 10 discussed above.

Claim 21 is rejected using the same rationale as applied to claims 1 and 11 discussed above (also refer to Oshima Fig. 1: 12, 15, 17 & Oshima ¶¶0060: "instead of comprehensively performing re-training from the preceding layer including the first layer, the second layer, and the third layer of the feature extraction unit, the re-training may be performed only for effective layers from the preceding layer, such as the first layer, the third layer, and a sixth layer. The effective layer may be selected, for example, based on an experience value of a user, or may be selected according to a past training result").

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOO J SHIN, whose telephone number is (571) 272-9753. The examiner can normally be reached M-F, 10-6. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Soo Shin/
Primary Examiner, Art Unit 2667
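Stepping back from the Office Action text: the image conversion device of claims 10 and 20 converts a simulation image into a training image with a GAN. Below is a toy sketch of that interface only; the generator architecture and names are illustrative inventions, not from Oshima or the application (in practice a trained CycleGAN-style generator would stand in for the toy network).

```python
# Hypothetical sketch of the claim 10/20 image conversion device: a GAN
# generator mapping a simulation image to a realistic training image.
import torch
import torch.nn as nn

class SimToRealGenerator(nn.Module):
    """Toy convolutional generator; a trained CycleGAN-style model would
    stand here in practice."""
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1), nn.Tanh(),  # image in [-1, 1]
        )

    def forward(self, sim_img: torch.Tensor) -> torch.Tensor:
        return self.net(sim_img)

generator = SimToRealGenerator()
sim_img = torch.rand(1, 3, 256, 256)        # simulation image from the tool
with torch.no_grad():
    training_img = generator(sim_img)       # candidate training image
print(training_img.shape)                   # torch.Size([1, 3, 256, 256])
```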

Prosecution Timeline

Jan 13, 2023: Application Filed
May 08, 2025: Non-Final Rejection (§103)
Aug 14, 2025: Response Filed
Sep 15, 2025: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602768: SURFACE DEFECT DETECTION MODEL TRAINING METHOD, AND SURFACE DEFECT DETECTION METHOD AND SYSTEM (2y 5m to grant; granted Apr 14, 2026)
Patent 12586411: TARGET IDENTIFICATION DEVICE, ELECTRONIC DEVICE, TARGET IDENTIFICATION METHOD, AND STORAGE MEDIUM (2y 5m to grant; granted Mar 24, 2026)
Patent 12586204: Detecting Optical Discrepancies In Captured Images (2y 5m to grant; granted Mar 24, 2026)
Patent 12586216: METHOD OF DETERMINING A MOTION OF A HEART WALL (2y 5m to grant; granted Mar 24, 2026)
Patent 12573021: ULTRASONIC DEFECT DETECTION AND CLASSIFICATION SYSTEM USING MACHINE LEARNING (2y 5m to grant; granted Mar 10, 2026)
Study what changed in these cases to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 99% (+16.0%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 604 resolved cases by this examiner. Grant probability derived from career allow rate.
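The headline figures are consistent with simple arithmetic on the career counts shown above; the sketch below reproduces them, with the 99% cap on the with-interview figure being an assumed display rule rather than anything the page documents.

```python
# Reproducing the dashboard's headline numbers from its own counts.
# The 0.99 cap on the with-interview figure is an assumed display rule.
granted, resolved = 527, 604
allow_rate = granted / resolved                    # 0.8725... -> displayed as 87%
interview_lift = 0.16                              # +16.0% lift reported above
with_interview = min(allow_rate + interview_lift, 0.99)
print(f"baseline {allow_rate:.1%}, with interview {with_interview:.0%}")
# -> baseline 87.3%, with interview 99%
```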
