Prosecution Insights
Last updated: April 19, 2026
Application No. 17/665,032

LEARNING APPARATUS, LEARNING METHOD AND STORAGE MEDIUM THAT ENABLE EXTRACTION OF ROBUST FEATURE FOR DOMAIN IN TARGET RECOGNITION

Final Rejection — §103, §112
Filed: Feb 04, 2022
Examiner: SALOMON, PHENUEL S
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: Honda Motor Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 73%, above average (519 granted / 715 resolved; +17.6% vs TC avg)
Interview Lift: +18.3% on resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 23 applications currently pending
Career History: 738 total applications across all art units
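The headline figures above follow directly from the career counts. A quick arithmetic check in plain Python, assuming (as the projections section suggests) that the with-interview figure is simply the base allow rate plus the interview lift:

```python
granted, resolved = 519, 715           # career counts from the examiner record
allow_rate = 100 * granted / resolved  # career allow rate in percent

print(round(allow_rate, 1))      # 72.6, displayed as 73%
print(round(allow_rate + 18.3))  # 91, the with-interview projection
```

The 73% shown on the dashboard is the rounded form of 72.6%.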

Statute-Specific Performance

§101: 12.8% (-27.2% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)

Comparisons are against Tech Center average estimates. Based on career data from 715 resolved cases.

Office Action

Grounds of rejection: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This Office action is in response to the amendment filed on 11/27/2025. Claims 1-13 are pending and have been considered below.

3. The rejections of Claims 1-13 under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, are moot pursuant to the amendments. The rejections of Claims 1, 4 and 10-13 under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, are moot pursuant to the claim amendments. The rejections of Claims 1-8 and 10-13 under 35 U.S.C. 101 as directed to an abstract idea without significantly more are moot pursuant to the amendments and arguments.

Claim Rejections - 35 USC § 112

4. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4-5 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 4 recites the limitation "…the predetermined feature of the target…" in line 5. There is insufficient antecedent basis for this limitation in the claim. Therefore, dependent claim 5 is also rejected.

Claim Rejections - 35 USC § 103

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 7-13 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 2020/0357196) in view of ISHII (US 2020/0193285) and further in view of NARAYAN et al. (US 2023/0368544).

Claim 1. ZHANG discloses a learning apparatus comprising: one or more processors (fig. 6, item 601; [0007]); and a memory storing instructions which, when the instructions are executed by the one or more processors (fig. 6, items 602, 603; [0007]), cause the learning apparatus to execute processing of: a first neural network that extracts a first feature of a target in image data ("…first convolutional neural network 311 is configured for extracting a first feature Pm characterizing a part of a vehicle from the input image 320…") ([0028]); a second neural network that extracts a predetermined biased feature of the target in the image data and that is a trained neural network with a network structure different from the first neural network ("…the second convolutional neural network 312 is configured for extracting a second feature Dm characterizing a damage type of the vehicle from the input image 320…") ([0028]). The damage types include scratches, indentations, cracks, and the like (predetermined biased feature) ([0031]); and a learning support neural network that extracts the predetermined biased feature from the first feature extracted by the first neural network ("…the third convolutional neural network 313 is configured for integrating the first feature Pm and the second feature Dm into a third feature Fm…") ([0028], [0030]).

ZHANG does not explicitly disclose wherein the one or more processors cause the learning apparatus to train the learning support neural network so that a difference between the predetermined biased feature extracted from the first feature by the learning support neural network and the predetermined biased feature extracted from the image data by the second neural network is reduced, and train the first neural network so that the predetermined biased feature appearing in the first feature extracted by the first neural network is reduced based on the difference, wherein the trained first neural network is deployed in a vehicle and executes inference processing for an image captured in the vehicle.
However, ISHII discloses wherein the one or more processors cause the learning apparatus to train the learning support neural network so that a difference between the predetermined biased feature extracted from the first feature by the learning support neural network and the predetermined biased feature extracted from the image data by the second neural network is reduced, and train the first neural network so that the predetermined biased feature appearing in the first feature extracted by the first neural network is reduced (approaches a desired output) based on the difference (abstract). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhang in view of ISHII to incorporate the above cited features. One would have been motivated to do so in order to efficiently generate data which contribute to an improvement of learning.

However, NARAYAN discloses wherein the trained first neural network is deployed in a vehicle and executes inference processing for an image captured in the vehicle ([0010], [0013], [0016], [0020]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhang further in view of NARAYAN to incorporate the above cited features. One would have been motivated to do so in order to efficiently avoid erroneous data being processed.

Claim 2. ZHANG, ISHII and NARAYAN disclose the learning apparatus according to claim 1. ZHANG further discloses wherein the scale of the network structure of the second neural network is smaller than the scale of the network structure of the first neural network ([0030], [0033]) [wherein each network is designed to assess a different type of damage].

Claim 3. ZHANG, ISHII and NARAYAN disclose the learning apparatus according to claim 1. ZHANG further discloses wherein the first neural network and the second neural network comprise a respective kernel for extracting a local feature in an image, and the size of the kernel of the second neural network is smaller than the size of the kernel of the first neural network ([0033]).

Claim 4. ZHANG, ISHII and NARAYAN disclose the learning apparatus according to claim 1. ZHANG further discloses wherein the first neural network is a neural network that classifies the target by extracting the first feature of the target in the image data, and the second neural network is a neural network that classifies the target by extracting the predetermined feature of the target in the image data ("…the first feature Pm and the second feature Dm are connected in series and are then integrated by the third convolutional neural network 313 into the third feature Fm with a dimension of H*W*(Cp+Cd). For example, the dimension of the first feature Pm is 20*30*32 (namely, the pixel is 20*30, and the number of kinds of vehicle parts is 32) and the dimension of the second feature Dm is 20*30*6 (namely, the pixel is 20*30, and the number of damage types is 6)…") ([0033]).

Claim 5. ZHANG, ISHII and NARAYAN disclose the learning apparatus according to claim 4. ISHII further discloses wherein the one or more processors cause the learning apparatus to train the first neural network so that a difference between a classification result output from the first neural network and training data for the target is reduced while training the first neural network so that the predetermined biased feature appearing in the first feature extracted by the first neural network is reduced (abstract). One would have been motivated to do so in order to efficiently generate data which contribute to an improvement of learning.

Claim 7.
ZHANG, ISHII and NARAYAN disclose the learning apparatus according to claim 1. ZHANG further discloses wherein the second neural network is a trained neural network for extracting the second feature of the target in the image data ([0040]).

Claim 8. ZHANG, ISHII and NARAYAN disclose the learning apparatus according to claim 1. ZHANG further discloses wherein the learning apparatus is an information processing server ([0037]).

Claim 9. ZHANG, ISHII and NARAYAN disclose the learning apparatus according to claim 1. NARAYAN further discloses wherein the learning apparatus is a vehicle ([0007]). One would have been motivated to do so in order to reduce the computational load (number of calculations/operations) and reliance on large datasets.

Claim 10. Supra claim 1, and ZHANG further discloses wherein the one or more processors further cause the learning apparatus to compare the feature including the bias factor extracted from the image data by the second neural network with the feature including the bias factor extracted by the learning support neural network from the first feature extracted by the first neural network, and to output a loss ("…the fifth convolutional neural network 315 is configured for determining a damage recognition result of the vehicle (namely, the output result 330)") ([0028], [0031], [0046]).

Claim 11. Supra claim 1, and ZHANG does not explicitly disclose wherein the one or more processors cause the learning apparatus to train the learning support neural network so that a difference between the biased feature extracted by the learning support neural network from the features extracted by the first neural network and the biased feature extracted from the image data by the second neural network is reduced, and train the first neural network so as to extract features from the image data that make the difference increase in a result of the extraction by the learning support neural network.

However, ISHII discloses wherein the one or more processors cause the learning apparatus to train the learning support neural network so that a difference between the biased feature extracted by the learning support neural network from the features extracted by the first neural network and the biased feature extracted from the image data by the second neural network is reduced, and train the first neural network so as to extract features from the image data that make the difference increase in a result of the extraction by the learning support neural network (approaches a desired output) (abstract). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhang in view of ISHII to incorporate the above cited features. One would have been motivated to do so in order to efficiently generate data which contribute to an improvement of learning.

Claims 12 and 13 represent the method and medium of claim 1 and are rejected along the same rationale.

6. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 2020/0357196) in view of ISHII (US 2020/0193285), in view of NARAYAN et al. (US 2023/0368544), and further in view of ZHAO et al. (US 2022/0198339).

Claim 6. ZHANG, ISHII and NARAYAN disclose the learning apparatus according to claim 1, but fail to explicitly disclose wherein the one or more processors cause the learning apparatus to utilize a GRL (gradient reversal layer) to vary weight coefficients of the first neural network and weight coefficients of the learning support neural network in association with each other. However, ZHAO discloses a GRL (gradient reversal layer) to vary weight coefficients of the first neural network and weight coefficients of the learning support neural network in association with each other ([0115]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhang further in view of ZHAO to incorporate the above cited features. One would have been motivated to do so in order to streamline data collection and processing cost.

Response to Arguments

7. Applicant's arguments filed 11/27/2025 have been fully considered but are moot in light of the new ground(s) of rejection.

Conclusion

8. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

9. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see PTO-892).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Phenuel S. Salomon, whose telephone number is (571) 270-1699. The examiner can normally be reached Mon-Fri, 7:00 A.M. to 4:00 P.M. EST (alternate Fridays off). If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Usmaan Saeed, can be reached at (571) 272-4046. The fax number for the organization where this application or proceeding is assigned is 571-273-3800.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHENUEL S SALOMON/
Primary Examiner, Art Unit 2146
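The gradient reversal layer (GRL) cited against claim 6 is the standard adversarial trick: the bias head does ordinary gradient descent on its prediction loss, while the feature extractor receives that same gradient with the sign flipped, so the biased feature becomes harder to recover from the extracted features. A minimal single-step sketch in plain Python, using linear stand-ins for the first network and the learning support head; the function name, shapes, and numbers are illustrative, not taken from the cited references:

```python
def grl_step(W, v, x, t, lam=1.0, lr=0.1):
    """One adversarial update with a gradient reversal layer (GRL).

    W: weights of a linear stand-in for the first (feature) network
    v: weights of a linear stand-in for the learning support (bias) head
    x: input vector; t: bias target; lam: GRL scaling coefficient.
    """
    # Forward pass: feature z = W x, bias prediction b = v . z
    z = [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]
    b = sum(v[i] * z[i] for i in range(len(v)))
    err = b - t  # the "difference" the bias head tries to reduce

    # Bias head: plain gradient DESCENT on the squared difference (b - t)^2
    v_new = [v[i] - lr * (2 * err * z[i]) for i in range(len(v))]

    # Feature extractor: the GRL flips the gradient's sign (scaled by lam),
    # so W climbs the bias loss and the biased feature in z is suppressed
    grad_W = [[2 * err * v[i] * x[j] for j in range(len(x))] for i in range(len(W))]
    W_new = [[W[i][j] + lr * lam * grad_W[i][j] for j in range(len(x))]
             for i in range(len(W))]
    return W_new, v_new

# Single illustrative step on toy values
W_new, v_new = grl_step([[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0], [1.0, 2.0], t=0.0)
```

In framework terms, the GRL is implemented as an identity in the forward pass and a sign flip in the backward pass, with `lam` playing the role of the reversal scaling coefficient; here the reversal is written out by hand as the `+ lr * lam * grad_W` update.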

Prosecution Timeline

Feb 04, 2022
Application Filed
Aug 23, 2025
Non-Final Rejection — §103, §112
Nov 25, 2025
Examiner Interview Summary
Nov 25, 2025
Applicant Interview (Telephonic)
Nov 27, 2025
Response Filed
Mar 05, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602348
DATA ACTOR AND DATA PROCESSING METHOD THEREOF
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597486
DISEASE PREDICTION METHOD, APPARATUS, AND COMPUTER PROGRAM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586004
METHODS OF PREDICTING RELIABILITY INFORMATION OF STORAGE DEVICES AND METHODS OF OPERATING STORAGE DEVICES
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12572827
ARTIFICIAL INTELLIGENCE (AI) MODEL DEPLOYMENT
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12572249
DISPLAY DEVICE, EVALUATION METHOD, AND EVALUATION SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants in similar technology.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 91% (+18.3%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate

Based on 715 resolved cases by this examiner. Grant probability derived from career allow rate.
