Prosecution Insights
Last updated: April 19, 2026
Application No. 18/400,070

IMAGE CLASSIFICATION METHOD, MODEL TRAINING METHOD, DEVICE, STORAGE MEDIUM, AND COMPUTER PROGRAM

Status: Non-Final OA (§102)
Filed: Dec 29, 2023
Examiner: POPOVICI, DOV
Art Unit: 2681
Tech Center: 2600 — Communications
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (482 granted / 557 resolved ≈ 86.5%), +24.5% vs TC avg — above average
Interview Lift: +42.6% among resolved cases with an interview — a strong lift
Typical Timeline: 2y 11m average prosecution; 13 applications currently pending
Career History: 570 total applications across all art units

Statute-Specific Performance

§101: 12.0% (-28.0% vs TC avg)
§103: 24.1% (-15.9% vs TC avg)
§102: 32.3% (-7.7% vs TC avg)
§112: 24.9% (-15.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 557 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 17-18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yun Yang et al., "Two-Stage Selective Ensemble of CNN via Deep Tree Training for Medical Image Classification," IEEE Transactions on Cybernetics, vol. 52, no. 9, published March 11, 2021, pages 9194-9207, cited in the applicant-submitted IDS dated 09/18/2024.

As to claim 1, Yun Yang et al. discloses a device, wherein the device comprises: one or more processors (see page 9200, left column, i.e., an NVIDIA 1080 Ti GPU); and a memory coupled to the one or more processors (see page 9200, left column; the NVIDIA 1080 Ti GPU inherently includes a memory, VRAM), wherein the memory stores instructions, and when the instructions are executed by the device, the device is enabled to perform operations including: processing a target image by using a current neural network model, to obtain a current classification result output by the current neural network model, wherein the current classification result comprises a probability that the target image belongs to each of a plurality of categories (see figure 1, classifier 1 to classifier T among the selected classifiers and the final classifier; section III.B, page 9197, second column, "The output of h(m)(·) is a probability vector of instance Xi classified as different categories."; section III.C.3, "combination of selected classifiers"; and page 9198, second column, "final classifier via a weighted ensemble strategy"), the current neural network model is a neural network model i corresponding to a largest probability (see page 9197, second column, "highest probability") in a selection result output by a neural network model a, the selection result comprises probabilities corresponding to p neural network models in m trained neural network models, the p neural network models are after the neural network model a and are allowed to be used to process the target image, the p neural network models comprise the neural network model i, the m trained neural network models comprise the neural network model a, m is an integer greater than 1, p is an integer greater than or equal to 1, and p is less than m (see figure 1; sections III.B and III.C, selected classifiers, input classifiers, and accuracy-based selection at page 9198, second column, C.1 "Accuracy-Based Selection"); determining a current integration result based on the current classification result, wherein the current integration result comprises an integrated probability that the target image belongs to each of the plurality of categories (see section III.C.3, formula (12), combination of selected classifiers, a weighted model, weight for classifier; and section III.B, page 9197, second column); and determining a category of the target image based on the current integration result (see figure 1 and section III.C.3; page 9197, second column, "The output of h(m)(·) is a probability vector of instance Xi classified as different categories. The class label is then determined by the category with the highest probability.").

Regarding dependent claim 2, Yun Yang et al. discloses, wherein a probability of a first category in the current integration result comprises an average value of probabilities of the first category in a plurality of classification results, wherein the first category is one of the plurality of categories, and the plurality of classification results comprise a classification result output by the neural network model a and a classification result output by the neural network model i; or ("or" is interpreted to read on one of the two claimed limitations in claim 2; the second limitation is met by the prior art as follows) wherein the probability of the first category in the current integration result comprises a first probability of the first category in the classification result output by the neural network model i (see page 9197, second column, "where Oj(m) refers to the jth element of input vector O(m), and y denotes the possible category of sample Xi. Giving an example of classification task on Alzheimer's disease, h(m)(y='AD'|Xi) calculates the probability of that sample Xi is diagnosed as AD by branch classifier m. The output of h(m)(·) is a probability vector of instance Xi classified as different categories. The class label is then determined by the category with the highest probability."; see also page 9195, CNN models, and page 9199, combination of selected classifiers).

As to claim 17, Yun Yang et al. discloses a method comprising the same operations recited in claim 1: processing a target image by using a current neural network model selected as the model i corresponding to the largest probability in the selection result output by model a; determining a current integration result based on the current classification result; and determining a category of the target image based on the current integration result. The mapping and citations given for claim 1 above apply equally to claim 17.

Regarding dependent claim 18, which recites the limitations of claim 2 in method form, the mapping and citations given for claim 2 above apply equally ("or" is again interpreted to read on one of the two claimed limitations, the second of which is met by the prior art as quoted).

As to claim 20, Yun Yang et al. discloses a non-transitory computer-readable storage medium (see page 9200, left column; the NVIDIA 1080 Ti GPU inherently includes a memory, VRAM, such as a non-transitory computer-readable storage medium), wherein the storage medium stores instructions that, when executed by a computer device (see page 9200, left column), enable the computer device to perform the same operations recited in claim 1. The mapping and citations given for claim 1 above apply equally to claim 20.

Allowable Subject Matter

Claims 3-16 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding dependent claim 3, the closest prior art of record, namely Yun Yang et al. discussed above, does not disclose, teach, or suggest: wherein the determining the category of the target image based on the current integration result comprises: determining a category corresponding to a largest probability in the current integration result as the category of the target image; or wherein the current neural network model further outputs a current selection result, the current selection result comprises probabilities corresponding to d neural network models in the m trained neural network models, the d neural network models are after the neural network model i and are allowed to be used to process the target image, d is an integer greater than or equal to 1, and d is less than m; and the determining the category of the target image based on the current integration result comprises: based on that the current integration result meets a first convergence condition or the current neural network model meets a second convergence condition, determining the category corresponding to the largest probability in the current integration result as the category of the target image, as claimed in dependent claim 3. Claims 4-6 are objected to because they depend on objected-to claim 3.

Regarding dependent claim 7, the closest prior art of record does not disclose, teach, or suggest: wherein the operations further comprise: obtaining n sample images and n sample labels, wherein the n sample labels one-to-one correspond to the n sample images; determining, based on the n sample images and the n sample labels, a gradient of each parameter of each of m to-be-trained neural network models; and updating the m to-be-trained neural network models based on the gradient of each parameter of the m to-be-trained neural network models, to obtain the m trained neural network models, as recited in dependent claim 7. Claims 8-16 are objected to because they depend on objected-to claim 7.

Regarding dependent claim 19, which recites the limitations of claim 3 in method form, the closest prior art of record likewise does not disclose, teach, or suggest those limitations, as claimed in dependent claim 19.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Peng et al. (US 11,361,192 B2) teaches connecting at least two classification layers of the combined neural network model to the fully-connected layer, each classification layer corresponding to one of the at least two objects and outputting probabilities of the corresponding object belonging to different categories (see claim 1 at column 26).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOV POPOVICI, whose telephone number is (571) 272-4083. The examiner can normally be reached Monday-Friday, 8:00 am-4:30 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi M. Sarpong, can be reached at 571-270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DOV POPOVICI/
Primary Examiner, Art Unit 2681
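To make the rejected independent claims easier to follow, here is a minimal Python sketch of the cascaded-selection inference recited in claims 1, 17, and 20, with the averaging integration of claims 2/18 and the argmax category determination of claims 3/19. Everything here is an illustrative assumption rather than the applicant's implementation or the Yang et al. algorithm: the model interface returning (class_probs, selection_probs), the index mapping for the p later models, and the rule that a terminal model returns None instead of a selection result (the claims instead recite convergence conditions).

import numpy as np

def classify(image, models):
    """Sketch of the claimed cascaded inference (hypothetical interface).

    Each entry of `models` is assumed to map an image to a pair
    (class_probs, selection_probs): class_probs holds the probability
    that the image belongs to each category; selection_probs scores the
    p later models allowed to process the image next, or is None for a
    terminal model (an assumed termination rule).
    """
    idx = 0                                    # start with the first trained model
    class_probs, selection_probs = models[idx](image)
    results = [class_probs]                    # collected classification results

    while selection_probs is not None:
        # The next model i is the one with the largest probability in the
        # selection result output by the current model a (claim 1). We
        # assume selection_probs[k] scores model idx + 1 + k, so only
        # models *after* a are candidates (p < m).
        idx = idx + 1 + int(np.argmax(selection_probs))
        class_probs, selection_probs = models[idx](image)
        results.append(class_probs)

    # Integration result: the average value of the probabilities of each
    # category across the classification results (first alternative of
    # claims 2/18).
    integration = np.mean(results, axis=0)

    # Category of the target image: the category with the largest
    # integrated probability (first alternative of claims 3/19).
    return int(np.argmax(integration))

# Toy usage with two stand-in "models" over three categories.
model_a = lambda img: (np.array([0.2, 0.5, 0.3]), np.array([1.0]))  # selects the only later model
model_i = lambda img: (np.array([0.7, 0.2, 0.1]), None)             # terminal model
print(classify(None, [model_a, model_i]))  # mean is [0.45, 0.35, 0.20] -> category 0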
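Claim 7, indicated allowable, recites a joint training procedure: obtain n sample images with one-to-one labels, determine a gradient for every parameter of each of the m to-be-trained models, and update all m models from those gradients. The following PyTorch sketch fills in details the claim leaves open (a shared batch, a cross-entropy loss summed across models, one optimizer per model); these choices are assumptions for illustration, not the application's disclosed training method.

import torch
import torch.nn.functional as F

def train_step(models, optimizers, images, labels):
    """One update of m to-be-trained models (sketch of claim 7's steps).

    `images` holds n sample images and `labels` the n one-to-one sample
    labels. The summed cross-entropy loss is an assumption; the claim
    only requires per-parameter gradients and an update for each model.
    """
    for opt in optimizers:
        opt.zero_grad()
    # Backpropagation determines, based on the n samples, a gradient of
    # each parameter of each of the m models.
    loss = sum(F.cross_entropy(model(images), labels) for model in models)
    loss.backward()
    # Each model is then updated based on its parameters' gradients.
    for opt in optimizers:
        opt.step()
    return float(loss)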

Prosecution Timeline

Dec 29, 2023: Application Filed
Jan 28, 2026: Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603989: IMAGE DATA DECOMPRESSION
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12579599: X-RAY IMAGING DEVICE WITH A RECONFIGURABLE IMAGE PROCESSING MODULE
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12564479: METHOD FOR MANUFACTURING AN ORTHODONTIC APPLIANCE
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12557608: PROCESSING APPARATUS FOR FORMING A COATING FILM ON A SUBSTRATE HAVING A CAMERA AND A MIRROR MEMBER
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12555407: VITAL SIGNS MONITORING METHOD, DEVICES RELATED THERETO AND COMPUTER-READABLE STORAGE MEDIUM
Granted Feb 17, 2026 (2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 99% (+42.6%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 557 resolved cases by this examiner; grant probability is derived from the career allow rate.
