Prosecution Insights
Last updated: April 19, 2026
Application No. 18/441,229

IMAGE CLASSIFICATION METHOD AND RELATED DEVICE THEREOF

Non-Final OA — §101, §102, §103
Filed
Feb 14, 2024
Examiner
VAZ, JANICE EZVI
Art Unit
2667
Tech Center
2600 — Communications
Assignee
Huawei Technologies Co., Ltd.
OA Round
1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% — above average (48 granted / 62 resolved; +15.4% vs TC avg)
Interview Lift: +27.5% — strong (allowance with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 3y 1m average prosecution; 21 applications currently pending
Career History: 83 total applications across all art units

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§102: 36.5% (-3.5% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 62 resolved cases

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or

STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:

STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

Using this two-step inquiry, it is clear that claims 1-20 are directed to an abstract idea, as shown below:

STEP 1: Do the claims fall within one of the statutory categories (i.e. a process, a machine such as a system, or a manufacture such as a computer readable medium)? YES. Claims 1-7 are directed to a method, Claims 8-14 are directed to an apparatus, and Claims 15-20 are directed to a non-transitory computer readable medium.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?
YES, the claims are directed towards an abstract idea.

With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
- Mathematical concepts — mathematical relationships, mathematical formulas or equations, mathematical calculations;
- Certain methods of organizing human activity — fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations);
- Mental processes — concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).

The claim(s) recite(s):

Regarding Claim 1, representative of Claims 8 and 15, reciting an image classification method, wherein the method is implemented by using a transformer network, and the method comprises: obtaining M first features of a target image, wherein M ≥ 1 (insignificant extra-solution/data gathering step); performing linear transformation processing based on a kth first feature to obtain a kth second feature, a kth third feature, and a kth fourth feature, wherein k=1, ..., M (mathematical concepts); calculating a distance between the kth second feature and the kth third feature to obtain a kth fifth feature (mathematical concepts); performing first fusion processing based on the kth fifth feature and the kth fourth feature to obtain a kth sixth feature (mathematical concepts); and obtaining a classification result of the target image based on M sixth features (extra-solution/field of use — see STEP 2A, PRONG 2).
Regarding Claim 2, representative of Claims 9 and 16, reciting the method according to claim 1, wherein the calculating a distance between the kth second feature and the kth third feature to obtain a kth fifth feature comprises: calculating the distance between the kth second feature and the kth third feature based on an addition operation to obtain the kth fifth feature (mathematical concepts).

Regarding Claim 3, representative of Claims 10 and 17, reciting the method according to claim 2, wherein the kth second feature comprises N row vectors, the kth third feature comprises N row vectors, and calculating the distance between the kth second feature and the kth third feature based on an addition operation to obtain the kth fifth feature comprises: performing subtraction processing on a jth row vector of the kth second feature and an ith row vector of the kth third feature to obtain a pth first intermediate vector, wherein j=1, …, N, i=1, …, N, and p=1, …, N×N (mathematical concepts); performing addition processing on all elements of the pth first intermediate vector to obtain an element in a jth row and an ith column of a kth seventh feature (mathematical concepts); and performing scaling processing and normalization processing on the kth seventh feature to obtain the kth fifth feature (mathematical concepts).
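The addition-based "distance" of Claims 2-3 can be sketched in numpy. This is an illustrative reconstruction by the editor, not code from the application: the function name is invented, softmax is assumed for the "normalization processing", and the claim wording is followed literally (summing the signed elements of each difference vector), even though related adder-network literature typically sums absolute differences instead.

```python
import numpy as np

def addition_based_distance(second, third):
    # second (Q) and third (K): (N, d) matrices of N row vectors each.
    N, d = second.shape
    seventh = np.empty((N, N))
    for j in range(N):
        for i in range(N):
            # p-th first intermediate vector: subtract row i of K from row j of Q
            intermediate = second[j] - third[i]
            # add all elements of the intermediate vector -> element (j, i)
            seventh[j, i] = intermediate.sum()
    # scaling and normalization (softmax assumed) yield the k-th fifth feature
    scaled = seventh / np.sqrt(d)
    scaled -= scaled.max(axis=1, keepdims=True)  # numerical stability
    fifth = np.exp(scaled)
    return fifth / fifth.sum(axis=1, keepdims=True)
```

Each row of the result is a normalized weight vector over the N positions, playing the role the softmaxed attention matrix plays in a standard transformer.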
Regarding Claim 4, representative of Claims 11 and 18, reciting the method according to claim 1, wherein the performing first fusion processing based on the kth fifth feature and the kth fourth feature to obtain a kth sixth feature comprises: processing an element of the kth fifth feature and an element of the kth fourth feature based on an addition operation to obtain the kth sixth feature (mathematical concepts).

Regarding Claim 5, representative of Claims 12 and 19, reciting the method according to claim 4, wherein the kth fourth feature comprises N×d/M elements, and the processing an element of the kth fifth feature and an element of the kth fourth feature based on an addition operation to obtain the kth sixth feature comprises: performing absolute value processing on an xth column vector of the kth fourth feature to obtain an absolute-value xth column vector of the kth fourth feature (mathematical concepts); performing addition processing on the absolute-value xth column vector and a yth row vector of the kth fifth feature to obtain a qth second intermediate vector, wherein x=1, …, d/M, y=1, …, N, and q=1, …, N×d/M (mathematical concepts); setting a sign of the qth second intermediate vector to be the same as a sign of the xth column vector, to obtain a sign-set qth second intermediate vector (mathematical concepts); and performing addition processing on all elements of the sign-set qth second intermediate vector to obtain an element in a yth row and an xth column of the kth sixth feature (mathematical concepts).
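Claim 5's sign-preserving fusion can likewise be sketched, again as the editor's reading of the claim language with invented names: the fifth feature is treated as an N×N weight matrix and the fourth feature as an N×(d/M) value matrix.

```python
import numpy as np

def sign_preserving_fusion(fifth, fourth):
    # fifth: (N, N) weight matrix; fourth: (N, dm) value feature, dm = d/M.
    N, dm = fourth.shape
    sixth = np.empty((N, dm))
    for x in range(dm):                  # x-th column vector of the fourth feature
        abs_col = np.abs(fourth[:, x])   # absolute-value x-th column vector
        sign_col = np.sign(fourth[:, x])
        for y in range(N):               # y-th row vector of the fifth feature
            # q-th second intermediate vector: element-wise addition
            intermediate = abs_col + fifth[y]
            # set its sign to match the column vector, then add all elements
            sixth[y, x] = (sign_col * intermediate).sum()
    return sixth
```

The multiply-accumulate of ordinary attention-times-values is thus replaced by additions plus a sign correction, consistent with the examiner's characterization of each step as a mathematical calculation.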
Regarding Claim 6, representative of Claim 13, reciting the method according to claim 1, wherein the linear transformation processing is formed by addition operations (mathematical concepts).

Regarding Claim 7, representative of Claims 14 and 20, reciting the method according to claim 6, wherein the performing linear transformation processing based on a kth first feature to obtain a kth second feature, a kth third feature, and a kth fourth feature comprises: obtaining a first weight matrix, a second weight matrix, and a third weight matrix; performing, by using the first weight matrix, the linear transformation processing formed by addition operations, on the kth first feature to obtain the kth second feature; performing, by using the second weight matrix, the linear transformation processing formed by addition operations, on the kth first feature to obtain the kth third feature; and performing, by using the third weight matrix, the linear transformation processing formed by addition operations, on the kth first feature to obtain the kth fourth feature (each a mathematical concept).

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (PRONG 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
- an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
- an additional element applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;
- an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
- an additional element effects a transformation or reduction of a particular article to a different state or thing; and
- an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
- an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
- an additional element adds insignificant extra-solution activity to the judicial exception; and
- an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.

Claims 1-20 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. Claim 1 notably recites obtaining a classification result of the target image based on M sixth features; however, this appears to generally link the result of the mathematical operations to a technological environment/field of use — that being generic image classification.

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO, the claims do not recite additional elements that amount to significantly more than the judicial exception. With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect.
Specifically, examiners should continue to consider whether an additional element or combination of elements:
- adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
- simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.

Claims 1-20 do not recite any additional elements that are not well-understood, routine, or conventional.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: "A person shall be entitled to a patent unless — (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention."

Claims 1-2, 4, 8-9, 11, 15-16, and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dosovitskiy (Alexey Dosovitskiy et al., "An Image is Worth 16X16 Words: Transformers for Image Recognition at Scale," arXiv:2010.11929v2 [cs.CV], 3 Jun 2021).
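For reference, the standard qkv self-attention of Dosovitskiy (Appendix A, equations 5-7) that the rejection maps onto the claims can be sketched as follows. This is a conventional reconstruction; the function and weight-matrix names are the editor's, not the reference's.

```python
import numpy as np

def qkv_self_attention(z, W_q, W_k, W_v):
    # z: (N, D) sequence of patch embeddings; W_q/W_k/W_v: (D, d) projections.
    q, k, v = z @ W_q, z @ W_k, z @ W_v          # eq. 5: [q, k, v] = z U_qkv
    scores = q @ k.T / np.sqrt(k.shape[1])       # eq. 6: pairwise q-k similarity
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # softmax -> attention weights A_ij
    return A @ v                                 # eq. 7: SA(z) = A v
```

In the examiner's mapping, q is the "second feature", k the "third feature", v the "fourth feature", the attention weights A the "fifth feature", and the weighted sum A @ v the "sixth feature".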
Regarding Claim 1, representative of Claims 8 and 15, Dosovitskiy teaches an image classification method, wherein the method is implemented by using a transformer network, and the method comprises:
- obtaining M first features of a target image, wherein M ≥ 1 ([Section 3.1, paragraph 1]: "to handle 2D images we reshape the image x into a sequence of flattened 2D patches x_p … resulting number of patches, which also serves as the effective input sequence length for the Transformer");
- performing linear transformation processing based on a kth first feature to obtain a kth second feature, a kth third feature, and a kth fourth feature, wherein k=1, ..., M ([Appendix, section A]: "Standard qkv self-attention (SA, Vaswani et al. (2017)) is a popular building block." See equation 5. Examiner notes the second feature to be the query, the third feature to be the key, and the fourth feature to be the value);
- calculating a distance between the kth second feature and the kth third feature to obtain a kth fifth feature ([Appendix, section A]: "attention weights Aij are based on the pairwise similarity between two elements of the sequence and their respective query qi and key kj representations." See equation 6. Examiner notes the similarity calculation between queries (second features) and keys (third features) to obtain attention weights (fifth features));
- performing first fusion processing based on the kth fifth feature and the kth fourth feature to obtain a kth sixth feature ([Appendix A], see equation 7; Examiner notes the attention weights (fifth features) are then multiplied with the values (fourth features) for the self-attention calculation); and
- obtaining a classification result of the target image based on M sixth features (see Fig. 1: transformer encoder results are fed into the MLP head, leading to classification (i.e. bird, ball, car)).

Regarding Claim 2, representative of Claims 9 and 16, Dosovitskiy teaches the method according to claim 1.
In addition, Dosovitskiy teaches wherein the calculating a distance between the kth second feature and the kth third feature to obtain a kth fifth feature comprises: calculating the distance between the kth second feature and the kth third feature based on an addition operation to obtain the kth fifth feature ([Appendix, section A]: "attention weights Aij are based on the pairwise similarity between two elements of the sequence and their respective query qi and key kj representations." See equation 6, including qk^T, a dot product, which involves addition between individually multiplied elements. Examiner notes the similarity calculation between queries (second features) and keys (third features) to obtain attention weights (fifth features)).

Regarding Claim 4, representative of Claims 11 and 18, Dosovitskiy teaches the method according to claim 1. In addition, Dosovitskiy teaches wherein the performing first fusion processing based on the kth fifth feature and the kth fourth feature to obtain a kth sixth feature comprises: processing an element of the kth fifth feature and an element of the kth fourth feature based on an addition operation to obtain the kth sixth feature ([Appendix A], see equation 7; Examiner notes the attention weights (fifth features) are then multiplied with the values (fourth features) for the self-attention calculation as a weighted sum).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 6-7, 13-14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Dosovitskiy (Alexey Dosovitskiy et al., "An Image is Worth 16X16 Words: Transformers for Image Recognition at Scale," arXiv:2010.11929v2 [cs.CV], 3 Jun 2021) in view of Vaswani (Ashish Vaswani et al., "Attention Is All You Need," arXiv:1706.03762v5 [cs.CL], 6 Dec 2017, XP055506908).

Regarding Claim 6, representative of Claim 13, Dosovitskiy teaches the method according to claim 1. Dosovitskiy does not explicitly teach the remaining limitations of Claim 6; however, Vaswani teaches wherein the linear transformation processing is formed by addition operations ([section 3.2.2, paragraph 1]: "linearly project the queries, keys and values h times with different, learned linear projections to dk, dk and dv dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function." Examiner notes the linear transformations appear to be matrix multiplications involving addition).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to have modified Dosovitskiy to explicitly include the teachings of Vaswani by substituting the general mention of attaining queries, keys, and values for Vaswani's explicit linear projections resulting in queries, keys, and values. Doing so would provide the predictable result of attaining queries, keys, and values for a transformer network.

Regarding Claim 7, representative of Claims 14 and 20, Dosovitskiy teaches the method according to claim 6.
Dosovitskiy does not explicitly teach the remaining limitations of Claim 7; however, Vaswani teaches wherein the performing linear transformation processing based on a kth first feature to obtain a kth second feature, a kth third feature, and a kth fourth feature comprises:
- obtaining a first weight matrix, a second weight matrix, and a third weight matrix ([Section 3.2.2]: "where the projections are parameter matrices W_i^Q ∈ R^(dmodel×dk), W_i^K ∈ R^(dmodel×dk), W_i^V ∈ R^(dmodel×dv)");
- performing, by using the first weight matrix, the linear transformation processing formed by addition operations, on the kth first feature to obtain the kth second feature ([Section 3.2.2]: "linearly project the queries, keys and values h times with different, learned linear projections to dk, dk and dv dimensions, respectively". Examiner notes linear projection with the W_i^Q matrix results in projected queries (second features));
- performing, by using the second weight matrix, the linear transformation processing formed by addition operations, on the kth first feature to obtain the kth third feature ([Section 3.2.2], as above; Examiner notes linear projection with the W_i^K matrix results in projected keys (third features)); and
- performing, by using the third weight matrix, the linear transformation processing formed by addition operations, on the kth first feature to obtain the kth fourth feature ([Section 3.2.2], as above; Examiner notes linear projection with the W_i^V matrix results in projected values (fourth features)).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JANICE VAZ, whose telephone number is (703) 756-4685. The examiner can normally be reached Monday-Friday, 9:00 am-5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JANICE E. VAZ/
Examiner, Art Unit 2667

/MATTHEW C BELLA/
Supervisory Patent Examiner, Art Unit 2667

Prosecution Timeline

Feb 14, 2024
Application Filed
Mar 18, 2024
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602831
METHOD AND SYSTEM FOR ENHANCING IMAGES USING MACHINE LEARNING
2y 5m to grant • Granted Apr 14, 2026
Patent 12602811
IMAGE PROCESSING SYSTEM
2y 5m to grant • Granted Apr 14, 2026
Patent 12602935
DRIVING ASSISTANCE DEVICE AND DRIVING ASSISTANCE METHOD
2y 5m to grant • Granted Apr 14, 2026
Patent 12591847
SYSTEMS AND METHODS OF TRANSFORMING IMAGE DATA TO PRODUCT STORAGE FACILITY LOCATION INFORMATION
2y 5m to grant • Granted Mar 31, 2026
Patent 12591977
AUTOMATICALLY AUTHENTICATING AND INPUTTING OBJECT INFORMATION
2y 5m to grant • Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+27.5%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 62 resolved cases by this examiner. Grant probability derived from career allow rate.
