Prosecution Insights
Last updated: April 19, 2026
Application No. 18/579,866

METHOD AND SYSTEM FOR CLASSIFYING IMAGES, STORAGE MEDIUM, AND TERMINAL

Status: Non-Final OA (§103)
Filed: Jan 17, 2024
Examiner: HAIDER, SYED
Art Unit: 2633
Tech Center: 2600 (Communications)
Assignee: Shanghai Midu Science And Technology Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 83% (709 granted / 850 resolved), +21.4% vs TC avg (above average)
Interview Lift: +4.4% among resolved cases with interview (minimal)
Typical Timeline: 2y 6m average prosecution; 35 applications currently pending
Career History: 885 total applications across all art units

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 22.9% (-17.1% vs TC avg)
§112: 9.2% (-30.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 850 resolved cases.

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a construction module”, “an object detection module”, “an image recognition module”, and “a classification module” in claim 8.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-4, 6, and 8-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kunii (US PGPUB 2004/0052413 A1) and further in view of Romley (US PGPUB 2016/0225053 A1).

As per claim 1, Kunii discloses a method for classifying images (Kunii, Figs. 1-22), comprising: constructing an object vector retrieval library (Kunii, Fig. 1:24, and Fig. 3:4:41, a learning window feature vector memory unit 24), wherein the object vector retrieval library stores object feature vectors and object (names) types of stored objects (Kunii, paragraphs 8, 36 and 41, discloses classifying by the pair of object position and type, and storing as learning windows, feature extraction matrix calculating means 42 for calculating a matrix for feature extraction from the learning windows stored in the learning window database 41); performing object detection on a to-be-classified image and obtaining an object image of a detected object contained in the to-be-classified image (Kunii, Fig. 3:5, and paragraphs 8 and 41, discloses learning means 4 for preparing models of objects to be recognized, feature vector extracting means 5 for extracting a feature vector by using a matrix for feature extraction calculated in the learning means 4 in each input window divided in the image divider 3, input divided image discriminating means 6 for calculating the similarity measure by comparing the feature vector extracted in the feature vector extracting means 5 and the feature vector of a learning window feature vector database 43); performing image recognition on the object image of the detected object to obtain an object feature vector of the object image (Kunii, paragraph 41, discloses The judging means 7 includes an input image judging unit 71 for judging the input divided image and class of the highest value of the similarity measure entered from the input divided image discriminating means 6); and searching the object vector retrieval library for an object (name) type of a first stored object of the stored objects whose object feature vector matches the object feature vector of the object image of the detected object (Kunii, paragraphs 36 and 41, discloses The judging means 7 includes an input image judging unit 71 for judging the input divided image and class of the highest value of the similarity measure entered from the input divided image discriminating means 6, and an object position and type detector 72 for judging the position and type of the object of the class selected by the input image judging unit 71 to be the position and type of the object of the input image), and using the object (name) type of the first stored object as a category of the to-be-classified image (Kunii, paragraphs 36 and 41, discloses Thus, using the learning window of the learning window memory unit 23, the matching processor 25 repeats the procedure of similarity measure calculation and output. The object type estimator 26 estimates, when the similarity measure is maximum, the target object for recognition included in the input image to be located at which position and to belong to which type).

Although Kunii discloses storing the object feature vector and object type as explained above, Kunii does not explicitly disclose storing and searching names of stored objects. Romley discloses storing and searching names of stored objects (Romley, Fig. 2:201, and paragraphs 30 and 35, discloses The object store can also contain a set of feature vectors, each extracted from and associated with an image of a different model of a watch. Additional information relating to each object can be included within the object store 201, for instance a name of the object, purchase information for the object, a web link associated with the object, or any other suitable descriptive information such as a customs classification for the object). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kunii's teachings by storing objects in the database, as taught by Romley. The motivation would be to improve the ability of image classifiers to operate on the input image (paragraph 49), as taught by Romley.

As per claim 2, Kunii in view of Romley further discloses the method for classifying images according to claim 1, wherein constructing the object vector retrieval library comprises: acquiring object images of the stored objects (Kunii, paragraphs 8 and 41); performing image recognition on the object images of the stored objects to obtain respective object feature vectors of the object images (Kunii, paragraphs 41 and 71); obtaining respective object names of the object images of the stored objects (Romley, paragraph 30); and storing the object names and the object feature vectors of the object images of the stored objects in a one-to-one correspondence manner (Romley, paragraph 30).
As per claim 3, Kunii in view of Romley further discloses the method for classifying images according to claim 1, further comprising updating the object vector retrieval library in response to a new object image (Kunii, paragraph 38, discloses the object in the input image can be recognized even in the case of non-registered object, and the type and position of the object can be estimated); wherein updating the object vector retrieval library comprises: obtaining the new object image for image recognition (Kunii, paragraph 38); obtaining an object feature vector and an object name of the new object image (Romley, paragraph 30); and adding the object name and the object feature vector of the new object image to the object vector retrieval library (Romley, paragraph 78).

As per claim 4, Kunii in view of Romley further discloses the method for classifying images according to claim 1, wherein performing object detection on the to-be-classified image and obtaining the object image of the detected object contained in the to-be-classified image comprises: performing object detection on the to-be-classified image based on an object detection model to obtain an object position of the detected object contained in the to-be-classified image (Kunii, paragraphs 33 and 41); and obtaining the object image of the object by cropping the to-be-classified image based on the object position (Kunii, paragraph 41).

As per claim 6, Kunii in view of Romley further discloses the method for classifying images according to claim 1, wherein searching the object vector retrieval library for the object name of the first stored object whose object feature vector matches the object feature vector of the object image of the detected object comprises: calculating a similarity between the object feature vector of the object image of the detected object and each of the object feature vectors of the stored objects in the object vector retrieval library (Kunii, paragraphs 36 and 41, discloses The similarity measure between the input window feature vector and the learning vector is calculated, and issued to the object type estimator 26), wherein among the object feature vectors of the stored objects, the first stored object has the greatest similarity and is determined to match the object feature vector of the object image of the detected object (Kunii, paragraph 36, discloses The object type estimator 26 estimates, when the similarity measure is maximum, the target object for recognition included in the input image to be located at which position and to belong to which type); and obtaining the object name of the first stored object from the object vector retrieval library (Romley, paragraph 30).

As per claim 8, Kunii discloses a system for classifying images (Kunii, Figs. 1-21), comprising: a construction module (Kunii, Fig. 3:4:41:43), an object detection module (Kunii, Fig. 3:4, and paragraph 41, discloses learning means 4 for preparing models of objects to be recognized), an image recognition module (Kunii, Fig. 3:5:6, and paragraph 37), and a classification module (Kunii, Fig. 1:28, and Fig. 3:7, and paragraphs 8 and 33, discloses The classifier classifies the window by the pair of object type and position, and multiple classified windows are stored as a set of learning). For the rest of the claim limitations, please see the analysis of claim 1.
As per claim 9, Kunii discloses a non-transitory storage medium having a computer program stored thereon (Kunii, Fig. 4:219:205, and paragraph 43), wherein the program, when executed by a processor (Kunii, Fig. 4:206, and paragraph 43), implements the method for classifying images according to claim 1 (please see the analysis of claim 1).

As per claim 10, Kunii discloses a terminal for classifying images (Kunii, Fig. 4:219), comprising: a processor (Kunii, Fig. 4:206) and a memory (Kunii, Fig. 4:205); wherein the memory is configured to store a computer program (Kunii, Fig. 4:205, and paragraph 43); and wherein the processor is for executing a computer program stored in the memory to cause the terminal for classifying images to perform the method for classifying images (Kunii, Fig. 4:219, and paragraph 43) according to claim 1 (please see the analysis of claim 1).

Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kunii (US PGPUB 2004/0052413 A1) in view of Romley (US PGPUB 2016/0225053 A1) and further in view of Yang (CN 202210127102 A; an English translation of the CN document is attached and used for citations).

As per claim 5, Kunii in view of Romley discloses the method for classifying images according to claim 1, wherein performing image recognition on the object image of the detected object to obtain the object feature vector of the object image comprises the limitations below. Kunii in view of Romley does not explicitly disclose performing image recognition on the object image of the detected object based on a PP-LCNet image recognition model, and outputting the object feature vector of the object image by the PP-LCNet image recognition model.

Yang discloses performing image recognition on the object image of the detected object based on a PP-LCNet image recognition model (Yang, page 4, discloses the feature extraction module uses PPLCNet); and outputting the object feature vector of the object image by the PP-LCNet image recognition model (Yang, page 4, discloses the feature extraction using PPLCNet and outputting the extracted feature vectors). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the Kunii in view of Romley teachings by implementing a machine learning model in the system, as taught by Yang. The motivation would be to provide a system with an improved tracking algorithm (page 3), as taught by Yang.

Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kunii (US PGPUB 2004/0052413 A1) in view of Romley (US PGPUB 2016/0225053 A1) and further in view of Zou (US PGPUB 2020/0065563 A1).

As per claim 7, Kunii in view of Romley discloses the method for classifying images according to claim 6. Kunii in view of Romley does not explicitly disclose that the similarity is a cosine similarity. Zou discloses that the similarity is a cosine similarity (Zou, paragraph 21). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the Kunii in view of Romley teachings by implementing a vector matching technique in the system, as taught by Zou. The motivation would be to provide an improved and accurate object recognition technique with reduced search space (paragraph 11), as taught by Zou.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED Z HAIDER whose telephone number is (571) 270-5169. The examiner can normally be reached Monday-Friday, 9-5:30 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SAM K Ahn, can be reached at 571-272-3044. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SYED HAIDER/
Primary Examiner, Art Unit 2633
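The classification scheme the rejection maps onto claims 1, 6, and 7 (a retrieval library pairing object names with feature vectors, queried by greatest cosine similarity) can be sketched in a few lines. This is a minimal illustration only; the library entries, vectors, and function names below are hypothetical and come from neither the application nor the cited references:

```python
import math

# Hypothetical object vector retrieval library: object name -> feature vector
# (the claims pair each stored feature vector with an object name/type).
library = {
    "watch": [0.9, 0.1, 0.3],
    "shoe": [0.2, 0.8, 0.5],
}

def cosine_similarity(a, b):
    """Cosine similarity, the specific metric recited in claim 7."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def classify(query_vector):
    """Per claims 1 and 6: compare the query vector against every stored
    vector and return the name of the stored object with the greatest
    similarity, which becomes the category of the to-be-classified image."""
    return max(library, key=lambda name: cosine_similarity(library[name], query_vector))

print(classify([0.85, 0.15, 0.25]))  # prints "watch"
```

In this sketch the feature vector would come from an upstream recognition model (claim 5 recites PP-LCNet for that step); the library lookup itself is just an argmax over pairwise similarities.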

Prosecution Timeline

Jan 17, 2024: Application Filed
Nov 28, 2025: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602430: Method for Constructing Positioning DB Using Clustering of Local Features and Apparatus for Constructing Positioning DB (2y 5m to grant; granted Apr 14, 2026)
Patent 12604296: NETWORKED ULTRAWIDEBAND POSITIONING (2y 5m to grant; granted Apr 14, 2026)
Patent 12597163: Systems and Methods to Optimize Imaging Settings for a Machine Vision Job (2y 5m to grant; granted Apr 07, 2026)
Patent 12586394: METHOD, APPARATUS AND SYSTEM FOR AUTO-LABELING (2y 5m to grant; granted Mar 24, 2026)
Patent 12579676: EGO MOTION-BASED ONLINE CALIBRATION BETWEEN COORDINATE SYSTEMS (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83% (88% with interview, a +4.4% lift)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 850 resolved cases by this examiner. Grant probability derived from career allow rate.
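As a rough check of how these headline figures appear to be derived (an assumption; the tool's exact model is not stated), the grant probability follows directly from the examiner's career ratio reported above, and the with-interview figure from adding the reported lift:

```python
# Career totals reported above for this examiner.
granted, resolved = 709, 850
interview_lift = 0.044  # the +4.4% interview lift reported above

allow_rate = granted / resolved
print(f"{allow_rate:.1%}")                   # ~83.4%, matching the 83% grant probability
print(f"{allow_rate + interview_lift:.1%}")  # ~87.8%, matching the 88% with-interview figure
```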
