Prosecution Insights
Last updated: April 19, 2026
Application No. 17/696,040

Artificial Intelligence (AI) Model Evaluation Method and System, and Device

Final Rejection: §103, §112
Filed: Mar 16, 2022
Examiner: THAI, JASMINE THANH
Art Unit: 2129
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Huawei Cloud Computing Technologies Co. Ltd.
OA Round: 4 (Final)
Grant Probability: 25% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 4y 0m
Grant Probability With Interview: 81%

Examiner Intelligence

Grants only 25% of cases.
Career Allow Rate: 25% (6 granted / 24 resolved; -30.0% vs TC avg)
Interview Lift: +56.3% (strong; allowance rate of resolved cases with an interview vs without)
Typical timeline: 4y 0m average prosecution; 30 applications currently pending
Career history: 54 total applications across all art units

Statute-Specific Performance

§101: 23.6% (-16.4% vs TC avg)
§103: 37.2% (-2.8% vs TC avg)
§102: 14.6% (-25.4% vs TC avg)
§112: 21.8% (-18.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 24 resolved cases.
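The headline rates above are simple ratios over resolved cases. A minimal sketch of how they could be computed; the 6 granted / 24 resolved totals come from the page, but the with/without-interview split below is a hypothetical allocation chosen only to illustrate the reported +56.3% lift, not data from the actual file wrapper:

```python
# Sketch of the dashboard's rate metrics. Only the 6/24 career totals are
# from the page; the interview split is a labeled assumption.

def allow_rate(allowed: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * allowed / resolved

career = allow_rate(6, 24)             # 25.0 -> "Career Allow Rate"

with_interview = allow_rate(5, 8)      # hypothetical: 5 of 8 interviewed cases allowed
without_interview = allow_rate(1, 16)  # hypothetical: 1 of 16 non-interviewed cases allowed
interview_lift = with_interview - without_interview  # 56.25, i.e. about +56.3 points

print(career, interview_lift)
```

The lift is a difference in percentage points, not a ratio, which is why it can exceed the career rate itself.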

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 11/12/2022 have been fully considered but they are not persuasive. Regarding applicant's remarks directed to the rejection of claims under 35 U.S.C. § 101, the applicant argues that the amended claims are directed to a technical solution. The examiner respectfully agrees and withdraws the rejection of claims under 35 U.S.C. § 101. Regarding applicant's remarks directed to the rejection of claims under 35 U.S.C. § 103, the arguments are directed to newly amended limitations that were not previously examined. Therefore, applicant's arguments are rendered moot. The examiner refers to the rejection under 35 U.S.C. § 103 in the current Office action for more details.

Claim Rejections - 35 U.S.C. § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 4-7, 10, 12, 15-18, 20-24, and 26-29 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 and analogous claims 12 and 24 recite the limitation "training the AI model with new data that meet the condition; and training the AI model with the new data."
There is insufficient antecedent basis for this limitation in the claim, and it is unclear whether "the new data" is the same as the "new data that meet the condition." The examiner further notes that if they are the same, the limitation "training the AI model with the new data" appears to be redundant.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4-7, 10, 12, 15-18, 20-21, 23-24, 26-27, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. US20180308202A1 to Appu et al.
(“Appu”) in view of U.S. Pub. No. US20210133518A1 (U.S. Patent No. US11120314B2) to Yao et al. (“Yao”).

In regards to claim 1, Appu teaches A method implemented by a computing device and comprising: obtaining an artificial intelligence (AI) model and an evaluation data set, wherein the evaluation data set comprises evaluation data (Appu, “[0215] A second exemplary type of neural network is the Convolutional Neural Network (CNN) [obtaining an artificial intelligence (AI) model]. A CNN is a specialized feedforward neural network for processing data having a known, grid-like topology, such as image data [an evaluation data set, wherein the evaluation data set comprises evaluation data; wherein the evaluation data is an image]. Accordingly, CNNs are commonly used for compute vision and image recognition applications, but they also may be used for other types of pattern recognition such as speech and language processing. The nodes in the CNN input layer are organized into a set of “filters” (feature detectors inspired by the receptive fields found in the retina), and the output of each set of filters is propagated to nodes in successive layers of the network. The computations for a CNN include applying the convolution mathematical operation to each filter to produce the output of that filter. Convolution is a specialized kind of mathematical operation performed by two functions to produce a third function that is a modified version of one of the two original functions. In convolutional network terminology, the first function to the convolution can be referred to as the input, while the second function can be referred to as the convolution kernel. The output may be referred to as the feature map. For example, the input to a convolution layer can be a multidimensional array of data that defines the various color components of an input image.
The convolution kernel can be a multidimensional array of parameters, where the parameters are adapted by the training process for the neural network.”) Appu teaches and wherein the evaluation data comprise labels indicating a real result corresponding to the evaluation data; (Appu, “[0200] Before a machine learning algorithm can be used to model a particular problem, the algorithm is trained using a training data set. Training a neural network involves selecting a network topology, using a set of training data representing a problem being modeled by the network, and adjusting the weights until the network model performs with a minimal error for all instances of the training data set. For example, during a supervised learning training process for a neural network, the output produced by the network in response to the input representing an instance in a training data set is compared to the “correct” labeled output for that instance [wherein the evaluation data comprise labels indicating a real result corresponding to the evaluation data], an error signal representing the difference between the output and the labeled output is calculated, and the weights associated with the connections are adjusted to minimize that error as the error signal is backward propagated through the layers of the network. The network is considered “trained” when the errors for each of the outputs generated from the instances of the training data set are minimized.”) Appu teaches determining an inference result of the AI model on the evaluation data; comparing the inference result to a label of the evaluation data to obtain a comparison result indicating whether the inference result and the label are different or the same; (Appu, “[0200] Before a machine learning algorithm can be used to model a particular problem, the algorithm is trained using a training data set. 
Training a neural network involves selecting a network topology, using a set of training data representing a problem being modeled by the network, and adjusting the weights until the network model performs with a minimal error for all instances of the training data set. For example, during a supervised learning training process for a neural network, the output produced by the network in response to the input representing an instance in a training data set [determining an inference result of an AI model on the evaluation data; wherein the determined output produced by the network is an inference result of the AI model] is compared to the “correct” labeled output for that instance, an error signal representing the difference between the output and the labeled output is calculated [comparing the inference result to a label of the evaluation data to obtain a comparison result indicating whether the inference result and the label are different or the same; wherein the error signal is the comparison result that indicates whether the output and the label is the same or different], and the weights associated with the connections are adjusted to minimize that error as the error signal is backward propagated through the layers of the network. The network is considered “trained” when the errors for each of the outputs generated from the instances of the training data set are minimized.”) Appu teaches calculating, based on the comparison result, an inference accuracy of the AI model to obtain an evaluation result of the AI model [on data that meet the condition.] (Appu, “[0232] Supervised learning is a learning method in which training is performed as a mediated operation, such as when the training dataset 1502 includes input paired with the desired output for the input, or where the training dataset includes input having known output and the output of the neural network is manually graded. 
The network processes the inputs and compares the resulting outputs against a set of expected or desired outputs. Errors are then propagated back through the system. The training framework 1504 can adjust to adjust the weights that control the untrained neural network 1506. The training framework 1504 can provide tools to monitor how well the untrained neural network 1506 is converging towards a model suitable to generating correct answers based on known input data. The training process occurs repeatedly as the weights of the network are adjusted to refine the output generated by the neural network. The training process can continue until the neural network reaches a statistically desired accuracy [calculating, based on the comparison result, an inference accuracy of the AI model to obtain an evaluation result of the AI model [on data that meet the condition]; wherein the accuracy obtained during training is the inference accuracy] associated with a trained neural net 1508. The trained neural network 1508 can then be deployed to implement any number of machine learning operations.”) Appu teaches and generating an optimization suggestion for the AI model based on the evaluation result. (Appu, “[0159] Embodiments provide for a novel technique for adding the ability to configure the processing hardware, such as that of graphics processor 614, application processor 612, etc., to suit a dataset to improve energy efficiency of the inferencing compute [generating an optimization suggestion for the AI model]. 
For example, inference/prediction data precision may be determined by first detecting and monitoring datasets as facilitated by detection/monitoring logic 701 and simultaneously or subsequently, analyze the precision [based on the evaluation result] associated with such datasets, which, when used and applied, can allow for maintaining energy efficiency while adapting hardware built for superset capabilities.”) Appu teaches wherein the optimization suggestion comprises training the AI model with new data [that meet the condition]; and training the AI model with the new data. (Appu, Figure 15, “[0234] Variations on supervised and unsupervised training may also be employed. Semi-supervised learning is a technique in which in the training dataset 1502 includes a mix of labeled and unlabeled data of the same distribution. Incremental learning is a variant of supervised learning in which input data is continuously used to further train the model. Incremental learning enables the trained neural network 1508 to adapt to the new data 1512 without forgetting the knowledge instilled within the network during initial training [wherein the optimization suggestion comprises training the AI model with new data].”)

However, Appu does not explicitly teach wherein: the evaluation data are images and the data feature is a quantity of bounding boxes, an area ratio of a bounding box to an image, an area variance of the bounding box, a degree of a distance from the bounding box to an image edge, an overlapping degree of the bounding boxes, an aspect ratio of the image, a resolution of the image, a blurriness of the image, or a saturation of the image; the evaluation data are text data and the data feature is a first quantity of words, a second quantity of non-repeated words, a length, a third quantity of stop words, a fourth quantity of punctuations, a fifth quantity of title-style words, a mean length of the words, a term frequency, or an inverse document frequency; or the evaluation
data are audio data and the data feature is a short time average zero crossing rate, a short time energy, an entropy of energy, a spectrum centroid, a spectral spread, a spectral entropy, or a spectral flux; classifying, based on a numeric value of the data feature meeting a condition, the evaluation data to obtain an evaluation data subset [calculating, based on the comparison result, an inference accuracy of the AI model to obtain an evaluation result of the AI model on] data that meet the condition… new data that meet the condition.

Yao teaches obtaining a data feature of a task type of the AI model, wherein: the evaluation data are images and the data feature is a quantity of bounding boxes, an area ratio of a bounding box to an image, an area variance of the bounding box, a degree of a distance from the bounding box to an image edge, an overlapping degree of the bounding boxes, an aspect ratio of the image, a resolution of the image, a blurriness of the image, or a saturation of the image; the evaluation data are text data and the data feature is a first quantity of words, a second quantity of non-repeated words, a length, a third quantity of stop words, a fourth quantity of punctuations, a fifth quantity of title-style words, a mean length of the words, a term frequency, or an inverse document frequency; or the evaluation data are audio data and the data feature is a short time average zero crossing rate, a short time energy, an entropy of energy, a spectrum centroid, a spectral spread, a spectral entropy, or a spectral flux; classifying, based on a numeric value of the data feature meeting a condition, the evaluation data to obtain an evaluation data subset; (Yao, “[0015] The convolutional module 102 is shown receiving a mini-batch 116 of example images for training. In some examples, the mini-batch may include positive and negative example images of objects for training. For example, the mini-batches may contain a few thousand images.
In some examples, the example images received in the mini-batch may be resized into a standard scale. For example, the images may be resized to a shortest dimension of 600 pixels while keeping the aspect ratio [obtaining a data feature of a task type of the AI model, wherein: the evaluation data are images and the data feature is an aspect ratio of the image] of the image constant. In some examples, the mini-batches of example images may be used by the convolutional layer 102 to generate basic multi-scale feature maps [classifying, based on a numeric value of the data feature meeting a condition, i.e., a standard scale for image size, the evaluation data to obtain an evaluation data subset, i.e., a mini-batch of resized images]. In some examples, each of the convolutional layers 108A, 108B, 108C, 108D, and 108E may have a different input size or resolution. For example, the convolutional layer 108A may have an input size of 200×200 pixels, the convolutional layer 108B may have an input size of 100×100 pixels, the convolutional layer 108C may have an input size of 64×64 pixels, etc.”) Yao further teaches back-propagating selected sample candidates (Yao, [0057], “In some examples, a predetermined number of the selected sample candidates can be iteratively grouped for back-propagating and updating a detection network [training the AI model with new data that meet the condition]. In some examples, the selected sample candidates can be used to jointly train a region proposal network and a detection network.”) Appu and Yao are considered to be analogous to the claimed invention because they are in the same field of convolutional neural networks.
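The claim 1 flow as mapped above — classify evaluation data by a data-feature condition, run inference on the subset, compare predictions to labels, and report accuracy — can be sketched as follows. This is a minimal illustration only: the aspect-ratio feature, the toy classifier, and every name in it are assumptions made for the sketch, not the applicant's implementation or anything disclosed by Appu or Yao.

```python
# Illustrative sketch of the claimed evaluation flow: filter evaluation
# data by a feature condition, then score inference accuracy on the subset.

from dataclasses import dataclass

@dataclass
class Sample:
    width: int
    height: int
    label: int       # ground-truth label attached to the evaluation data

def aspect_ratio(s: Sample) -> float:
    return s.width / s.height

def evaluate_subset(model, data, condition):
    """Accuracy of `model` on the samples whose feature meets `condition`."""
    subset = [s for s in data if condition(s)]       # classify by data feature
    if not subset:
        return None
    hits = sum(model(s) == s.label for s in subset)  # compare inference to label
    return hits / len(subset)

# Toy model (an assumption): predicts class 1 for square-ish images.
model = lambda s: 1 if abs(aspect_ratio(s) - 1.0) < 0.1 else 0

data = [Sample(200, 200, 1), Sample(600, 400, 0), Sample(100, 100, 1)]
acc = evaluate_subset(model, data, lambda s: aspect_ratio(s) == 1.0)
print(acc)  # 1.0 on the subset whose aspect ratio is even (width == height)
```

The per-subset accuracy is what the claim calls the "evaluation result" for data that meet the condition; an empty subset is reported as no result rather than zero.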
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Appu to incorporate the teachings of Yao in order to provide standard scaling and resizing to subsequently provide native size matching (Yao, [0015], “Each of the convolutional layers 108A, 108B, 108C, 108D, and 108E may thus generate feature maps with a native size matching the size of each of the layers.”)

In regards to claim 4, Appu and Yao teaches The method of claim 1, Appu teaches further comprising obtaining performance data indicating a performance of hardware performing an inference on the evaluation data using the AI model. (Appu, Figure 7, “[0185] As illustrated, context scheduler 820 performs monitoring of processor utilization, such as monitoring utilization of graphics processor 614, through GPU utilization monitoring block 825 as facilitated by detection/monitoring logic 701 of FIG. 7. As further illustrated, context-0 and context-1 as represented by EU 831A-831D and EU833A-833F, respectively, have separate address spaces, where, in one embodiment, microcontroller context scheduler 820 of graphics processor 614 monitors how much of graphics processor 614 is utilized. If the utilization is regarded as low, context scheduler 820 may dispatch more contexts, thus allowing for solving additional inference issues [obtaining performance data indicating a performance of hardware performing an inference on the evaluation data using the AI model; wherein the performance data, i.e., GPU utilization, is monitored while the AI model is performing inferencing so as to allow for solving inference issues].”)

In regards to claim 5, Appu and Yao teaches The method of claim 1, Appu teaches further comprising obtaining performance data indicating a usage status of an operator in the AI model while performing an inference on the evaluation data using the AI model.
(Appu, “[0181] In the illustrated embodiment, framework 800 includes training data 801, learning block 803, inference data 805, and configurable hardware models 807, where training data 801 is shown as being communicated on to learning block 803 and one or more of configurable hardware models 807, such as communicating configuration information 809 from training data 801 to configurable hardware models 807. Further, in embodiment, receiving inputs from training data 801, learning block 803, and inference data 805, one or more configurable hardware models 807 produce inference/prediction 811, as illustrated. [0182] For example, to increase the number of operations performed per second, those blocks of processing hardware that are needed for addition, multiplication, accumulation, etc., may be reconfigured using as part of configurable hardware models 807 using configuration information 809 from training data 801 [obtaining performance data indicating a usage status of an operator in the AI model; wherein the usage status of an operator ie the blocks in the model (which can be a layer in the CNN) is the number of MAC operations performed per second]. This configuration information 809 may be generated at training time [while performing an inference on the evaluation data using the AI model] based on one or more datasets and communicated over to a hardware configuration controller at application and/or graphics processors 612, 614, at runtime, as facilitated by pre-analyzed training logic 703 of FIG. 7.”) In regards to claim 6, Appu and Yao teaches The method of claim 1, Yao teaches wherein the condition comprises sub-conditions, and wherein the evaluation data have data features that correspond to the sub-conditions. (Yao, “[0015] The convolutional module 102 is shown receiving a mini-batch 116 of example images for training. In some examples, the mini-batch may include positive and negative example images of objects for training. 
For example, the mini-batches may contain a few thousand images. In some examples, the example images received in the mini-batch may be resized into a standard scale. For example, the images may be resized to a shortest dimension of 600 pixels while keeping the aspect ratio of the image constant. In some examples, the mini-batches of example images may be used by the convolutional layer 102 to generate basic multi-scale feature maps. In some examples, each of the convolutional layers 108A, 108B, 108C, 108D, and 108E may have a different input size or resolution. For example, the convolutional layer 108A may have an input size of 200×200 pixels, the convolutional layer 108B may have an input size of 100×100 pixels, the convolutional layer 108C may have an input size of 64×64 pixels, etc [wherein the condition comprises sub-conditions, i.e., different input sizes for specific convolutional layers, and wherein the evaluation data have data features that correspond to the sub-conditions].”)

In regards to claim 7, Appu and Yao teaches The method of claim 6, Yao teaches wherein each of the data features meets one of the sub-conditions. (Yao, [0015], “For example, the convolutional layer 108A may have an input size of 200×200 pixels, the convolutional layer 108B may have an input size of 100×100 pixels, the convolutional layer 108C may have an input size of 64×64 pixels, etc [wherein each of the data features meets one of the sub-conditions].”)

In regards to claim 10, Appu and Yao teaches The method of claim 5, Appu teaches wherein the usage status comprises a use duration of the operator and a use quantity of the operator.
(Appu, “[0181] In the illustrated embodiment, framework 800 includes training data 801, learning block 803, inference data 805, and configurable hardware models 807, where training data 801 is shown as being communicated on to learning block 803 and one or more of configurable hardware models 807, such as communicating configuration information 809 from training data 801 to configurable hardware models 807. Further, in embodiment, receiving inputs from training data 801, learning block 803, and inference data 805, one or more configurable hardware models 807 produce inference/prediction 811, as illustrated. [0182] For example, to increase the number of operations performed per second, those blocks of processing hardware that are needed for addition, multiplication, accumulation, etc., may be reconfigured using as part of configurable hardware models 807 using configuration information 809 from training data 801 [wherein the usage status comprises a use duration of the operator and a use quantity of the operator; wherein the usage status of an operator, i.e., the blocks in the model (which can be a layer in the CNN), is the number of MAC operations (quantity) performed per second (duration)]. This configuration information 809 may be generated at training time based on one or more datasets and communicated over to a hardware configuration controller at application and/or graphics processors 612, 614, at runtime, as facilitated by pre-analyzed training logic 703 of FIG. 7.”)

Claims 12 and 24 are rejected on the same grounds under 35 U.S.C. 103 as claim 1, as they are substantially similar, mutatis mutandis. Claims 15 and 26 are rejected on the same grounds under 35 U.S.C. 103 as claim 4, as they are substantially similar, mutatis mutandis. Claim 16 is rejected on the same grounds under 35 U.S.C. 103 as claim 5, as they are substantially similar, mutatis mutandis. Claim 17 is rejected on the same grounds under 35 U.S.C.
103 as claim 6, as they are substantially similar, mutatis mutandis. Claim 18 is rejected on the same grounds under 35 U.S.C. 103 as claim 7, as they are substantially similar, mutatis mutandis. Claim 20 is rejected on the same grounds under 35 U.S.C. 103 as claim 21, as they are substantially similar, mutatis mutandis.

In regards to claim 21, Appu and Yao teaches The method of claim 1, Appu teaches further comprising: generating an evaluation report comprising the optimization suggestion and performance data associated with the AI model; and sending the evaluation report to a terminal device or a mailbox of a user. (Appu, “[0048] FIG. 1 is a block diagram illustrating a computing system 100 configured to implement one or more aspects of the embodiments described herein. The computing system 100 includes a processing subsystem 101 having one or more processor(s) 102 and a system memory 104 communicating via an interconnection path that may include a memory hub 105. The memory hub 105 may be a separate component within a chipset component or may be integrated within the one or more processor(s) 102. The memory hub 105 couples with an I/O subsystem 111 via a communication link 106. The I/O subsystem 111 includes an I/O hub 107 that can enable the computing system 100 to receive input from one or more input device(s) 108. Additionally, the I/O hub 107 can enable a display controller, which may be included in the one or more processor(s) 102, to provide outputs [generate an evaluation report comprising the optimization suggestion and performance data associated with the AI model, i.e., wherein the outputs of the computing system implementing the AI model can be displayed] to one or more display device(s) 110A [send the evaluation report to a terminal device].
In one embodiment, the one or more display device(s) 110A coupled with the I/O hub 107 can include a local, internal, or embedded display device.”; wherein generating the evaluation report, under the broadest reasonable interpretation (BRI), merely encompasses displaying the optimization suggestion and performance data (outputs of the computing system as aforementioned in the teachings of Appu) to a terminal device)

In regards to claim 23, Appu and Yao teaches The method of claim 1, Yao teaches wherein the data feature is the aspect ratio, and wherein the condition is that the aspect ratio is even. (Yao, [0015], “For example, the convolutional layer 108A may have an input size of 200×200 pixels, the convolutional layer 108B may have an input size of 100×100 pixels, the convolutional layer 108C may have an input size of 64×64 pixels, etc [wherein the data feature is the aspect ratio, and wherein the condition is that the aspect ratio is even, i.e., the 200-pixel width is equal to the 200-pixel height of the image]. In some examples, each of the convolutional layers may have an input size that is a fraction of the standard scale size. For example, the convolution layer 108C may have a size that is ⅛ of the standard resized scale discussed above. Each of the convolutional layers 108A, 108B, 108C, 108D, and 108E may thus generate feature maps with a native size matching the size of each of the layers.”)

Claim 27 is rejected on the same grounds under 35 U.S.C. 103 as claim 10, as they are substantially similar, mutatis mutandis. Claim 29 is rejected on the same grounds under 35 U.S.C. 103 as claim 23, as they are substantially similar, mutatis mutandis.

Claims 22 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Appu in view of Yao in further view of Hao, X. Wang, J. Zhang, J. Liu, X. Du and L.
Liu, "Automatic Detection of Fungi in Microscopic Leucorrhea Images Based on Convolutional Neural Network and Morphological Method," 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC) (“Hao”).

In regards to claim 22, Appu and Yao teaches The method of claim 1, Hao teaches wherein the evaluation data comprise microbial images, wherein the task type is object detection, and wherein the inference result comprises epithelial cells, blastospores, cocci, white blood cells, spores, fungi, or clue cells. (Hao, Section I, “In this paper, we propose an automatic identification [wherein the task type is object detection] of fungi in microscopic leucorrhea images [evaluation data comprise microbial images] based on convolutional neural network and morphological method. The structure is organized as follows: Section 2 used the maximum inter-class variance method to segment original image and obtained possible fungi subimages. Section 3 establishes a fully trained CNN to recognize fungi [wherein the inference result comprises epithelial cells, blastospores, cocci, white blood cells, spores, fungi, or clue cells]. Section 4 presents morphological method to further classify the selected candidate. Section 5 shows the experimental results. Finally, Section 6 concludes the paper and discusses the future work.”) Hao is considered to be analogous to the claimed invention because both are in the same field of convolutional neural networks.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Appu and Yao to incorporate the teachings of Hao in order to apply the methods of Appu and Yao to improve detection accuracy of fungi and provide important evidence for fungal vaginitis (Hao, Abstract, “Leucorrhea routine test is one of the most widely used tests in gynecological examinations, and fungi inspection is vital for gynecological test because fungi is an important evidence for fungal vaginitis. In order to improve detection accuracy, an automatic identification of fungi in microscopic leucorrhea images based on convolutional neural network (CNN) and morphological method is proposed in this paper. First, we use the maximum inter-class variance method to segment original image and obtain possible fungi subimages. Then, a fully trained CNN is applied to recognize fungi. Finally, morphological method, such as template match method and concave point detection method, is used to further classify the selected candidate to improve recognize accuracy. In experiments, the method using CNN and morphological method achieved 93.26% accuracy.”)

Claim 28 is rejected on the same grounds under 35 U.S.C. 103 as claim 22, as they are substantially similar, mutatis mutandis.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Pub. No. US20120269436A1 (Xerox) teaches learning structured prediction models for interactive image labeling.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASMINE THAI, whose telephone number is (703) 756-5904. The examiner can normally be reached M-F, 8-4. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.T.T./
Examiner, Art Unit 2129

/MICHAEL J HUNTLEY/
Supervisory Patent Examiner, Art Unit 2129

Prosecution Timeline

Mar 16, 2022
Application Filed
Dec 02, 2024
Non-Final Rejection — §103, §112
Mar 17, 2025
Response Filed
Mar 24, 2025
Final Rejection — §103, §112
May 21, 2025
Examiner Interview Summary
May 21, 2025
Applicant Interview (Telephonic)
Jun 30, 2025
Response after Non-Final Action
Jul 28, 2025
Request for Continued Examination
Aug 01, 2025
Response after Non-Final Action
Aug 14, 2025
Non-Final Rejection — §103, §112
Nov 12, 2025
Response Filed
Jan 11, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561603
SYSTEM FOR TIME BASED MONITORING AND IMPROVED INTEGRITY OF MACHINE LEARNING MODEL INPUT DATA
2y 5m to grant Granted Feb 24, 2026
Patent 12555000
GENERATION OF CONVERSATIONAL TASK COMPLETION STRUCTURE
2y 5m to grant Granted Feb 17, 2026
Patent 12462154
METHOD AND SYSTEM FOR ASPECT-LEVEL SENTIMENT CLASSIFICATION BY MERGING GRAPHS
2y 5m to grant Granted Nov 04, 2025
Patent 12395590
REDUCTION AND GEO-SPATIAL DISTRIBUTION OF TRAINING DATA FOR GEOLOCATION PREDICTION USING MACHINE LEARNING
2y 5m to grant Granted Aug 19, 2025
Patent 12380361
Federated Machine Learning Management
2y 5m to grant Granted Aug 05, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
25%
Grant Probability
81%
With Interview (+56.3%)
4y 0m
Median Time to Grant
High
PTA Risk
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
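The derivation described above (career allow rate over resolved cases, interview lift as the difference in allow rate between interviewed and non-interviewed cases) can be reproduced with a short sketch. The `Case` record and the sample docket below are hypothetical stand-ins, not this examiner's actual case data; only the 6-grants-out-of-24 ratio mirrors the page's headline figure.

```python
from dataclasses import dataclass

@dataclass
class Case:
    granted: bool     # did the resolved case issue as a patent?
    interview: bool   # was an examiner interview held?

def allow_rate(cases: list[Case]) -> float:
    """Fraction of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases: list[Case]) -> float:
    """Allow-rate gap between interviewed and non-interviewed cases."""
    with_iv = [c for c in cases if c.interview]
    without = [c for c in cases if not c.interview]
    return allow_rate(with_iv) - allow_rate(without)

# Hypothetical resolved docket: 24 cases, 6 grants (25% career allow rate).
docket = ([Case(True, True)] * 5 + [Case(True, False)] * 1 +
          [Case(False, True)] * 2 + [Case(False, False)] * 16)

print(f"career allow rate: {allow_rate(docket):.1%}")
print(f"interview lift:    {interview_lift(docket):+.1%}")
```

With a sample this small, the lift estimate is noisy, which is why the page hedges it as derived from only 24 resolved cases.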
