Prosecution Insights
Last updated: April 19, 2026
Application No. 18/371,147

HIGH-DIMENSIONAL COMPUTING BASED TRAINING AND INFERENCING

Non-Final OA: §101, §103
Filed: Sep 21, 2023
Examiner: XIA, XUYANG
Art Unit: 2143
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 71% (Favorable)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% (above average; 327 granted / 460 resolved; +16.1% vs TC avg)
Interview Lift: +53.8% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 4m average prosecution; 44 applications currently pending
Career History: 504 total applications across all art units
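As a quick arithmetic check, the headline figures above are internally consistent. The implied Tech Center average below is a derived number, not one reported in this section:

```python
# Consistency check of the examiner statistics quoted above.
granted, resolved = 327, 460

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 71.1%, reported as 71%

# The card reports +16.1% vs the Tech Center average, which implies:
implied_tc_avg = allow_rate * 100 - 16.1
print(f"Implied TC average: {implied_tc_avg:.1f}%")
```

The interview-lift figure (+53.8%) cannot be re-derived here because the underlying with/without-interview allowance rates are not stated.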

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 15.0% (-25.0% vs TC avg)
§112: 3.7% (-36.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 460 resolved cases.

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity). If it is determined in Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2), where it is determined whether or not the claims integrate the judicial exception into a practical application. If it is determined at Step 2A, Prong 2 that the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determining whether the claim is a patent-eligible application of the exception (Step 2B).
If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application, or else amounts to significantly more than the abstract idea itself. Applicant is advised to consult the 2019 PEG for further details of the analysis.

Step 1

According to the first part of the analysis, in the instant case, claims 1-9, 10-16, and 17-20 are directed to a method, a computer program product, and a system of a neural network, respectively. Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

Step 2A, Prong 1

Following the determination of whether or not the claims fall within one of the four categories (Step 1), it must be determined if the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity) (Step 2A, Prong 1). In this case, the claims are determined to recite a judicial exception, as explained below.
Regarding Claims 1, 10, and 17: these claims recite establishing a neural network, wherein the neural network comprises a plurality of layers; receiving a plurality of input data sequences into a layer of the neural network, the plurality of input data sequences comprising a first input data sequence and a second input data sequence; superposing the first input data sequence and the second input data sequence, thereby creating a superposed embedding; transforming the superposed embedding by applying a function to the superposed embedding, thereby creating a transformed superposed embedding; and inferring a first output data element corresponding to the first input data sequence and a second output data element corresponding to the second input data sequence, wherein the inferring the first output data element and the second output data element comprises applying a shared unbinding operation on the transformed superposed embedding.

The claims recite a mental process. As set forth in MPEP 2106.04(a)(2)(III)(C), "Claims can recite a mental process even if they are claimed as being performed on a computer." These steps are recited at a high level such that they could be performed mentally, and they are also disclosed as a human user performing these functions, simply using a computer as a tool; see spec. [0046]-[0062], Fig. 1, etc. Thus, the claims recite abstract ideas.

Step 2A, Prong 2

Following the determination that the claims recite a judicial exception, it must be determined if the claims recite additional elements that integrate the exception into a practical application of the exception (Step 2A, Prong 2). In this case, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include additional elements that integrate the exception into a practical application of the exception, as explained below.
In Prong Two, a claim is evaluated as a whole to determine whether the recited judicial exception is integrated into a practical application of that exception. A claim is not "directed to" a judicial exception, and thus is patent eligible, if the claim as a whole integrates the recited judicial exception into a practical application of that exception. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. MPEP 2106.04(d). Here, the claims recite an abstract idea, and the claims as a whole do not integrate the recited judicial exception into a practical application of the exception.

Regarding Claims 1, 10, and 17: these claims recite using one or more neural networks as a tool to perform an abstract idea, which is not indicative of integration into a practical application. MPEP 2106.05(f). This limitation is understood to be generic computer equipment and mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; see MPEP 2106.05(f).

Step 2B

Based on the determination in Step 2A of the analysis that the claims are directed to a judicial exception, it must be determined if the claims contain any element or combination of elements sufficient to ensure that the claim amounts to significantly more than the judicial exception (Step 2B).
In this case, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the same reasons given above in the Step 2A, Prong 2 analysis. Furthermore, each additional element identified above as being insignificant extra-solution activity is also well-known, routine, and conventional, as described below.

Claims 1, 10, and 17: The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than generic computing components and a field of use/technological environment, which do not amount to significantly more than the abstract idea. The underlying concept merely receives information, analyzes it, and stores the results of the analysis; this concept is not meaningfully different from concepts found by the courts to be abstract (see Electric Power Group, collecting information, analyzing it, and displaying certain results of the collection and analysis; see Cybersource, obtaining and comparing intangible data; see Digitech, organizing information through mathematical correlations; see Grams, diagnosing an abnormal condition by performing clinical tests and thinking about the results; see Cyberfone, using categories to organize, store, and transmit information; see Smartgene, comparing new and stored information and using rules to identify options). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as a combination, do not amount to significantly more than the abstract idea.
For example, claim 1 recites "establishing…", "receiving…", "superposing…", "transforming…", "inferring…", etc. These elements are recited at a high level of generality and are well-understood, routine, and conventional activities in the computer art. Generic computers performing generic computer functions, without an inventive concept, do not amount to significantly more than the abstract idea. Looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims do not amount to significantly more than the abstract idea itself.

Step 2A Prong 2 / Step 2B: Dependent Claims

Regarding claims 2, 13, and 18: these claims merely recite additional elements that define applying a function to the data sequences, which perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.

Regarding claims 3-4, 14-15, and 19: these claims merely recite additional elements that define the data sequences, which perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.

Regarding claims 5, 16, and 20: these claims merely recite additional elements that define generating keys for the data sequences, which perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding claims 6, 16, and 20: these claims merely recite additional elements that define inferring the output elements by applying the keys, which perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.

Regarding claims 7-8: these claims merely recite additional elements that define the neural network, which perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.

Regarding claim 9: this claim merely recites an additional element that defines that the method is repeated for each layer of the neural network, which performs generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, this claim also does not amount to significantly more than the abstract idea itself. This claim is not patent eligible.

Regarding claim 11: this claim merely recites an additional element that defines that the instructions are stored at the storage device, which performs generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, this claim also does not amount to significantly more than the abstract idea itself. This claim is not patent eligible.
Regarding claim 12: this claim merely recites additional elements that define that the instructions are downloaded to a remote data processing system and the metered use of the program instructions, which perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, this claim also does not amount to significantly more than the abstract idea itself. This claim is not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 7-8, 10, 13, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (Li), "MIA-Net: Multi-information aggregation network combining transformers and convolutional feature learning for polyp segmentation," Knowledge-Based Systems, Volume 247, 22 April 2022, 108824, ISSN 0950-7051, https://doi.org/10.1016/j.knosys.2022.108824, in view of Wisotzky et al. (Wisotzky), US 2021/0137375.
In regard to claim 1, Li discloses a computer-implemented method comprising: (abstract, method) establishing a neural network, wherein the neural network comprises a plurality of layers (Section 3, Materials and methods, 3.1 Network structure, pages 3-4: constructing a neural network, and the network has layers); receiving a plurality of input data sequences into a layer of the neural network, the plurality of input data sequences comprising a first input data sequence and a second input data sequence (Fig. 2; Section 3, Materials and methods, 3.1 Network structure, pages 3-4: receiving inputs into a layer of the network, and the inputs have multiple data sequences); combining the first input data sequence and the second input data sequence, thereby creating a combined embedding (Fig. 2; Section 3, Materials and methods, 3.1-3.4, pages 3-5: aggregating the first and second input features to generate the first and second embeddings; these low-level and high-level features are embeddings of the first and second inputs at the same time); transforming the superposed embedding by applying a function to the superposed embedding, thereby creating a transformed superposed embedding (Fig. 2; Section 3, Materials and methods, 3.1 Network structure, pages 3-4: applying a transformer function to transform the embedding to generate a transformed combined embedding corresponding to the input patch. Note: please further define how the data sequences are transformed, etc., to help move the prosecution forward; call to discuss if necessary); and inferring a first output data element corresponding to the first input data sequence and a second output data element corresponding to the second input data sequence, wherein the inferring the first output data element and the second output data element comprises applying a shared unbinding operation on the transformed superposed embedding (Figs. 2, 5; Section 3, Materials and methods, 3.1-3.4, pages 3-6: the inferring is equated to the process of having an interpretation of the whole predicted output based on the first and second transformed combined embeddings, which are equated to the part of P1 representing the corresponding input patch. Inputs are patches of one bigger image; the P1 result contains a part corresponding to each patch, obtained by applying a 1x1 convolutional network to decouple and concatenate. Note: please use functional language (such as shared unbinding operation, etc.) to help move the prosecution forward; call to discuss if necessary.)

But Li fails to explicitly disclose "superposing the first input data sequence and the second input data sequence, thereby creating a superposed embedding."

Wisotzky discloses superposing the first input data sequence and the second input data sequence, thereby creating a superposed embedding ([0006]-[0013], [0036]-[0058]: superimposing the first and second image sequences and generating the superimposed embedding with a highlighted region characterized by features. Note: please further define how the data sequences are superposed, etc., to help move the prosecution forward; call to discuss if necessary.)

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Wisotzky's image feature learning into Li's invention, as they are related to the same field of endeavor of feature learning. The motivation to combine these arts, as proposed above, is at least that Wisotzky's image feature learning by superposing the image sequences would help to provide more image learning in Li's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing more image learning by superposing the image sequences would help to improve the accuracy of the feature identification.
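For orientation on the hyperdimensional-computing vocabulary at issue in claim 1 (superposed embedding, shared unbinding), the following is a minimal sketch in the style of classic HD/VSA key binding. It is an illustration only: the dimensionality, the bipolar key vectors, and the sign() transform are assumptions for the sketch, not the application's disclosed implementation.

```python
import numpy as np

# Minimal HD-computing sketch of the claim-1 pipeline: superpose two
# key-bound sequences, apply a pointwise function, then recover each
# sequence with a shared (key-driven) unbinding step.

rng = np.random.default_rng(0)
D = 4096  # hypervector dimensionality (illustrative)

# Two input "data sequences", reduced here to single bipolar hypervectors.
x1 = rng.choice([-1.0, 1.0], size=D)
x2 = rng.choice([-1.0, 1.0], size=D)

# Per-sequence binding keys (cf. the "vector identification keys" of claim 5).
k1 = rng.choice([-1.0, 1.0], size=D)
k2 = rng.choice([-1.0, 1.0], size=D)

# "superposing ... thereby creating a superposed embedding"
superposed = k1 * x1 + k2 * x2

# "transforming the superposed embedding by applying a function"
# (a sign nonlinearity is used as a stand-in here)
transformed = np.sign(superposed)

# "shared unbinding operation": the same element-wise key multiply is
# applied for every sequence; cross terms average out as noise.
y1 = k1 * transformed
y2 = k2 * transformed

def sim(a, b):
    """Normalized dot product; near 0 for unrelated hypervectors."""
    return float(a @ b) / D

print(sim(y1, x1), sim(y1, x2))  # own input: well above chance; other: near zero
```

The point of the sketch is that a single superposed vector can carry several sequences at once, and each is recoverable by its own key, which is the structural distinction the claim draws against plain feature concatenation.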
In regard to claim 2, Li and Wisotzky disclose the computer-implemented method of claim 1. But Li fails to explicitly disclose "further comprising applying a linearization function to the first input data sequence and the second input data sequence." Wisotzky discloses applying a linearization function to the first input data sequence and the second input data sequence ([0047]-[0052]: applying a linear scaling to the first and second pixel datasets). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Wisotzky's image feature learning into Li's invention, as they are related to the same field of endeavor of feature learning. The motivation to combine these arts, as proposed above, is at least that Wisotzky's image feature learning using a linear function would help to provide more image learning in Li's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing more image learning by using a linear function would help to improve the accuracy of the feature identification.

In regard to claim 7, Li and Wisotzky disclose the computer-implemented method of claim 1. Li discloses wherein the neural network comprises a linear-attention transformer network (Fig. 2; Section 3, Materials and methods, 3.1 Network structure, pages 3-5: the network has an attention transformer network with ReLU).

In regard to claim 8, Li and Wisotzky disclose the computer-implemented method of claim 1. Li discloses wherein the neural network comprises a performer network (Fig. 2; Section 4.2 Evaluation metrics, pages 6-7: the network can evaluate the performance based on the evaluation metrics).

In regard to claim 10, claim 10 is a computer program product claim corresponding to method claim 1 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 1.

In regard to claim 13, claim 13 is a computer program product claim corresponding to method claim 2 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 2.

In regard to claims 17-18, claims 17-18 are system claims corresponding to claims 1-2 above and, therefore, are rejected for the same reasons set forth in the rejections of claims 1-2.

Claims 3-6, 9, 11, 14-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (Li), "MIA-Net: Multi-information aggregation network combining transformers and convolutional feature learning for polyp segmentation," Knowledge-Based Systems, Volume 247, 22 April 2022, 108824, ISSN 0950-7051, https://doi.org/10.1016/j.knosys.2022.108824, and Wisotzky et al. (Wisotzky), US 2021/0137375, as applied to claim 1, further in view of Hawthorne et al. (Hawthorne), US 2023/0244907.

In regard to claim 3, Li and Wisotzky disclose the computer-implemented method of claim 1. But Li and Wisotzky fail to explicitly disclose "wherein the first input data sequence comprises a first query vector embedding and the second data sequence comprises a second query vector embedding." Hawthorne discloses wherein the first input data sequence comprises a first query vector embedding and the second data sequence comprises a second query vector embedding ([0072]-[0073]: generating a query embedding for each input data sequence). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Hawthorne's generating sequences of data elements using cross-attention operations into Wisotzky and Li's invention, as they are related to the same field of endeavor of feature learning. The motivation to combine these arts, as proposed above, is at least that Hawthorne's generating sequences of data elements using cross-attention operations would help to provide more data learning in Wisotzky and Li's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing more data learning by generating query embeddings for data sequences would help to improve the accuracy of the feature identification.

In regard to claim 4, Li and Wisotzky disclose the computer-implemented method of claim 1. But Li and Wisotzky fail to explicitly disclose "wherein the first input data sequence comprises a first key-value vector embedding and the data sequence comprises a second key-value vector embedding." Hawthorne discloses wherein the first input data sequence comprises a first key-value vector embedding and the data sequence comprises a second key-value vector embedding ([0072]-[0073]: there are key-value embeddings for each input data sequence). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Hawthorne's generating sequences of data elements using cross-attention operations into Wisotzky and Li's invention, as they are related to the same field of endeavor of feature learning.
The motivation to combine these arts, as proposed above, is at least that Hawthorne's generating sequences of data elements using cross-attention operations would help to provide more data learning in Wisotzky and Li's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing more data learning by generating query embeddings for data sequences would help to improve the accuracy of the feature identification.

In regard to claim 5, Li and Wisotzky disclose the computer-implemented method of claim 1. But Li and Wisotzky fail to explicitly disclose "further comprising generating a first vector identification key corresponding to the first input data sequence and generating a second vector identification key corresponding to the second input data sequence." Hawthorne discloses generating a first vector identification key corresponding to the first input data sequence and generating a second vector identification key corresponding to the second input data sequence ([0072]-[0073]: generating keys for each input data sequence). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Hawthorne's generating sequences of data elements using cross-attention operations into Wisotzky and Li's invention, as they are related to the same field of endeavor of feature learning. The motivation to combine these arts, as proposed above, is at least that Hawthorne's generating sequences of data elements using cross-attention operations would help to provide more data learning in Wisotzky and Li's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing more data learning by generating query embeddings for data sequences would help to improve the accuracy of the feature identification.

In regard to claim 6, Li and Wisotzky disclose the computer-implemented method of claim 5. But Li and Wisotzky fail to explicitly disclose "wherein the inferring the first output data element and the second output data element comprises applying the first vector identification key to infer the first output data element and applying the second vector identification key to infer the second output data element." Hawthorne discloses this limitation ([0072]-[0082]: inferring the respective output elements by applying the respective keys using QKV attention). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Hawthorne's generating sequences of data elements using cross-attention operations into Wisotzky and Li's invention, as they are related to the same field of endeavor of feature learning. The motivation to combine these arts, as proposed above, is at least that Hawthorne's inferring of the data sequence using a QKV attention mechanism would help to provide more data learning mechanisms in Wisotzky and Li's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing more data learning by inferring the data sequence using a QKV attention mechanism would help to improve the accuracy of the feature identification.
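The examiner's QKV-attention reading of claims 5-6 (a per-sequence key used to infer that sequence's output) can be illustrated with a toy retrieval step. This is a hedged sketch only; the dimensions, key values, and payloads are invented for the illustration and do not come from Hawthorne or the application.

```python
import numpy as np

# Toy QKV-attention retrieval: each input sequence has its own
# identification key, and an output is inferred by applying that key
# against the stored key-value pairs (cf. claims 5-6 as mapped above).

d = 4  # embedding width (illustrative)

# One key and one distinct value payload per input data sequence.
keys = np.array([[3.0, 0.0, 0.0, 0.0],
                 [0.0, 3.0, 0.0, 0.0]])
values = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

def attend(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    scores = keys @ query / np.sqrt(d)
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ values

# Applying a sequence's own key as the query retrieves (mostly) the value
# associated with that sequence, i.e. key-driven inference of each output.
out1 = attend(keys[0], keys, values)
out2 = attend(keys[1], keys, values)
print(out1.round(2), out2.round(2))
```

Whether this attention-style retrieval is in fact the same operation as the claimed "shared unbinding" is the substantive dispute the note in the claim-1 mapping invites the applicant to address.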
In regard to claim 9, Li and Wisotzky disclose the computer-implemented method of claim 1. But Li and Wisotzky fail to explicitly disclose "wherein the method is repeated for each layer of the neural network." Hawthorne discloses wherein the method is repeated for each layer of the neural network ([0004], [0007], [0070]-[0086]: each layer can apply the method to a received input to generate an output). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Hawthorne's generating sequences of data elements using cross-attention operations into Wisotzky and Li's invention, as they are related to the same field of endeavor of feature learning. The motivation to combine these arts, as proposed above, is at least that Hawthorne's repeating of the method in each layer of the network would help to provide more data learning mechanisms in Wisotzky and Li's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing more data learning by repeating the method in each layer of the network would help to improve the accuracy of the feature identification.

In regard to claim 11, Li and Wisotzky disclose the computer program product of claim 10. But Li and Wisotzky fail to explicitly disclose "wherein the stored program instructions are stored in a computer readable storage device in a data processing system, and wherein the stored program instructions are transferred over a network from a remote data processing system." Hawthorne discloses this limitation ([0039], [0040], [0135]-[0137]: a storage device stores instructions in a system, and stored instructions can be transferred from a remote system over a data communication network). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Hawthorne's generating sequences of data elements using cross-attention operations into Wisotzky and Li's invention, as they are related to the same field of endeavor of feature learning. The motivation to combine these arts, as proposed above, is at least that Hawthorne's data storage would help to provide more programs in Wisotzky and Li's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing more programs via the storage device would help to facilitate the feature identification.

In regard to claims 14-16, claims 14-16 are computer program product claims corresponding to claims 3-4 and 5+6 above and, therefore, are rejected for the same reasons set forth in the rejections of claims 3-4 and 5+6.

In regard to claims 19-20, claims 19-20 are system claims corresponding to claims 3+4 and 5+6 above and, therefore, are rejected for the same reasons set forth in the rejections of claims 3+4 and 5+6.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (Li), "MIA-Net: Multi-information aggregation network combining transformers and convolutional feature learning for polyp segmentation," Knowledge-Based Systems, Volume 247, 22 April 2022, 108824, ISSN 0950-7051, https://doi.org/10.1016/j.knosys.2022.108824, and Wisotzky et al. (Wisotzky), US 2021/0137375, as applied to claim 1, further in view of Gan et al. (Gan), US 11,263,003.

In regard to claim 12, Li and Wisotzky disclose the computer program product of claim 10. But Li and Wisotzky fail to explicitly disclose "wherein the stored program instructions are stored in a computer readable storage device in a server data processing system, and wherein the stored program instructions are downloaded in response to a request over a network to a remote data processing system for use in a computer readable storage device associated with the remote data processing system, further comprising: program instructions to meter use of the program instructions associated with the request; and program instructions to generate an invoice based on the metered use." Gan discloses this limitation (col. 9, lines 25-51; col. 21, lines 5-24; claim 13: code may be stored in the storage medium, code may be downloaded over the network to a remote system in response to a request, and metering use of the system associated with the request and billing for the use). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Gan's intelligent versioning of ML models into Wisotzky and Li's invention, as they are related to the same field of endeavor of learning models. The motivation to combine these arts, as proposed above, is at least that Gan's metering of usage would add more metered resource usage in Wisotzky and Li's system.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that adding metered resource usage would help to monetize the resource usage.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure.

PATENT PUB. # | PUB. DATE | INVENTOR(S) | TITLE
US 20210397944 A1 | 2021-12-23 | Lakshmanan et al. | Automated Structured Textual Content Categorization Accuracy With Neural Networks

Lakshmanan et al. disclose: to provide automated categorization of structured textual content, individual nodes of textual content from a document object model encapsulation of the structured textual content have a multidimensional vector associated with them, where the values of the various dimensions of the multidimensional vector are based on the textual content in the corresponding node, the visual features applied or associated with the textual content of the corresponding node, and positional information of the textual content of the corresponding node. The multidimensional vectors are input to a neighbor-imbuing neural network. The enhanced multidimensional vectors output by the neighbor-imbuing neural network are then provided to a categorization neural network. The resulting output can be in the form of multidimensional vectors whose dimensionality is proportional to the categories into which the structured textual content is to be categorized. A weighted merge takes into account multiple nodes that are grouped together … see abstract.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XUYANG XIA, whose telephone number is (571) 270-3045. The examiner can normally be reached Monday-Friday, 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XUYANG XIA/
Primary Examiner, Art Unit 2143

Prosecution Timeline

Sep 21, 2023
Application Filed
Mar 31, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596962
DATA TRANSMISSION USING DATA PRIORITIZATION
2y 5m to grant Granted Apr 07, 2026
Patent 12586180
ASSESSMENT OF IMAGE QUALITY FOR A MEDICAL DIAGNOSTICS DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12572840
CONTROLLING QUANTUM COMMUNICATION VIA QUANTUM MEMORY MANAGEMENT
2y 5m to grant Granted Mar 10, 2026
Patent 12561594
QUANTUM CIRCUITS FOR MATRIX TRACE ESTIMATION
2y 5m to grant Granted Feb 24, 2026
Patent 12530367
SYSTEM FOR TRANSFORMATION OF DATA STRUCTURES TO MAINTAIN DATA ATTRIBUTE EQUIVALENCY IN DIAGNOSTIC DATABASES
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
71%
Grant Probability
99%
With Interview (+53.8%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 460 resolved cases by this examiner. Grant probability derived from career allow rate.
