Prosecution Insights
Last updated: April 19, 2026
Application No. 17/948,625

GENERATING SURROGATE PROGRAMS USING ACTIVE LEARNING

Status: Non-Final OA (§101, §103)
Filed: Sep 20, 2022
Examiner: JEON, JAE UK
Art Unit: 2193
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)

Predictions: 75% grant probability (Favorable); 1-2 OA rounds expected; 2y 8m to grant; 99% with interview.

Examiner Intelligence

Career Allow Rate: 75%, above average (296 granted / 395 resolved; +19.9% vs TC avg)
Interview Lift: +47.4% on resolved cases with interview
Avg Prosecution: 2y 8m
Currently Pending: 40
Total Applications: 435 (across all art units)

Statute-Specific Performance

§101: 26.8% (-13.2% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 3.7% (-36.3% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 395 resolved cases.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. This Office Action is in response to the application filed on 09/20/2022. Claims 1-20 are pending in this application. Claims 1, 11 and 16 are independent claims.

Claim Rejections - 35 USC § 101

2. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

3. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Under Step 1, independent claims 1, 11 and 16 each fall within one of the four statutory categories (a method, a product, and a system, respectively). Claims 1, 11 and 16 similarly recite "a method of providing a surrogate program for a program endpoint, comprising: obtaining, by a processor set, a set of plural input/output pairs generated using the program endpoint; generating, by the processor set, transformations based on the input/output pairs; generating, by the processor set, a model that classifies inputs of the input/output pairs to ones of the transformations based on parameters of one or more strings of the inputs; receiving, by the processor set, a new input; selecting, by the processor set and using the model, one of the transformations based on parameters of one or more strings of the new input; and generating, by the processor set, a new output by applying the selected one of the transformations to the new input".
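For reference, the pipeline recited in claims 1, 11 and 16 (derive transformations from input/output pairs, train a classifier over string parameters, then route new inputs through it) can be sketched as follows. This is an illustrative reconstruction, not code from the application: the endpoint, the fixed transformation set, and the feature-keyed lookup model are all assumptions.

```python
# Illustrative sketch of the claimed surrogate-program pipeline (hypothetical
# names throughout; the application discloses no source code).

def program_endpoint(s: str) -> str:
    """Stand-in for the opaque endpoint being modeled."""
    return s.upper() if s.isalpha() else s[::-1]

# Step 1: obtain input/output pairs generated using the endpoint.
inputs = ["abc", "dog", "a1b2", "12-34"]
pairs = [(i, program_endpoint(i)) for i in inputs]

# Step 2: a fixed candidate set of string transformations (an assumption; a
# real system would synthesize these from the pairs).
transformations = {
    "upper": str.upper,
    "reverse": lambda s: s[::-1],
}

# Step 3: label each input with a transformation that explains its output,
# then build a model keyed on a string parameter (here: purely alphabetic?).
def features(s: str):
    return (s.isalpha(),)

model = {}
for inp, out in pairs:
    for name, fn in transformations.items():
        if fn(inp) == out:
            model[features(inp)] = name
            break

# Steps 4-6: receive a new input, select a transformation via the model, and
# generate a new output by applying the selected transformation.
def surrogate(new_input: str) -> str:
    name = model[features(new_input)]
    return transformations[name](new_input)
```

The surrogate never inspects the endpoint's source, which is the point of claim 3: it is built only from observed input/output behavior.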
The limitation of claims 1, 11 and 16 of "generating, by the processor set, transformations based on the input/output pairs," as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. For example, but for the recitation of the processor set, "generating transformations" (designing or writing code) in the context of this claim encompasses a user generating transformations based on the input/output pairs with pen and paper or in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A Prong 1.

The limitation of claims 1, 11 and 16 of "generating, by the processor set, a model that classifies inputs of the input/output pairs to ones of the transformations based on parameters of one or more strings of the inputs," as drafted, likewise covers performance in the mind but for the recitation of generic computer components. For example, but for the recitation of the processor set, "generating a model" (designing or writing code) encompasses a user generating a model that classifies inputs of the input/output pairs to ones of the transformations based on parameters of one or more strings of the inputs with pen and paper or in the human mind. Accordingly, this limitation also falls within the "Mental Processes" grouping of abstract ideas under Step 2A Prong 1.

The limitation of claims 1, 11 and 16 of "selecting, by the processor set and using the model, one of the transformations based on parameters of one or more strings of the new input," as drafted, likewise covers performance in the mind but for the recitation of generic computer components. For example, but for the recitation of the processor set, "selecting" encompasses a user selecting, using the model, one of the transformations based on parameters of one or more strings of the new input with pen and paper or in the human mind. Accordingly, this limitation also falls within the "Mental Processes" grouping of abstract ideas under Step 2A Prong 1.

This judicial exception is not integrated into a practical application. In particular, claims 1, 11 and 16 recite additional elements such as "obtaining, by a processor set, a set of plural input/output pairs generated using the program endpoint." Under the broadest reasonable interpretation, this element amounts to mere data gathering under MPEP § 2106.05(g) (Insignificant Extra-Solution Activity), which does not impose any meaningful limits on practicing the mental process. Accordingly, this additional element does not integrate the abstract idea into a practical application. The claim is directed to insignificant additional elements under Step 2A Prong 2 and Step 2B.

Claims 1, 11 and 16 further recite additional elements such as "receiving, by the processor set, a new input."
Under the broadest reasonable interpretation, this element amounts to mere data gathering under MPEP § 2106.05(g) (Insignificant Extra-Solution Activity), which does not impose any meaningful limits on practicing the mental process. It therefore does not integrate the abstract idea into a practical application (Step 2A Prong 2 and Step 2B).

Claims 1, 11 and 16 further recite additional elements such as "generating, by the processor set, a new output by applying the selected one of the transformations to the new input." Under the broadest reasonable interpretation, this element amounts to mere data outputting under MPEP § 2106.05(g) (Insignificant Extra-Solution Activity), which does not impose any meaningful limits on practicing the mental process. It therefore does not integrate the abstract idea into a practical application (Step 2A Prong 2 and Step 2B).

Claims 2, 12 and 17 recite additional elements such as "each of the input/output pairs comprises: a string input provided to the program endpoint; and a string output returned from the program endpoint in response to the string input." Under the broadest reasonable interpretation, this element amounts to a field of use under MPEP § 2106.05(h) (Field of Use and Technological Environment), which does not impose any meaningful limits on practicing the mental process. The claim is directed to an abstract idea under Step 2A Prong 2 and Step 2B.

The limitation of claim 3 of "performs the generating the transformations and the generating the model without knowledge of source code of the program endpoint," as drafted, likewise covers performance in the mind but for the recitation of generic computer components. For example, but for the recitation of the processor set, "generating the transformations and the model" (designing or writing code) encompasses a user performing the generating of the transformations and of the model, without knowledge of the source code of the program endpoint, with pen and paper or in the human mind. Accordingly, the claim recites an abstract idea under Step 2A Prong 1.

Claims 4, 13 and 18 recite additional elements such as "an application programming interface (API) endpoint that receives a string input and returns a string output."
Under the broadest reasonable interpretation, this element amounts to a field of use under MPEP § 2106.05(h) (Field of Use and Technological Environment), which does not impose any meaningful limits on practicing the mental process. The claim is directed to an abstract idea under Step 2A Prong 2 and Step 2B.

Claim 5 recites additional elements such as "the model comprises an interpretable model." Under the broadest reasonable interpretation, this element likewise amounts to a field of use under MPEP § 2106.05(h), which does not impose any meaningful limits on practicing the mental process (Step 2A Prong 2 and Step 2B).

Claim 6 recites additional elements such as "the generating the model comprises using decision tree learning." Under the broadest reasonable interpretation, this element likewise amounts to a field of use under MPEP § 2106.05(h), which does not impose any meaningful limits on practicing the mental process (Step 2A Prong 2 and Step 2B).
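Claims 5-6 recite that the model is interpretable and generated by decision tree learning. A minimal sketch of what such a classifier could look like over boolean string parameters follows; the feature set, training data, and hand-rolled learner are illustrative assumptions, not code disclosed by the application or the cited art.

```python
# Tiny decision-tree learner over boolean string features, sketching how
# "decision tree learning" might map inputs to transformation labels.
# The resulting nested-tuple tree is directly readable (interpretable).

FEATURES = {
    "is_alpha": str.isalpha,
    "has_digit": lambda s: any(c.isdigit() for c in s),
    "starts_upper": lambda s: s[:1].isupper(),
}

def build_tree(rows, feats):
    """rows: list of (string, transformation_label); feats: feature names."""
    labels = {label for _, label in rows}
    if len(labels) == 1 or not feats:
        # Leaf: predict the majority label among the remaining rows.
        return max(labels, key=lambda l: sum(1 for _, y in rows if y == l))
    name, rest = feats[0], feats[1:]
    test = FEATURES[name]
    yes = [r for r in rows if test(r[0])]
    no = [r for r in rows if not test(r[0])]
    if not yes or not no:
        return build_tree(rows, rest)  # feature does not split; skip it
    return (name, build_tree(yes, rest), build_tree(no, rest))

def classify(tree, s):
    """Walk the tree until a leaf (a transformation label) is reached."""
    while isinstance(tree, tuple):
        name, yes, no = tree
        tree = yes if FEATURES[name](s) else no
    return tree

# Training data: each input labeled with the transformation explaining it.
rows = [("abc", "upper"), ("dog", "upper"), ("a1", "reverse"), ("2b", "reverse")]
tree = build_tree(rows, list(FEATURES))
```

Here the learned tree is the single split `("is_alpha", "upper", "reverse")`, which a human can read off directly, illustrating why a decision tree is often cited as an interpretable model.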
Claim 7 recites additional elements such as "refining the model using active learning with the program endpoint." Under the broadest reasonable interpretation, this element amounts to a field of use under MPEP § 2106.05(h) (Field of Use and Technological Environment), which does not impose any meaningful limits on practicing the mental process. The claim is directed to an abstract idea under Step 2A Prong 2 and Step 2B.

Claims 8, 14 and 19 recite additional elements such as "generating additional inputs." Under the broadest reasonable interpretation, this element amounts to mere data gathering under MPEP § 2106.05(g) (Insignificant Extra-Solution Activity), which does not impose any meaningful limits on practicing the mental process (Step 2A Prong 2 and Step 2B).

Claims 8, 14 and 19 further recite additional elements such as "obtaining additional outputs from the program endpoint using the additional inputs." Under the broadest reasonable interpretation, this element amounts to mere data outputting under MPEP § 2106.05(g) (Insignificant Extra-Solution Activity), which does not impose any meaningful limits on practicing the mental process (Step 2A Prong 2 and Step 2B).

The limitation of claims 8, 14 and 19 of "changing the model based on the additional inputs and the additional outputs," as drafted, covers performance in the mind but for the recitation of generic computer components. For example, but for the recitation of the processor set, "changing the model" (designing or updating) encompasses a user changing the model based on the additional inputs and the additional outputs with pen and paper or in the human mind. Accordingly, the claim recites an abstract idea under Step 2A Prong 1.

The limitation of claims 9, 15 and 20 of "the additional inputs satisfy a constraint in the model; and the changing the model comprises adding a new constraint to the model," as drafted, likewise covers performance in the mind but for the recitation of generic computer components. For example, but for the recitations of "satisfying" and "adding," this claim encompasses a user determining whether the additional inputs satisfy a constraint in the model, and adding a new constraint to the model, with pen and paper or in the human mind. Accordingly, the claim recites an abstract idea under Step 2A Prong 1.

Claim 10 recites additional elements such as "the generating the additional inputs comprises using a satisfiability modulo theories solver." Under the broadest reasonable interpretation, this element amounts to mere data outputting under MPEP § 2106.05(g) (Insignificant Extra-Solution Activity), which does not impose any meaningful limits on practicing the mental process (Step 2A Prong 2 and Step 2B).

Dependent claims 2-10, 12-15 and 17-20 are similarly rejected under the same rationale as cited above, as these claims do not include additional elements sufficient to amount to significantly more than the judicial exception. These claims merely further elaborate the mental process itself or provide additional definition of the process, which does not impose any meaningful limits on practicing the abstract idea. Claims 2-10, 12-15 and 17-20 are also rejected for incorporating the deficiencies of their independent claims 1, 11 and 16, respectively.
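The active-learning refinement that claims 7-8 recite (generate additional inputs, obtain the endpoint's outputs for them, and change the model where it disagrees) can be sketched as a small loop. Everything here is a stand-in assumed for illustration: the endpoint, the dict-backed `SurrogateModel`, and its default hypothesis are not disclosed code.

```python
# Sketch of the active-learning refinement of claims 7-8: propose additional
# inputs, obtain the endpoint's outputs, and update the model on disagreement.

def program_endpoint(s: str) -> str:
    """Stand-in for the opaque endpoint being modeled."""
    return s.strip().lower()

class SurrogateModel:
    def __init__(self):
        self.memo = {}                      # input -> observed output

    def predict(self, s: str) -> str:
        return self.memo.get(s, s.lower())  # default hypothesis: lowercase

    def refine(self, s: str, observed: str):
        if self.predict(s) != observed:     # model was wrong: record the case
            self.memo[s] = observed

def active_learning_round(model, candidates):
    """One round: query the endpoint on each candidate, refine on mismatch."""
    corrections = 0
    for s in candidates:
        observed = program_endpoint(s)      # obtain an additional output
        if model.predict(s) != observed:
            model.refine(s, observed)
            corrections += 1
    return corrections

model = SurrogateModel()
n = active_learning_round(model, ["ABC", "  Hi  ", "ok"])
```

Only the whitespace-padded candidate contradicts the default hypothesis, so a single round records one correction, showing how disagreement with the endpoint drives the "changing the model" step of claim 8.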
Claim Rejections - 35 USC § 103

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1-3, 6, 11-12 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Anand (US Patent 11841889) in view of Strope (US PGPub 20170039174).

As per Claim 1, Anand teaches a method of providing a surrogate program for a program endpoint, comprising:

obtaining, by a processor set, a set of plural input/output pairs generated using the program endpoint; (Col 1, lines 38-49, Programming by example (PBE) is a technique involving a computer generating code based on examples from a user. In the context of data transformation (e.g., string transformations), PBE may be used to generate transformation code for a data set based on user input-output examples. For example, a PBE system generates a transformation from a set of example input-output pairs. The PBE system then applies that transformation to all remaining inputs to generate the complete set of transformed outputs. In some circumstances, this approach is faster, easier, and more efficient than having the user write out the transformations in an expression language. Col 2, lines 50-52, In some implementations, the method is performed by a PBE application executing on the computing device [endpoint].)

generating, by the processor set, transformations based on the input/output pairs; (Col 1, lines 38-44, as quoted above. Col 2, lines 59-64, The method further includes iteratively processing a plurality of user example transformations and a user hint corresponding to one of the example transformations, the user hint expressing a causal basis for the corresponding example transformation.)

generating, by the processor set, a model that classifies inputs of the input/output pairs to ones of the transformations based on parameters of one or more strings of the inputs; (Col 17, lines 37-41, In some implementations, the transformation function is updated based on the tokenized input strings. In some implementations, the contiguous substrings (hints) are used by the computing system to score predicates chosen by input classifiers for their corresponding subprogram or domain. Col 18, lines 40-46, In some implementations, a classifier is generated to identify which subprogram to apply for a given input. In some implementations, a decision tree is generated based on the existence or absence of token matches.
In some implementations, one or more operators are used in the identification, such as STARTSWITH( ), ENDSWITH( ), and EQUALS( ).)

Anand does not specifically teach, however Strope teaches, receiving, by the processor set, a new input; (Par 20, A classification [model] of the input text is then produced based on a decoder language model, the generated vector stream, the input text [new input into generated transformation code] and the author. The decoder language model stores distributions of words used by particular authors in the plurality of training texts that caused the encoder language model to produce particular vectors representing the words.)

selecting, by the processor set and using the model, one of the transformations based on parameters of one or more strings of the new input; (Par 17, Other transformations can also be performed using these techniques. For example, a user may request that the input text be transformed into a style common to a particular group of authors, e.g., based on text produced by employees of a particular company, text by authors writing in a particular field, text by authors published in a particular journal, or other groups. Par 38, Fig. 3, The system 300 includes an author transformation decoder 310, which is one of the decoder language models 164 configured to perform a transformation of input text 302 to the style [parameter] of a particular author.)

and generating, by the processor set, a new output by applying the selected one of the transformations to the new input. (Par 10, FIG. 3 shows an example system for transforming input text into an output text rewritten according to the style of a particular author. Par 15, From this information, the language models can predict the most likely words the particular author would use in the context of the input text, and produce an output text reflecting these predictions. The output text, therefore, is a transformation of the input text into the linguistic style of the particular author. For example, given an input text of "what is that light in the window," and a requested author of "William Shakespeare," the input text may be transformed into an output text representing how William Shakespeare would likely have written the input text based on language models generated from analysis of his work. In such a case, the input text of "what is that light in the window" could be transformed, for example, into "what light through yonder window breaks.")

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add receiving, by the processor set, a new input; selecting, by the processor set and using the model, one of the transformations based on parameters of one or more strings of the new input; and generating, by the processor set, a new output by applying the selected one of the transformations to the new input, as conceptually seen from the teaching of Strope, into that of Anand, because this modification can improve the scalability of contextual embedding for selecting the appropriate transformation and model for a particular parameter when generating the output string.

As per Claim 2, Anand teaches the method of claim 1, wherein each of the input/output pairs comprises: a string input provided to the program endpoint; and a string output returned from the program endpoint in response to the string input. (Col 1, lines 38-44, as quoted above. Col 5, lines 51-58, For string transformations, the transformation examples include input strings 234, output strings 236, and, in some cases, hints 238. In some implementations, the expression generator 228 provides a generator user interface 230 for a user to construct programming expressions (e.g., by providing transformation examples 232 and/or transformation functions 240).)

As per Claim 3, Anand teaches the method of claim 1, wherein the processor set performs the generating the transformations and the generating the model without knowledge of source code of the program endpoint. (Col 1, lines 28-49, Additionally, some data visualization applications enable the user to transform the data sets by inputting code in a programming language (e.g., an expression or calculation language). However, this requires the users to learn the programming language, which can be difficult to use and hard for users to identify the appropriate function, or set of functions, for a desired data transformation. Programming by example (PBE) is a technique involving a computer generating code based on examples from a user. … this approach is faster, easier, and more efficient than having the user write out the transformations in an expression language. Col 3, line 66 - Col 4, line 10, Users who are not familiar with expression programming can find it difficult to apply transformations to their data sets. Programming by example (PBE) systems enable users to describe their desired transformation by examples instead of requiring knowledge of the programming language. However, in some circumstances PBE systems require a large set of user examples to identify the desired transformation for the data set. The systems, methods, and user interfaces described herein enable users to supply hints and/or conditions along with their examples, thereby improving efficiency and alleviating the need for a large set of user examples.)
As per Claim 6, Anand teaches the method of claim 1, wherein the generating the model comprises using decision tree learning. (Col 18, lines 39-44, In some implementations, this approach results in a final transform graph for each domain. In some implementations, a classifier is generated to identify which subprogram to apply for a given input. In some implementations, a decision tree is generated based on the existence or absence of token matches.)

Re Claim 11: it is the product claim, having similar limitations to claim 1, and is rejected under the same rationale as claim 1.
Re Claim 12: it is the product claim, having similar limitations to claim 2, and is rejected under the same rationale as claim 2.
Re Claim 16: it is the system claim, having similar limitations to claim 1, and is rejected under the same rationale as claim 1.
Re Claim 17: it is the system claim, having similar limitations to claim 2, and is rejected under the same rationale as claim 2.

7. Claims 4, 13 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Anand (US Patent 11841889) in view of Strope (US PGPub 20170039174), and further in view of Leliwa (US PGPub 20170017635).

As per Claim 4, neither Anand nor Strope specifically teaches, however Leliwa teaches, the method of claim 1, wherein the program endpoint comprises an application programming interface (API) endpoint that receives a string input and returns a string output. (Par 120, Typically, a generated API takes a text or a set of texts as an input and delivers the results of the extraction process as an output. Par 36, Other systems and applications 106 are systems, including commercial systems and associated software applications, that have the capability to access and use the output of the NLP system 102 through one or more application programming interfaces (APIs) as further described below. Par 39-45, In the above example, X is the output of the extraction process, e.g. a word, a phrase, a clause or a combination of them.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add an API endpoint that receives a string input and returns a string output, as conceptually seen from the teaching of Leliwa, into that of Anand and Strope, because this modification facilitates communication and data exchange while enabling integration, scalability and modularity, simplifying the development and maintenance of systems that use the API.

Re Claim 13: it is the product claim, having similar limitations to claim 4, and is rejected under the same rationale as claim 4.
Re Claim 18: it is the system claim, having similar limitations to claim 4, and is rejected under the same rationale as claim 4.

8. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Anand (US Patent 11841889) in view of Strope (US PGPub 20170039174), and further in view of Dalli (US PGPub 20220198254).

As per Claim 5, neither Anand nor Strope specifically teaches, however Dalli teaches, the method of claim 1, wherein the model comprises an interpretable model. (Par 308, Explainable Transformer architectures allow interpretable models to be created in a flexible manner. They may be trained in one iteration without the need to have an external induction step, as well as the possibility to train in phases or by incorporating induction for parts of the model.)
Therefore, it would have been obvious for one of the ordinary skill in the art before the effective filing date of the claimed invention to add an interpretable model, as conceptually seen from the teaching of Dalli, into that of Anand and Strope because this modification can help enhance transparency with rapid debugging. 9. Claims 7-9, 14-15 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Anand (US Patent 11841889), in view of Strope (US PGPub 20170039174), and further in view of Kasai (US PGPub 20200394511). As per Claim 7, neither Anand nor Strope specifically teaches, however Kasai teaches of the method of claim 1, further comprising refining the model using active learning with the program endpoint. (Par 20, As also depicted in FIG. 2, in at least one embodiment, an amount of labeled data 210 from S2 208 can be generated automatically via active learning, in which at least one active learning algorithm actively identifies relevant examples from S2 208 to be labeled by a user and used to refine the model M 206.) Therefore, it would have been obvious for one of the ordinary skill in the art before the effective filing date of the claimed invention to add refining the model using active learning with the program endpoint, as conceptually seen from the teaching of Kasai, into that of Anand and Strope because this modification can help reduce data requirements and improve model performance. As per Claim 8, Anand does not specifically teach, however Strope teaches of the method of claim 7, wherein the active learning comprises: generating additional inputs; obtaining additional outputs from the program endpoint using the additional inputs; and changing the model based on the additional inputs and the additional outputs. 
(Par 27, In some cases, the text processing application 114 may also allow the user to specify a particular portion of the entered text as input text 130, for example, by allowing the user to select the input text 130 using an input device. The input text 130 may also be entered directly by the user.)

As per Claim 9, Anand teaches the method of claim 8 wherein: the additional inputs satisfy a constraint in the model; and the changing the model comprises adding a new constraint to the model. (Col 7, line 58-Col 8, line 6, Thus, the user interface 230 allows the user to provide examples 232 of transforms for individual data values, and the expression generator 228 infers a function 240 based on the examples 232 provided. A user can assist with the generation of the function 240 by providing hints 238 for some (or all) of the examples. A hint 238 identifies a portion of an input data value 234 that is relevant to making the transformation. In some implementations, the hints are treated as a soft constraint and the computing device generates, and may propose, options that don't match the hints. For example, if a user mistakenly supplies inaccurate hint information, or the hint results in a suboptimal transformation function, the computing device may identify a transformation function that does not use the hint information (e.g., the hint information is not included in a conditional statement of the transformation function).)

Re Claim 14: this product claim has limitations similar to those of claim 8, and is therefore rejected under the same rationale as claim 8.

Re Claim 15: this product claim has limitations similar to those of claim 9, and is therefore rejected under the same rationale as claim 9.

Re Claim 19: this system claim has limitations similar to those of claim 8, and is therefore rejected under the same rationale as claim 8.
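The active-learning refinement recited in claims 7-9 (generate additional inputs, obtain additional outputs from the program endpoint, change the model) can be sketched as a simple query loop. This is an editor's hypothetical stand-in, not code from the application or the cited references; `program_endpoint`, the classification key, and the two transformation classes are all invented for illustration.

```python
# Illustrative active-learning refinement loop (claims 7-9):
# query the endpoint only where the surrogate model is still uncertain,
# then change the model based on the new input/output pairs.

def program_endpoint(s: str) -> str:
    """Hypothetical string-in/string-out endpoint being approximated:
    uppercases alphabetic inputs and reverses anything else."""
    return s.upper() if s.isalpha() else s[::-1]

def refine(model: dict, candidate_inputs) -> dict:
    """For each candidate input not yet covered by the model, obtain an
    output from the endpoint and record the observed transformation
    class -- analogous to adding a new constraint to the model."""
    for inp in candidate_inputs:
        key = inp.isalpha()      # string parameter used for classification
        if key not in model:     # only query where the model is uncertain
            out = program_endpoint(inp)
            model[key] = "upper" if out == inp.upper() else "reverse"
    return model

model = {}                       # initially empty surrogate model
model = refine(model, ["abc", "a1b2"])
print(model)                     # -> {True: 'upper', False: 'reverse'}
```

Querying only uncovered inputs is what makes this "active": the loop spends endpoint calls where they change the model, which is the data-efficiency rationale the rejection attributes to Kasai.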
Re Claim 20: this system claim has limitations similar to those of claim 9, and is therefore rejected under the same rationale as claim 9.

10. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Anand (US Patent 11841889), in view of Strope (US PGPub 20170039174), in view of Kasai (US PGPub 20200394511), and further in view of Prasad (US PGPub 20110283147).

As per Claim 10, none of Anand, Strope, and Kasai specifically teaches, but Prasad does teach, the method of claim 9 wherein the generating the additional inputs comprises using a satisfiability modulo theories solver. (Claim 4, wherein determining the user-input data that satisfy all the user-input constraints comprises solving the user-input constraints using a satisfiability-modulo-theories solver to obtain the user-input data.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add using a satisfiability modulo theories solver, as taught by Prasad, to the combination of Anand and Strope, because this modification provides higher abstraction and logical consistency for the inputs to the transformation and the model.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAE UK JEON, whose telephone number is (571) 270-3649. The examiner can normally be reached 9am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chat Do, can be reached at 571-272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAE U JEON/
Primary Examiner, Art Unit 2193
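For readers unfamiliar with the technique cited against claim 10: generating additional inputs that satisfy the model's constraints is a constraint-solving problem. A real implementation would hand the constraints to an SMT solver (e.g., Z3); the editor's sketch below substitutes a brute-force enumerator over a tiny alphabet so it stays self-contained, with each constraint expressed as a predicate on the candidate string. All names and the example constraints are hypothetical.

```python
# Stand-in for SMT-based input generation (claim 10): enumerate a small
# search space and keep strings satisfying every model constraint.
from itertools import product

def generate_inputs(constraints, alphabet="ab1", max_len=3):
    """Return all strings over `alphabet` up to `max_len` characters that
    satisfy every constraint (each constraint is a string predicate)."""
    found = []
    for n in range(1, max_len + 1):
        for chars in product(alphabet, repeat=n):
            s = "".join(chars)
            if all(c(s) for c in constraints):
                found.append(s)
    return found

# Hypothetical constraints the surrogate model wants probed:
# length exactly 2, containing at least one digit.
constraints = [
    lambda s: len(s) == 2,
    lambda s: any(ch.isdigit() for ch in s),
]
print(generate_inputs(constraints))  # -> ['a1', 'b1', '1a', '1b', '11']
```

An SMT solver earns its keep where brute force cannot: it handles symbolic constraints over unbounded string lengths and produces satisfying inputs without enumerating the space, which is the "higher abstraction" rationale the rejection gives for combining Prasad.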

Prosecution Timeline

Sep 20, 2022
Application Filed
Oct 18, 2023
Response after Non-Final Action
Nov 06, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602216
SCHEMA REGISTRY FOR CLIENT-SERVER ENVIRONMENTS
2y 5m to grant Granted Apr 14, 2026
Patent 12596549
METHOD AND SYSTEM FOR ACCELERATION OF SLOWER DATA PROCESSING CODES IN MACHINE LEARNING PIPELINES
2y 5m to grant Granted Apr 07, 2026
Patent 12591433
COMPILER ALGORITHM FOR GPU PREFETCHING
2y 5m to grant Granted Mar 31, 2026
Patent 12586006
DEPLOYMENT OF SELF-CONTAINED DECISION LOGIC
2y 5m to grant Granted Mar 24, 2026
Patent 12579053
CONTEXTUAL TEST CODE GENERATION
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+47.4%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 395 resolved cases by this examiner. Grant probability derived from career allow rate.
