DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is in response to applicant’s response filed July 21, 2025.
The instant application, having Application No. 17/687,577 and filed on March 4, 2022, is the parent application to PCT/US22/52604, filed 12/12/2022.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 6/12/2025 was filed before the mailing date of the Non-Final Office Action. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Status of the Claims
Claim 20 was previously canceled; claims 1-19 and 21 are currently pending in the application.
Response to Amendment
Regarding the 35 U.S.C. 101 rejections: Applicant's arguments are not persuasive; the rejections are maintained.
Examiner Notes
Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 and 21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
With respect to claim 9: under Step 1, this claim falls within at least one of the four categories of patent-eligible subject matter, as it is directed to a method.
Under Step 2A, Prong One:
The limitations of claim 9,
…, teaching a machine learning algorithm to synthesize the computer program …;
identifying a significant input …;
modify the computer program …
replacing a plurality of instances of a non-semantically-meaningful variable …;
freezing the computer program;
performing a plurality of calls to the machine learning algorithm that request renaming of respective instances of … variables;
causing the machine learning algorithm to stop renaming …and
causing the textual representation of the computer program, ….”
as drafted, are functions that, under their broadest reasonable interpretation, recite the abstract idea of a mental process. The limitations encompass a human mind carrying out the functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. For example, the limitation “teaching a machine learning algorithm to synthesize the computer program,” other than reciting “teach a machine learning algorithm,” contains nothing that precludes the step from practically being performed in the mind: but for the “teach a machine learning algorithm” language, synthesizing the computer program encompasses a user/programmer manually synthesizing/creating the computer program based on given data. Similarly, “identifying a significant input” encompasses the user manually identifying the significant input from a plurality of inputs based on the given information (the definition of a significant input). The limitation “modify the computer program using the machine learning algorithm,” other than reciting “the machine learning algorithm,” contains nothing that precludes the step from practically being performed in the mind: but for “the machine learning algorithm,” modifying the computer program encompasses the user/programmer manually modifying the computer program based on given data. The limitation “replacing a plurality of instances of a non-semantically-meaningful variable” encompasses the user/programmer manually replacing non-semantically-meaningful variables. The limitation “freezing the computer program” encompasses the user keeping the computer program intact. The limitation “performing a plurality of calls to the machine learning algorithm that request renaming of respective instances of … variables,” other than reciting “calls to the machine learning algorithm,” contains nothing that precludes the step from practically being performed in the mind: but for the “calls to the machine learning algorithm,” renaming the variables encompasses the user/programmer manually renaming the variables based on given data. The limitation “causing the machine learning algorithm to stop renaming,” other than reciting “the machine learning algorithm,” contains nothing that precludes the step from practically being performed in the mind: but for “the machine learning algorithm,” stopping the renaming encompasses the user/programmer manually stopping the renaming based on a condition, such as a stop word.
The limitation “causing the textual representation of the computer program” encompasses the user manually presenting the computer program. Thus, these limitations recite and fall within the “Mental Processes” grouping of abstract ideas under Step 2A, Prong One.
Under Step 2A, Prong Two:
The judicial exception is not integrated into a practical application. The claim recites the following additional elements:
“a computing system;”
“a machine learning algorithm”;
“receiving information, …;”
“causing a user interface element to be displayed to the user …;”
“receiving the ground truth output…;”
“wherein the semantically-meaningful variable has a name that is derived from a vocabulary of a language and that is based at least on a context …;”
wherein the “causing a user interface element to be displayed …” and “receiving …” limitations are insignificant extra-solution activity, such as gathering and transmitting data. The limitation “wherein the semantically-meaningful variable has a name that is derived from a vocabulary of a language and that is based at least on a context …,” as drafted, merely indicates a field of use or technological environment in which to apply a judicial exception; it does not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application. See MPEP § 2106.05(h). The “computer system” and “machine learning algorithm” are recited so generically (no details whatsoever are provided other than that they are a “computer system” or “machine learning algorithm”) that they represent no more than mere instructions to apply the judicial exception on a computer. See MPEP 2106.05(f). Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application.
Under Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the “causing a user interface element to be displayed …” and “receiving …” limitations are insignificant extra-solution activity, such as gathering and transmitting data, which is recognized as well-understood, routine, conventional activity. See MPEP 2106.05(d). The limitation “wherein the semantically-meaningful variable has a name that is derived from a vocabulary of a language and that is based at least on a context …,” as drafted, merely indicates a field of use or technological environment in which to apply a judicial exception and does not amount to significantly more than the exception itself. See MPEP § 2106.05(h). The “computer system” and “machine learning algorithm” are recited so generically (no details whatsoever are provided other than that they are a “computer system” or “machine learning algorithm”) that they represent no more than mere instructions to apply the judicial exception on a computer and thus do not provide an inventive concept. See MPEP 2106.05(d). Accordingly, the claim is not patent eligible under 35 U.S.C. 101.
With respect to claim 1: under Step 1, this claim falls within at least one of the four categories of patent-eligible subject matter, as it is directed to a system.
This claim recites a system to implement the method disclosed in claim 9 and therefore recites the same abstract idea as claim 9; please see the analysis of claim 9 above.
Claim 1 recites one additional element that is not recited in claim 9, i.e., a memory. However, the memory is recited as a generic computer component and does not amount to significantly more.
With respect to claim 17: under Step 1, this claim falls within at least one of the four categories of patent-eligible subject matter, as it is directed to a computer program product.
This claim recites a computer program product to implement the method disclosed in claim 9 and therefore recites the same abstract idea as claim 9; please see the analysis of claim 9 above.
Claim 17 recites one additional element that is not recited in claim 9, i.e., a computer-readable storage medium. However, the medium is recited as a generic computer component and does not amount to significantly more.
With respect to claims 2, 10 and 18, “select the idiomatic function from a plurality of possible idiomatic functions by using a guarded context-free grammar;
wherein the guarded context-free grammar includes a plurality of ordered rules having a plurality of respective rankings in a hierarchical ranking order;
wherein the plurality of ordered rules is configured to generate the plurality of respective possible idiomatic functions; and
wherein the computer-executable instructions are executable by the processor system to select the idiomatic function based at least on a ranking corresponding to the idiomatic function relative to a ranking corresponding to each other possible idiomatic function in the plurality of possible idiomatic functions.” The limitation “select the idiomatic function …,” other than reciting “the processor system,” contains nothing that precludes the step from practically being performed in the mind: but for “the processor system,” selecting the idiomatic function encompasses the user manually selecting the idiomatic function. Similarly, the limitation “wherein the computer-executable instructions are executable by the processor system to select the idiomatic function …,” other than reciting “the computer-executable instructions” and “the processor system,” contains nothing that precludes the step from practically being performed in the mind: but for “the computer-executable instructions” and “the processor system,” selecting the idiomatic function encompasses the user manually selecting the idiomatic function. The other two wherein clauses merely indicate a field of use or technological environment in which to apply a judicial exception; they do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application. See MPEP § 2106.05(h). The additional elements, “the computer-executable instructions” and “the processor system,” are recited as generic components and are merely used as a tool to implement the abstract idea on a computer, i.e., the claim merely uses a computer, with instructions, as a tool to perform the abstract idea.
With respect to claims 3 and 19, “query a pre-trained language model with a query that includes a portion of the computer program that precedes the non-semantically-meaningful variable; and
replace the non-semantically-meaningful variable in the computer program with the semantically-meaningful variable based at least on receipt of the semantically-meaningful variable from the pre-trained language model.” Similar to claim 2, under its broadest reasonable interpretation, this covers performance of the limitations in the human mind with no more than pen and paper. Please refer to the analysis of claims 2, 10, and 18 above.
With respect to claims 4 and 12, “configure the idiomatic function to perform the following operations:
extract date-time information, which indicates at least one of a date or a time, from a string;
select a date-time format from a plurality of date-time formats based at least …;
apply the selected date-time format to the date-time information that is extracted from the string; and
wherein the date-time information indicates at least one of a date or a time.” Similar to claim 2, under its broadest reasonable interpretation, this covers performance of the limitations in the human mind with no more than pen and paper. Please refer to analysis of claims 2, 10 and 18 above.
With respect to claims 5 and 13, “extract a number from a string;
select a number format from a plurality of number formats based at least on a determination that the sample output results from application of the selected number format to the sample input; and
apply the selected number format to the number that is extracted from the string.” Similar to claim 2, under its broadest reasonable interpretation, this covers performance of the limitations in the human mind with no more than pen and paper. Please refer to analysis of claims 2, 10 and 18 above.
With respect to claims 6 and 14, “assign a plurality of rankings to a plurality of respective possible computer programs that have a same functionality based at least on readability of the plurality of respective possible computer programs, the plurality of possible computer programs including the computer program, the same functionality being the functionality that is configured to generate the …;
select the computer program from the plurality of possible computer programs based at least on the ranking of the computer program being no less than the ranking of each other possible computer program that is capable of producing an expected result.” Similar to claim 2, under its broadest reasonable interpretation, this covers performance of the limitations in the human mind with no more than pen and paper. Please refer to analysis of claims 2, 10 and 18 above.
With respect to claims 7 and 15, “select the computer program from the plurality of possible computer programs further based at least on the computer program being capable of producing the expected result.” Similar to claim 2, under its broadest reasonable interpretation, this covers performance of the limitations in the human mind with no more than pen and paper. Please refer to analysis of claims 2, 10 and 18 above.
With respect to claims 8, and 16, “identify a set of possible computer programs from which the computer program is to be selected based at least on each possible computer program in the set having the functionality configured to generate the sample output from the sample input and further configured to generate the ground truth output, which is received from the user, from the significant input.” Similar to claim 2, under its broadest reasonable interpretation, this covers performance of the limitations in the human mind with no more than pen and paper. Please refer to analysis of claims 2, 10 and 18 above.
With respect to claim 11, “querying a pre-trained language model with a query that includes a portion of the computer program that precedes the non-semantically-meaningful variable; and
receiving the semantically-meaningful variable from the pre-trained language model as a response to the query;
wherein replacing the non-semantically-meaningful variable with the semantically-meaningful variable comprises:
replacing the non-semantically-meaningful variable in the computer program with the semantically-meaningful variable based at least on receiving the semantically-meaningful variable from the pre-trained language model.” Similar to claim 2, under its broadest reasonable interpretation, this covers performance of the limitations in the human mind with no more than pen and paper. Please refer to the analysis of claims 2, 10, and 18 above.
With respect to claim 21, “in response to renaming a first instance of the non-semantically-meaningful variable, replace a second instance of the non-semantically-meaningful variable with the semantically- meaningful variable by appending frozen text from the computer program to a prompt in a call of the plurality of calls, wherein the frozen text follows the first instance of the non-semantically- meaningful variable until the second instance of the non-semantically-meaningful variable.” Similar to claim 2, under its broadest reasonable interpretation, this covers performance of the limitations in the human mind with no more than pen and paper. Please refer to analysis of claims 2, 10 and 18 above.
Response to Arguments
Applicant's arguments filed 7/21/2025 have been fully considered but they are not persuasive.
At p13 second paragraph of the Remarks, Applicant argued that “First, Applicant submits that an evaluation of claim 1-19 and 21 in accordance with the first prong of Step 2A reveals that none of claims 1-19 and 21 recite a judicial exception. …”
Examiner respectfully disagrees because, as set forth in the office action, the analysis under Step 2A, Prong One identified, e.g., in claim 9, the processes “…, teaching a machine learning algorithm to synthesize the computer program …;
identifying a significant input …;
modify the computer program …
replacing a plurality of instances of a non-semantically-meaningful variable …;
freezing the computer program;
performing a plurality of calls to the machine learning algorithm that request renaming of respective instances of … variables;
causing the machine learning algorithm to stop renaming …and
causing the textual representation of the computer program, ….”
as mental processes, i.e., abstract ideas, because these processes encompass a human mind carrying out the functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Independent claims 1 and 17 recite similar features, i.e., they recite the same abstract idea as claim 9. Applicant’s arguments, such as “The invention is rooted in the field of programming by example, which is a computer programming technique. …,” are general remarks not focused on the claim language. Although the claims are read in light of the specification, the limitations of the specification are not read into the claims.
At p13 last to p14 first paragraph of the Remarks, Applicant argued that “… For example, the approach incorporates a machine learning algorithm to suggest semantically-meaningful variable names in context to replace generic placeholders. ... In another example, the approach introduces an interactive step, which includes identifying a "significant input" where the synthesizer is least certain and requesting the user's expected output (ground truth) for that input. This clever feedback loop enables the system to validate and correct the program with minimal user effort to improve correctness. In yet another example, the approach results in a program that is human-readable and that conforms to a convention of a target domain-specific language as a result of utilizing the idiomatic function and the semantically-meaningful variable.”
Examiner respectfully disagrees because, as set forth in the office action, replacing non-semantically-meaningful variables is a mental process, i.e., a human can perform the process with the aid of paper and pencil. Similarly, identifying a “significant input” is a mental process. Arguments such as “This clever feedback loop enables the system to validate and correct the program with minimal user effort to improve correctness” and “the approach results in a program that is human-readable …” are general remarks not focused on the claim language. Again, although the claims are read in light of the specification, the limitations of the specification are not read into the claims.
At p14 second paragraph of the Remarks, Applicant argued that “… The improvement is concrete: the generated program is simpler, uses intuitive naming, and can be inspected and trusted by the user, unlike conventional outputs. This is not an abstract idea; rather, it is a "specific ... improvement in computer capabilities" (Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1336 (Fed. Cir. 2016)).”
Examiner respectfully disagrees because the improvement to the generated program, such as being “simpler” and “intuitive,” is not a result of any change in technology but a result of mental processes. The improvement is not in computer capabilities; the computer is merely used as a tool and still functions as it did before the instant application.
At p14 third paragraph of the Remarks, Applicant cited McRO, Inc. v. Bandai Namco Games America Inc. and argued that “Our claims likewise recite specific rules (e.g., grammar rules and ranking metrics) and specific algorithms (e.g., machine learning-based renaming and targeted user feedback) to automate and improve program synthesis, replacing what used to be ad-hoc or impossible for humans. As in McRO, the focus of the claims in the present application is a technological improvement (namely, better program synthesis outcomes) achieved through concrete steps, not the mere idea of "automation" or "improvement" in the abstract.”
Examiner respectfully disagrees because McRO is not applicable: the instant claims do not recite any features of the McRO case. Merely reciting rules and algorithms does not mean the claims do not recite an abstract idea. As explained above, better program synthesis outcomes are not a result of any technological improvement, and the recitation of concrete steps does not mean the claims do not recite an abstract idea. As set forth in the office action above, and as explained above, the analysis under Step 2A, Prong One identified an abstract idea in the claims.
At p14 last to p15 first paragraph of the Remarks, Applicant argued that “In determining that the patent at issue in McRO is "focused on a specific asserted improvement in computer animation, i.e., the automatic use of rules of a particular type," the Federal Circuit noted that "[d]efendents provided no evidence that the process previously used by animators is the same as the process required by the claims." McRO at p. 1314. Similarly, the present Office Action provides no evidence that the process previously used by program synthesizers is the same as the process required by the claims. To the contrary, the present Office Action does not include any art-based rejection of the claims.”
Examiner respectfully disagrees because, as explained above, McRO is not applicable here. Whether there is a previous process that is the same as the process required by the instant claims concerns obviousness; it is not a factor in determining whether the instant claims recite an abstract idea.
At p15 first full paragraph of the Remarks, Applicant argued that “The Examiner's contention that the features in the claims are capable of being carried out by a person with pencil and paper is unrealistic. …. No human can internally replicate the behavior of a trained neural network or systematically produce the same results without the computer.”
Examiner respectfully disagrees because, as set forth in the office action above, synthesizing a computer program to include an idiomatic function by providing a sample input and a sample output is a mental process, i.e., a human can perform the synthesizing with the aid of paper and pencil. The machine learning technique merely uses a computer as a tool to implement the identified abstract idea. Even if a human cannot replicate the behavior of a trained neural network or systematically produce the same results, that does not mean the instant claims do not recite an abstract idea.
At p15 second full paragraph of the Remarks, Applicant argued that “In another example, the independent claims recite that a significant input is identified. The system computes which input leads to a relatively high uncertainty (using information- theoretic entropy or similar). This is a calculation over the space of possible programs. A person cannot intuit which test case leads to substantial (e.g., maximum) entropy of a machine learning model's predictions. Such a determination requires analyzing probabilities across many potential programs, which is a highly computational task.”
Examiner respectfully disagrees because, as set forth in the office action above, identifying a significant input is a mental process, i.e., a human can perform the task with the aid of paper and pencil. Calculating entropy and/or probabilities is a mathematical concept, which is also an abstract idea. Further, entropy and analyzing probabilities across many potential programs are not in the claim language.
At p15 third full paragraph of the Remarks, Applicant argued that “In yet another example, the independent claims recite that multiple calls to the machine learning model are performed to rename variables. The machine learning model is iteratively prompted with "frozen text" to guide the machine learning model. This procedure - fixing parts of the code and generating names in sequence - is tailor-made for a computer. A human naming variables typically uses intuition, not an iterative constrained text-generation process. The need to do this arises because language models operate in a specific way; a person does not operate like a machine learning model, such as a GPT-3 model.”
Examiner respectfully disagrees because, as set forth in the office action above, renaming variables is a mental process, i.e., a human can perform the task with the aid of paper and pencil. The machine learning model merely uses a computer with software as a tool to implement the identified abstract idea. The iterative constrained text-generation process and the GPT-3 model are not in the instant claim language.
At p15 last to p16 first paragraph of the Remarks, Applicant argued that “Furthermore, it may be possible for a human to write a program from scratch that is consistent with sample inputs and outputs. However, that is what a programmer does. The claimed embodiments automate the process - the computer cannot be removed from the equation because then no automation exists.”
Examiner respectfully disagrees. The office action merely states that a human can synthesize/write a program with given sample inputs and sample outputs, i.e., it is a mental process. The computer is merely used as a tool to implement the mental process; the office action does not suggest removing the computer from the equation. The process can be automated with a computer as a tool, but automating the process with a computer does not mean the process is not a mental process; it remains a mental process because a human can perform it with the aid of paper and pencil.
At p15 last to p16 first paragraph of the Remarks, Applicant also argued that “The Examiner repeatedly contends that but for the machine learning algorithm a user/programmer would manually perform the operations. See Office Action, pp. 4-5, 8, and 13. However, this logic would label any computer automation of a manual task as abstract, which is not the law. For instance, the Federal Circuit in McRO did not let the fact that humans animate by hand doom the automation claim. Instead, it looked at whether the claim's specific automated process was an improvement. Here, the specific process (with machine learning, grammar, etc.) yields an improved result that a human, working alone, would take much longer to achieve or might not achieve at all, especially naming consistency across thousands of lines of code or verifying on edge cases automatically.”
Examiner respectfully disagrees. The office action addresses the instant claims, for which a human can synthesize/write a program with given sample inputs and sample outputs, i.e., the synthesizing is a mental process. The office action did not conclude that any computer automation of a manual task is abstract. In McRO, the animation was not automation of a manual task; the improved animation process could not be performed by a human. Thus, McRO is not applicable here. In the instant case, using a computer or machine learning algorithm to synthesize a program from given sample inputs and sample outputs is automation of a manual task.
At p16 second paragraph of the Remarks, Applicant argued that “As a practical matter, no human could replicate the claimed embodiments: …. To call it a "mental process" oversimplifies the claim beyond recognition, just as the court warned in McRO and Alice. The Examiner's analogies (i.e., but for the machine learning algorithm, a human could try to do X) ignore that the presence of the machine learning algorithm and computing system is what makes the solution possible at scale or in reasonable time.”
Examiner respectfully disagrees. Even if a human cannot replicate the claimed embodiments, that does not mean the claims do not recite an abstract idea. Enumerating possible programs in a DSL, ensuring that at least one fits the sample input(s) and output(s), and renaming variables can all be performed by a human with the aid of paper and pencil, i.e., they are mental processes; this is not an oversimplification. These processes are not made possible only by using a computing system and/or machine learning algorithm; a human can perform them manually. As previously explained, McRO is not applicable here.
At p16 third paragraph of the Remarks, Applicant argued that “Accordingly, claims 1-19 and 21 cannot possibly recite a judicial exception.”
Examiner respectfully disagrees because, as set forth in the office action and as explained above, the analysis under Step 2A, Prong One identified an abstract idea in the claims. Applicant’s arguments do not provide any evidence that the identified mental processes are not mental processes.
At p16 fourth paragraph of the Remarks, Applicant argued that “…, Applicant submits that claims 1-19 and 21 recite additional elements that integrate the exception into a practical application of that exception. …, the claims do not seek to monopolize that idea but rather recite a specific implementation that integrates any abstract concept into a practical application.”
Examiner respectfully disagrees because, as set forth in the office action above, the analysis under Step 2A, Prong Two determined that the additional elements do not integrate the judicial exception into a practical application. The argument about monopolizing the idea is not relevant because the rejections are not based on monopolization of the abstract idea.
At p16 last to p17 first paragraph of the Remarks, Applicant argued that “… The claims improve the functioning of a computer-based tool (a program synthesizer). Applicant notes that a program synthesizer traditionally produces opaque code, whereas the claimed embodiments are capable of producing human-readable code. This is a technological improvement - the system's output is more in line with what a human programmer would write, which is a concrete benefit in computing (e.g., ease of debugging, integration, and maintenance).”
Examiner respectfully disagrees because human-readable code is not a result of any improvement in technology; a human can manually produce human-readable code. A program synthesizer is software. The instant claims may involve a new piece of software for synthesizing a program, but the new software is not evidence of an improvement to any technology: the computer functions the same as it did before the instant application, and software programming technology is the same as it was before the instant application.
At p17 second paragraph of the Remarks, Applicant argued that “The claimed embodiments are deeply tied to computer technology: it involves training and using a machine learning algorithm to do something that humans cannot readily do - generate context-appropriate code identifiers. This is not a mere use of a computer as a calculator; rather, it is leveraging machine intelligence in a specific way to improve software generation.”
Examiner respectfully disagrees, because, as set forth in the office action above, and as explained above, generating context-appropriate, human-readable code is a mental process, i.e., a human can perform the process with the aid of pen and paper. The computer and the machine learning algorithm are merely used as a tool (not merely as a calculator) to implement the identified abstract idea.
At p17 third paragraph of the Remarks, Applicant argued that “The display and feedback elements of the claimed embodiments are integral, not insignificant. …. The interface steps of the claimed embodiments (e.g., display code and get user's ground truth on an input) likewise tie the process to a real-world use - a practical improvement in user interaction with an automated coding tool.”
Examiner respectfully disagrees, because the display and feedback elements are akin to transmitting and receiving data, which is insignificant extra-solution activity and is also recognized as well-understood, routine, and conventional activity, as analyzed in Step 2B. The user interface is recited as a generic computer software component; it does not improve any technology, does not integrate the judicial exception into a practical application, and does not constitute an inventive concept.
At p17 last paragraph of the Remarks, Applicant argued that “The claims of the present application are narrow and do not preempt the field of programming by example. …”
Examiner would like to point out that preemption or monopolization is not the basis on which the rejections are made; the argument is therefore not relevant to the current office action.
At p18 first paragraph of the Remarks, Applicant argued that “To further illustrate integration into a practical application, Applicant submits that the claims do not merely "calculate something and display it" (which would be mere output). Instead, the display of the code and the reception of user verification are part of the improvement - they tighten the feedback loop. This is a hallmark of a practical application: the abstract idea (automatically writing a program) is harnessed in a particular process that actively involves a user to ensure correctness of the program. This user-in-the-loop design is a tangible implementation, not an abstract result.”
Examiner respectfully disagrees, because whether or not the user-in-the-loop design is an abstract result does not affect the abstract idea, i.e., synthesizing/writing a program, identified under Step 2A, Prong One. As explained previously, the display of the code and the reception of user verification are insignificant extra-solution activity, akin to data gathering and transmitting, and do not improve any technology.
At p18 second paragraph of the Remarks, Applicant argued that “…. In McRO, the Federal Circuit found the rules-based lip-sync animation non-abstract because it "focused on a specific asserted improvement in computer animation" and did not preempt all techniques. Similarly, the claimed embodiments focus on a specific improvement in computer programming technology: making synthesized code intelligible and confirmable. …. Also, the Federal Circuit in McRO cautioned not to oversimplify - here the Examiner oversimplified the claims by ignoring the specific interplay of components (e.g., treating machine learning and user interface as generic). Looking at the claim as a whole, as the Federal Circuit instructs in McRO, reveals a coordinated method that yields a new functional result, not just an abstract concept of "write a better program."”
Examiner respectfully disagrees, because, as explained previously, the McRO case is not applicable here; see, e.g., paragraph 41 above. The instant claims are analyzed as reciting an abstract idea without significantly more and do not improve computer programming technology. Computer programming technology is the same as it was before the instant application. The analysis followed the MPEP guidelines, and the claims were not oversimplified in the analysis.
At p18 last to p19 first paragraph of the Remarks, Applicant cited BASCOM case, and argued that “In our case, the arrangement of DSL-based synthesis with an machine learning renamer and user validation is unconventional (i.e., no prior system we know did all of this).”
Examiner respectfully disagrees, because, first, the BASCOM case is not applicable here, as the instant claims do not recite the features of the BASCOM claims. In the instant case, not all of the claim limitations are conventional, but the claims are nonetheless analyzed as reciting an abstract idea without significantly more.
At p19 first full paragraph of the Remarks, Applicant cited Berkheimer v. HP Inc. case, and argued that “The claimed embodiments are not merely using a computer as a passive tool; rather, the claimed embodiments actively improve the output and process. Under Berkheimer, the examiner cannot simply dismiss these features as routine without evidence; in fact, the evidence (e.g., the specification of the present application) is to the contrary - these features are improvements over the routine practice.”
Examiner respectfully disagrees, because, first, the Berkheimer case is not applicable here, as the instant claims do not recite the features of the Berkheimer claims. Whether or not the claimed embodiments improve the output and process, that does not change the fact that they do not improve any technology. All claim features were considered and analyzed in the office action; none of them improves technology, i.e., the identified abstract idea is not integrated into a practical application.
At p19 second full paragraph of the Remarks, Applicant argued that “In sum, the Examiner's Step 2A analysis should be revisited. The claims, when properly viewed, are directed to improving a computer-tool (a program synthesizer) to produce better outputs, and they recite a series of concrete steps to achieve that. This is not an attempt to claim a disembodied idea or fundamental economic practice; rather, it is software improving software development, firmly within the realm of eligible subject matter as seen in cases like McRO and Enfish.”
Examiner respectfully disagrees, because, as explained above, whether or not the claims improve a computer tool (a program synthesizer) to produce better outputs, they do not improve any technology: a program synthesizer is software, and software technology and software development are the same as they were before the instant application. The argument about fundamental economic practice is not relevant to the current office action.
At p19 third full paragraph of the Remarks, Applicant argued that “For at least these reasons, Applicant submits that claims 1-19 and 21 are not "directed to” a judicial exception.”
Examiner respectfully disagrees, because, as set forth in the office action, and as explained above, Applicant’s arguments are not persuasive: the additional elements do not integrate the identified judicial exception into a practical application, and the claims remain directed to the identified judicial exception.
At p20 first full paragraph of the Remarks, Applicant argued that “Applicant respectfully disagrees. The specific combination of features in the claims of the present application is not well-understood or routine. In fact, the combination represents a creative integration of techniques from different domains (e.g., formal grammars, machine learning, and user interface) to solve a long-standing problem in a new way.”
Examiner respectfully disagrees, because, as set forth in the office action, and as explained above, not all of the claim features are well-understood or routine, but none amounts to significantly more.
At p20 second full paragraph of the Remarks, Applicant argued that “First, one strong indication of an inventive concept is that the claim elements were not conventional in the field at the time. The Office Action itself did not cite any prior art teaching or suggesting the specific features of the claims. The one reference cited as "pertinent" (i.e., Kurabayashi) takes a different approach - natural language based code retrieval), confirming that the approach recited by the claimed embodiments - using a DSL with semantic variable renaming and user confirmation - was not the standard practice.”
Examiner respectfully disagrees, because, as explained previously, prior art bears on novelty and obviousness; it is not a factor in determining abstract idea rejections under 35 U.S.C. § 101. Not all of the claim features are conventional, but none of the features amounts to significantly more.
At p20 third full paragraph of the Remarks, Applicant argued that “None of the known programming by example systems combine grammar-guided synthesis with machine learning-based variable naming and active user verification. This synergy is novel. The Examiner writes that "[t]he analysis did not identify the argued about limitations as well-understood, routine, conventional activity." See Office Action, p. 14. Indeed, the Examiner provided no evidence on this point, essentially skipping the Berkheimer factual analysis. Given that no prior art of record shows these elements, it would be incorrect to call them routine.”
Examiner respectfully disagrees, because, as explained, novelty and obviousness are issues to be considered for rejections under 35 U.S.C. §§ 102 and 103, not § 101. Not all claim features are routine, but none of them amounts to significantly more.
At p20 last to p21 first paragraph of the Remarks, Applicant argued that “…. Here, the examiner gave none. To the contrary, the specification of the present application can be used as evidence that these features were not conventional because the specification contrasts the features with "conventional techniques." For example, the specification notes that prior methods did not produce human-readable code or allow user feedback, whereas the claimed embodiments do - clearly an unconventional advancement.”
Examiner respectfully disagrees, because, as set forth in the office action, displaying a user interface element and receiving data are analyzed as insignificant extra-solution activities, such as data gathering and transmitting, which are recognized as well-understood, routine, and conventional activity; see MPEP 2106.05(d). The office action provided evidence for the identified conventional activities, i.e., MPEP 2106.05(d), which identifies well-understood, routine, and conventional activities. As explained above, not all claim features are conventional, but none of them amounts to significantly more.
At p21 second paragraph of the Remarks, Applicant argued about claims 2, 10, and 18 that “… Introducing a guarded grammar to bias the search toward idiomatic patterns is an innovation that improves efficiency and code quality. Introducing the guarded grammar is not a generic computer step; rather, it is a tailored algorithmic feature. Applicant submits that it is not a staple feature of conventional programming by example systems to have a weighted grammar. The approach recited in the claims is more sophisticated than prior heuristic programming.”
Examiner respectfully disagrees, because, as set forth in the office action, a guarded grammar merely indicates a field of use or technological environment in which to apply a judicial exception; it does not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application. Whether a guarded grammar is conventional or sophisticated, it does not amount to significantly more.
At p21 third paragraph of the Remarks, Applicant argued about claims 3, 11, and 19 that “… Using a pre-trained language model to name variables in code was unprecedented. That alone is a breakthrough application of machine learning - even the Examiner recognized this by focusing on that step. Prior art did not have GPT-like models in the loop. This machine learning integration is far from routine; it leverages cutting-edge Al (OpenAI Codex did not exist as a product until 2021, for context). The claimed embodiments harness a machine learning algorithm trained on source code to solve what was previously a thorny problem in program synthesis (lack of semantic naming). This is exactly the kind of "non-conventional arrangement" of components (programming by example and machine learning) that BASCOM says can confer eligibility.”
Examiner respectfully disagrees, because, as set forth in the office action, and as explained above, the machine learning merely uses a computer and software as a tool to implement the identified abstract idea; whether it is routine or not, it does not integrate the judicial exception into a practical application and does not amount to significantly more. The argument about GPT-like models in the loop is not relevant to the current office action, as GPT-like models in the loop are not recited in the claims.
At p21 last to p22 first paragraph of the Remarks, Applicant argued about claims 8 and 16 that “The idea of identifying a "significant input" with highest uncertainty and asking the user for the correct output is a clever, non-obvious twist. Conventional programming by example techniques might have the user check all outputs or none; here, a critical test case is mathematically selected. This is reminiscent of active learning in Al - applying it in a programming context is innovative. It certainly wasn't routine to only involve the user in this targeted way. It yields a big efficiency gain: the user need not review every possible input, just an input carefully chosen by the system. Such integration of user feedback into the synthesis loop was not standard practice. It is an inventive integration of human insight with machine generation, improving confidence in the result. No prior art references teach or suggest this specific "prompt user on uncertain input" step.”
Examiner respectfully disagrees, because, as set forth in the office action, the identifying process is a mental process, i.e., an abstract idea. Even if it is not routine and brings benefits such as efficiency, it remains an abstract idea, not an inventive concept.
At p22 second paragraph of the Remarks, Applicant argued that “Importantly, these aspects of the claimed embodiments reinforce each other in a non- trivial way. The machine learning model ensures readability; the grammar ensures correctness and conciseness; and the user check ensures validity. The ordered combination of the elements is unconventional – …. It is not a generic "apply Al" situation; rather, it is a constrained application of Al within a larger algorithm. The Federal Circuit in Amdocs (Israel) Ltd. v. Openet Telecom, Inc., 841 F.3d 1288 (Fed. Cir. 2016) found an inventive concept in a distributed architecture that combined components in a particular way. Similarly, the architecture in the claimed embodiments (synthesizer, re-namer, and verifier) is arranged in a novel manner to achieve a new result.”
Examiner respectfully disagrees, because, even if the ordered combination of the elements is unconventional or novel, no element amounts to significantly more. The machine learning model is recited so generically (no details whatsoever are provided other than that it is a “machine learning model”) that it represents no more than mere instructions to apply the judicial exception on a computer and thus is not an inventive concept. The Amdocs case is not applicable, as the instant claims do not recite the features of that case.
At p22 third paragraph of the Remarks, Applicant argued that “Given all this, an objective observer would agree that the claims recite a technical solution that was not conventional. No evidence has been presented to the contrary, and it appears that none exists in the prior art, as the pending claims remain unchallenged on prior art grounds. Under Berkheimer, when the specification and lack of prior art indicate a feature is not well-understood, the Examiner cannot just label it conventional without evidence. Here, the specification indicates novelty (e.g., it specifically states that conventional programming by example techniques lack these features). Accordingly, the Examiner's burden is not met. This strongly supports the presence of an inventive concept.”
Examiner respectfully disagrees, because, as set forth in the office action, the office action identified displaying a user interface element and receiving data as insignificant extra-solution activities, such as data gathering and transmitting, which are recognized as well-understood, routine, and conventional activity; see MPEP 2106.05(d). The office action provided evidence for the identified conventional activities, i.e., MPEP 2106.05(d), which identifies well-understood, routine, and conventional activities. The claims as a whole are analyzed as reciting an abstract idea without significantly more. That some claim features are unconventional, or novel, does not necessarily mean they constitute an inventive concept. In fact, as set forth in the office action, no additional element integrates the judicial exception into a practical application, and no additional element amounts to significantly more than the judicial exception itself.
At p22 fourth paragraph of the Remarks, Applicant argued that “Let's assu