Prosecution Insights
Last updated: April 19, 2026
Application No. 18/386,160

COMPETITIONS AND PERSONALIZED SOFTWARE CHALLENGES UTILIZING LARGE LANGUAGE MODELS

Non-Final Office Action: §101, §103, §112
Filed: Nov 01, 2023
Examiner: SOLTANZADEH, AMIR
Art Unit: 2191
Tech Center: 2100 — Computer Architecture & Software
Assignee: Renormalize Inc.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 6m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 81%, above average (340 granted / 421 resolved; +25.8% vs TC avg)
Interview Lift: +16.9% for resolved cases with an interview (strong)
Avg Prosecution: 2y 6m (35 currently pending)
Total Applications: 456 (across all art units)

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 60.4% (+20.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)
Tech Center averages are estimates; figures based on career data from 421 resolved cases.

Office Action

Rejections under §101, §103, and §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination. Claims 21-40 have been canceled.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claim 1, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitations "(c) generating a software development challenge based on one or more characteristics of the first user and the second user; (d) generating one or more software development challenge requirements based on the one or more characteristics of the first and the second user; (e) comparing a first listing of code generated by the first user with a second listing of code generated by the second user; (f) determining if the first listing of code generated by the first user complies with the challenge requirements; (g) determining if the second listing of code generated by the second user complies with the software development challenge requirements; and (h) determining a winner of the competition based on the comparing of (e), the determining of (f), and the determining of (g)," as drafted, recite the abstract idea of mental processes.
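For orientation only, the quoted steps (c) through (h) can be rendered as a minimal sketch. Every name, heuristic, and tiebreaker below is hypothetical and derived from the claim language alone, not from the application's actual implementation; the LLM calls the claim recites are stubbed with deterministic placeholders.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    skill_level: str  # one "characteristic" in the claim's sense

def generate_challenge(u1: User, u2: User):
    # (c)/(d): per the claim, performed at least in part by an LLM;
    # stubbed here with a placeholder keyed to the users' characteristics.
    level = "advanced" if "advanced" in (u1.skill_level, u2.skill_level) else "beginner"
    challenge = f"Implement a {level}-tier sorting routine"
    requirements = [f"difficulty: {level}", "code listing must be non-empty"]
    return challenge, requirements

def complies(code: str, requirements) -> bool:
    # (f)/(g): compliance check, stubbed as a trivial non-empty test.
    return bool(code.strip())

def determine_winner(u1: User, code1: str, u2: User, code2: str, requirements):
    # (e)-(h): compare the two listings and pick a winner; the tiebreaker
    # (shorter compliant listing wins) is an arbitrary stand-in.
    ok1, ok2 = complies(code1, requirements), complies(code2, requirements)
    if ok1 != ok2:
        return u1.name if ok1 else u2.name
    if not ok1:
        return None  # neither listing complies
    return u1.name if len(code1) <= len(code2) else u2.name
```

A sketch like this also makes the examiner's mental-process framing concrete: with the LLM stubs removed, each step is a judgment a person could make by inspecting the two listings.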
These limitations encompass a human mind carrying out these functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Thus, these limitations fall within the "Mental Processes" grouping of abstract ideas.

This judicial exception is not integrated into a practical application. The claim recites the following additional elements: "wherein the generating of (c) is performed at least in part by a first Large Language Model (LLM);" "wherein the comparing of (e) is performed at least in part by a second Large Language Model (LLM);" and "(a) receiving a create a software development competition request from a first user; (b) receiving an accept software development competition request from a second user." The two LLM elements are merely instructions to implement an abstract idea on a computer, or merely use a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). The receiving elements of (a) and (b) do nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the two LLM elements are generic computer components and instructions used as tools to perform the abstract idea. See MPEP 2106.05(f). As to the receiving elements of (a) and (b), the courts have identified gathering data and displaying the output of an abstract idea as well-understood, routine, conventional activity. See MPEP 2106.05(d). Accordingly, the additional elements recited in the claims cannot provide an inventive concept, and the claims are not patent eligible.

Claim 2 recites the additional elements of "receiving a first competition ante from the first user; and (bi) receiving a second competition ante from the second user, wherein the first competition ante and the second competition ante are both assigned to the winner determined in (h)," which do nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea. See MPEP 2106.05(g). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the courts have identified gathering data and displaying the output of an abstract idea as well-understood, routine, conventional activity. See MPEP 2106.05(d). Accordingly, the additional elements cannot provide an inventive concept, and the claim is not patent eligible.

Claim 3 further defines the "user characteristics" as part of the "generating" function set forth in the claim from which it depends and is thus also considered to recite a mental process that can reasonably be carried out through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper.

Claim 4 recites the additional element "wherein the first Large Language Model (LLM) and the second Large Language Model (LLM) are the same," which is merely an instruction to implement an abstract idea on a computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element is a generic computer component and instruction used as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, it cannot provide an inventive concept, and the claim is not patent eligible.
Claim 5 further defines the "one or more characteristics" as part of the "generating" function set forth in the claim from which it depends and is thus also considered to recite a mental process that can reasonably be carried out through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Claim 6 further defines the "one or more characteristics of the first or second user" in the same way and recites a mental process for the same reasons.

Claim 7 recites the additional elements "wherein steps (a) through (h) are performed by a computing system comprising: one or more processor circuits; and a non-transitory computer readable medium storing a program, the program instructing the one or more processor circuits to perform the steps (a) through (h)," which are merely instructions to implement an abstract idea on a computer, or merely use a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements are generic computer components and instructions used as tools to perform the abstract idea. See MPEP 2106.05(f). Accordingly, they cannot provide an inventive concept, and the claim is not patent eligible.
Claim 8 recites the additional elements "wherein the software development challenge and the one or more software development challenge requirements are communicated to the first user and the second user via a website interface, and wherein the method further comprises proving real-time feedback to the first user or second user during the software development challenge," which are merely instructions to implement an abstract idea on a computer, or merely use a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements are generic computer components and instructions used as tools to perform the abstract idea. See MPEP 2106.05(f). Accordingly, they cannot provide an inventive concept, and the claim is not patent eligible.

Claim 9 recites the additional elements "wherein the results of (e) through (h) are displayed to the first user and the second user via a website interface" and "wherein the results include a pass or fail indicator, an analysis of issues, a comparison of the degree to which each user met the software development challenge requirements, an example of improvements, and a comparison between the first user and second user."
The additional element "wherein the results of (e) through (h) are displayed to the first user and the second user via a website interface" is merely an instruction to implement an abstract idea on a computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). The additional element "wherein the results include a pass or fail indicator, an analysis of issues, a comparison of the degree to which each user met the software development challenge requirements, an example of improvements, and a comparison between the first user and second user" does nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the website-display element consists of generic computer components and instructions used as tools to perform the abstract idea. See MPEP 2106.05(f).
As to the results-content element, the courts have identified gathering data and displaying the output of an abstract idea as well-understood, routine, conventional activity. See MPEP 2106.05(d). Accordingly, the additional elements cannot provide an inventive concept, and the claim is not patent eligible.

Claim 10 recites the additional element "wherein the determining of (f), (g) and (h) are performed by the first LLM, the second LLM, or a third LLM," which is merely an instruction to implement an abstract idea on a computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element is a generic computer component and instruction used as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, it cannot provide an inventive concept, and the claim is not patent eligible.

Claim 11, as drafted, recites a process that, under its broadest reasonable interpretation, covers steps that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components.
That is, the limitations "(a) generating a software development challenge based at least in part on a software ticket name, a software ticket description, or a user characteristic; (b) generating one or more software development challenge requirements based on the one or more characteristics of the user;" "(e) determining when the software development challenge is completed by the user; (f) determining if a listing of code generated by the user complies with the software development challenge requirements; and (g) assigning an award to the user for completing the software development challenge and complying with the software development challenge requirements," as drafted, recite the abstract idea of mental processes. These limitations encompass a human mind carrying out these functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Thus, these limitations fall within the "Mental Processes" grouping of abstract ideas.

This judicial exception is not integrated into a practical application. The claim recites the following additional elements: "wherein the generating of (a) is performed at least in part by a first Large Language Model (LLM);" "wherein the generating of the one or more software development challenge requirements is performed at least in part by a second Large Language Model (LLM);" and "(c) communicating a description of the software development challenge to the user; (d) communicating a description of the one or more software development challenge requirements to the user."
The two LLM elements are merely instructions to implement an abstract idea on a computer, or merely use a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). The communicating elements of (c) and (d) do nothing more than add insignificant extra-solution activity to the judicial exception, such as data gathering and outputting the results of the abstract idea. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the two LLM elements are generic computer components and instructions used as tools to perform the abstract idea. See MPEP 2106.05(f).
As to the communicating elements of (c) and (d), the courts have identified gathering data and displaying the output of an abstract idea as well-understood, routine, conventional activity. See MPEP 2106.05(d). Accordingly, the additional elements cannot provide an inventive concept, and the claim is not patent eligible.

Claim 12 further defines the "user characteristics" as part of the "generating" function set forth in the claim from which it depends and is thus also considered to recite a mental process that can reasonably be carried out through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper.

Claim 13 recites the additional element "wherein the first Large Language Model (LLM) and the second Large Language Model (LLM) are the same," which is merely an instruction to implement an abstract idea on a computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element is a generic computer component and instruction used as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, it cannot provide an inventive concept, and the claim is not patent eligible.
Claim 14 further defines the "award" as part of the "assigning" function set forth in the claim from which it depends and is thus also considered to recite a mental process that can reasonably be carried out through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. Claims 15 and 16 further define the "one or more characteristics of the user" as part of the "generating" function set forth in the claims from which they depend and are thus also considered to recite mental processes for the same reasons.

Claim 17 recites the additional elements "wherein steps (a) through (g) are performed by a system comprising: one or more processor circuits; and a non-transitory computer readable medium storing a program, the program instructing the one or more processor circuits to perform the steps (a) through (g)," which are merely instructions to implement an abstract idea on a computer, or merely use a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the additional elements are generic computer components and instructions used as tools to perform the abstract idea. See MPEP 2106.05(f). Accordingly, they cannot provide an inventive concept, and the claim is not patent eligible.

Claim 18 further defines the "software development challenge" as part of the "generating" function set forth in the claim from which it depends and is thus also considered to recite a mental process that can reasonably be carried out through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper.

Claim 19 recites the additional element "wherein the results of (f) and (g) are displayed to the user via a website interface," which is merely an instruction to implement an abstract idea on a computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element is a generic computer component and instruction used as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, it cannot provide an inventive concept, and the claim is not patent eligible.
Claim 20 recites the additional element "wherein the determining of (e) and (f) are performed by the first LLM, the second LLM, or a third LLM," which is merely an instruction to implement an abstract idea on a computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element is a generic computer component and instruction used as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, it cannot provide an inventive concept, and the claim is not patent eligible.

Claim Objections

Claim 8 is objected to because of the following informality: "proving real-time feedback" should read "providing real-time feedback." Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3 and 11-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 3 recites "the first user characteristic and the second user characteristic." There is no prior introduction of "a first user characteristic" or "a second user characteristic." Claim 1(c) introduces "one or more characteristics of the first user and the second user," which is plural/collective, not specifically a "first user characteristic" as a distinct term. This limitation lacks antecedent basis because "the first user characteristic" implies a specific singular characteristic tied to the first user, but no such term was introduced.

Claim 11 recites "the one or more characteristics of the user" in (b). Claim 11(a) introduces "a user characteristic" (singular, as an alternative in "or a user characteristic"). Referring to "the one or more characteristics" assumes plurality; if only the singular alternative was selected, the phrase lacks basis. This limitation lacks antecedent basis because "one or more" expands beyond the potentially singular "a user characteristic" introduced. Claim 15 recites "the one or more characteristics of the user" and, similar to Claim 11(b), takes its antecedent from Claim 11(a)'s singular "a user characteristic" while assuming possible plurality. Claim 16, similar to Claim 15, has the same issue with "the one or more characteristics of the user."

Claim 18 recites "a comparison of the degree to which each user met the software development challenge requirements." Claim 11 is directed to a single "the user," not multiple users, so "each user" lacks antecedent basis because no plural users are introduced. The phrase implies multiple participants, but the claim chain refers to a single-user context, rendering the claim indefinite.
Dependent Claims 12-14, 17, and 19-20 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to cure the deficiencies of the claims from which they depend.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4 and 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Lydon (US 8,475,251 B2) in view of Chen (US 2024/0020096 A1), further in view of Pezaris (US 10,782,937 B2).

Regarding Claim 1, Lydon teaches a method comprising: (a) receiving a create a software development competition request from a first user (Col. 11, lines 1-17: "the interaction of contestants with the server 104 (via the client 108) can be summarized generally by four steps. The contestant optionally registers with the server 104 (STEP 404). Preferably, the contestant provides information about herself in the registration process"; Col.
2, lines 30-39: "coding competitions are provided 'on-line,' in the sense that they are provided using computers that communicate over a network. Coding competition participants, also referred to as contestants, use a client computer that communicates with a competition server").

Examiner Comments: This teaches receiving a request from a first user to initiate a software development competition, as Lydon's system processes user registrations to set up coding events, directly mapping to the limitation because it enables the creation of competitive coding sessions upon user input.

(b) receiving an accept software development competition request from a second user (Col. 2, lines 40-50: "as part of a coding competition, a server computer provides to one or more contestants a computer coding problem (e.g. a test question) that requires each contestant to develop computer code").

Examiner Comments: This teaches receiving an acceptance from a second user to join the competition, as Lydon's platform supports multiple participants registering for the same event, providing a direct mapping for pairwise competitions.

(c) generating a software development challenge based on one or more characteristics of the first user and the second user (Col. 3, lines 51-67: "a contestant is assigned to a division, based on the contestant's rating. For example, the contestant may be assigned to one of a first division for contestants who have previously attained a first rating or greater, and a second division for contestants who have a rating substantially equal to zero or a rating below a predetermined division rating.
There may be a third, fourth, and divisions, etc., depending on the number of contestants and the spread in skill level"); Examiner Comments: This teaches generating challenges tailored to user characteristics like ratings or skill levels, as Lydon's system assigns problems of varying difficulty based on contestant profiles, directly corresponding to the limitation for personalized challenge creation. (d) generating one or more software development challenge requirements based on the one or more characteristics of the first and the second user (Col. 13, lines 44-59: "The server 104 transmits several coding problems to the client 108, where each coding problem has a different level of difficulty with respect to other coding problems. If a contestant submits a correct solution to a coding problem having a higher level of difficulty, the more difficult choice can result in an awarding of more points relative to a correct solution to a coding problem having a lower level of difficulty" Col 18, lines 33-44, “the server 104 allocates two divisions, a first division for contestants who have previously attained a rating greater than a predetermined division rating and a second division for contestants who have either not attained a rating (i.e., who have never competed before or have no data associated with a competition) or have a rating below the predetermined division rating. In one embodiment, the server 104 provides coding problems having a lower degree of difficulty to the second division relative to the problems provided to the first division”); Examiner Comments: This teaches creating requirements such as time limits and correctness criteria adjusted by user divisions, as Lydon's problems include constraints scaled to skill levels, mapping to the limitation because it customizes evaluation standards based on participant characteristics. (f) determining if the first listing of code generated by the first user complies with the challenge requirements (Col. 
16, lines 15-32: " the contestant receives this number of points (e.g. 52% of the total available) for the computer code if the computer code executes correctly with all test data, and the contestant receives no points if the code does not execute correctly with all test data."); Examiner Comments: This teaches compliance determination via automated testing, as Lydon's system verifies if code meets problem specifications, providing a direct mapping for evaluating the first user's submission. (g) determining if the second listing of code generated by the second user complies with the software development challenge requirements (Col. 16, lines 15-32: " the contestant receives this number of points (e.g. 52% of the total available) for the computer code if the computer code executes correctly with all test data, and the contestant receives no points if the code does not execute correctly with all test data."); Examiner Comments: This symmetrically teaches compliance checking for the second user, as Lydon's testing applies equally to all submissions, mapping to the limitation for multi-user evaluation. (h) determining a winner of the competition based on the comparing of (e), the determining of (f), and the determining of (g) (Claim 8, selecting a winner based on the response of the respective compiled code to test data) Examiner Comments: This teaches winner determination from code comparisons and compliance scores, as Lydon's ranking uses test results and performance, directly corresponding to the limitation for competition resolution. Lydon did not specifically teach wherein the generating of (c) is performed at least in part by a first Large Language Model (LLM); wherein the comparing of (e) is performed at least in part by a second Large Language Model (LLM); and (e) comparing a first listing of code generated by the first user with a second listing of code generated by the second user.
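For illustration only, the all-or-nothing test-data scoring that the rejection maps to steps (f)-(h) — full points if the code executes correctly with all test data, no points otherwise, with the winner taken from the resulting scores — can be sketched as follows. The function names, test cases, and point value are hypothetical and do not appear in Lydon:

```python
def run_tests(solution, test_data):
    """True only if the submission produces the expected output for
    every test case (the all-or-nothing criterion Lydon describes)."""
    return all(solution(inp) == expected for inp, expected in test_data)

def score(solution, test_data, available_points):
    # e.g. a fixed share of the total available points if all tests
    # pass, and zero otherwise
    return available_points if run_tests(solution, test_data) else 0

def pick_winner(user_a, user_b, sol_a, sol_b, test_data, points):
    scores = {user_a: score(sol_a, test_data, points),
              user_b: score(sol_b, test_data, points)}
    # The higher-scoring compliant submission wins; None on a tie.
    if scores[user_a] == scores[user_b]:
        return None
    return max(scores, key=scores.get)

tests = [(2, 4), (3, 9)]  # hypothetical input -> expected square
winner = pick_winner("A", "B", lambda x: x * x, lambda x: x + x,
                     tests, points=52)
# "A" squares correctly on all test data; "B" does not, so "A" wins
```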
However, Chen (US 2024/0020096 A1) teaches wherein the generating of (c) is performed at least in part by a first Large Language Model (LLM) (Claim 17, " the trained machine learning model generates training data based on the result of the executing, wherein the trained machine learning model is further trained using the generated training data"); Examiner Comments: This teaches using an LLM to generate code or descriptions from inputs, as Chen's model creates programming tasks from natural language, directly mapping to challenge generation via LLM for personalized software problems. wherein the comparing of (e) is performed at least in part by a second Large Language Model (LLM) (Para. [0139]: "parameter selector engine 834 (e.g., configured to add, remove, and/or change one or more parameters of a model), and/or model generation engine 836 (e.g., configured to generate one or more machine learning models, such as according to model input data, model output data, comparison data, and/or validation data)"); Examiner Comments: This teaches LLM-based comparison and evaluation of code, as Chen's system assesses code against requirements, providing a direct mapping for automated code comparison in competitions. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lydon teaching into Chen's in order to automate challenge creation and code evaluation using LLMs, improving efficiency and scalability in generating and assessing personalized software tasks as taught by Chen (abstract: "The present disclosure relates to systems and methods for generating computer code based on natural language input... trained on large datasets of code and natural language... to automate code generation and validation"). Lydon and Chen did not specifically teach (e) comparing a first listing of code generated by the first user with a second listing of code generated by the second user. 
However, Pezaris (US 10,782,937 B2) teaches comparing a first listing of code generated by the first user with a second listing of code generated by the second user in a Git repository (Col. 21, lines 1-7: " determining one or more differences. e.g., across live copies of the codeblock for users A and B, requires analyzing and comparing the live copy, the saved copy and/or the committed copy of the codeblock for user A and the live copy, the saved copy and/or the committed copy of the codeblock for user B”) Examiner Comments: This teaches directly comparing code listings (live, saved, committed copies) from two users in a code repository (e.g., Git), as Pezaris's difference engine performs line-by-line comparisons between versions generated by different developers, providing a one-to-one mapping because it handles code differences in collaborative environments like Git repositories. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lydon and Chen's teaching into Pezaris's in order to enable detailed, line-level comparison of code submissions from multiple users in a repository context, facilitating collaborative review and conflict resolution in software development competitions as taught by Pezaris providing an instant communication channel within integrated development environments to allow developers to communicate about code changes in real-time without leaving the IDE (abstract, Summary). Regarding Claim 2, Lydon, Chen and Pezaris teach The method of Claim 1. Lydon further teaches (a1) receiving a first competition ante from the first user; and (b1) receiving a second competition ante from the second user, wherein the first competition ante and the second competition ante are both assigned to the winner determined in (h) (Col. 
26, lines 27-34: " The server 104 may also provide “pick-up” competitions, where contestants communicate with the server 104 at any time and each contestant pays a competition fee to compete. The competition administrator can then distribute a portion of the total amount collected (e.g., total amount collected less a percentage as administration fee) to the winner of the pick-up competition."); Examiner Comments: This teaches collecting entry fees or stakes from users and awarding them to the winner, as Lydon's system includes prize pools from participants, mapping to the limitation for incentivized competitions. Regarding Claim 3, Lydon, Chen and Pezaris teach The method of Claim 1. Lydon further teaches wherein the first user characteristic and the second user characteristic includes a user experience level, a user knowledge of programming languages, a user Key Performance Indicators (KPIs), or a user performance metrics (Col. 3, lines 53-67: " A rating may also be assigned to the contestant based on the points awarded, and also based on prior competition performance"); Examiner Comments: This teaches user characteristics such as experience levels and performance ratings, as Lydon's divisions use KPIs like past scores, directly mapping to the limitation for profile-based personalization. Regarding Claim 4, Lydon, Chen and Pezaris teach The method of Claim 1. Chen further teaches wherein the first Large Language Model (LLM) and the second Large Language Model (LLM) are the same (Claim 17, " wherein the trained machine learning model generates training data based on the result of the executing, wherein the trained machine learning model is further trained using the generated training data"); Examiner Comments: This teaches using the same LLM for both generation and evaluation tasks, as Chen's single model performs multiple functions, mapping to the limitation for unified LLM usage. 
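The "pick-up" competition fee handling quoted above for Claim 2 — each contestant pays a competition fee, and the winner receives the total collected less a percentage retained as an administration fee — reduces to simple arithmetic. A minimal sketch, with hypothetical amounts:

```python
def distribute_pot(entry_fees, admin_fee_pct):
    """Pool the contestants' antes and pay the winner the total
    collected less the administrator's percentage."""
    total = sum(entry_fees)
    payout = total * (1 - admin_fee_pct / 100)
    return round(payout, 2)

# Two contestants each ante 10.00; a 10% administration fee is withheld,
# so the winner receives 18.00.
distribute_pot([10.00, 10.00], admin_fee_pct=10)
```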
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lydon teaching into Chen's in order to automate challenge creation and code evaluation using LLMs, improving efficiency and scalability in generating and assessing personalized software tasks as taught by Chen (abstract: "The present disclosure relates to systems and methods for generating computer code based on natural language input... trained on large datasets of code and natural language... to automate code generation and validation"). Regarding Claim 7, Lydon, Chen and Pezaris teach The method of Claim 1. Lydon further teaches wherein steps (a) through (h) are performed by a computing system comprising: one or more processor circuits; and a non-transitory computer readable medium storing a program, the program instructing the one or more processor circuits to perform the steps (a) through (h) (Col. 10, lines 53-67: " The contest server 216 from the operation of the other servers allows for scalability of the competition system. Any combination of the servers, processors, or engines (e.g., web server 208, application server 212, message queue processor 324, rating engine 348) can be implemented on the same or independent processors or computers relative to the other servers, processors, or engines. Several instances of the contest server 216 can execute in parallel (on the same or different computers) to respond to requests from the message queue processor 324"); Examiner Comments: This teaches a computing system with processors and memory for executing the method, as Lydon's platform is software-based, directly mapping to the limitation for implementation. Regarding Claim 8, Lydon, Chen and Pezaris teach The method of Claim 1. 
Lydon further teaches wherein the software development challenge and the one or more software development challenge requirements are communicated to the first user and the second user via a website interface, and wherein the method further comprises providing real-time feedback to the first user or second user during the software development challenge (Col. 5, lines 14-30: " In one aspect, an apparatus for providing a coding competition includes a web server communicating with a web browser. The web browser is used by a contestant to receive a competition request and enable the contestant to enter the coding competition using client software. The apparatus also includes a client interface server communicating with the client software. The client interface server enables transmission of a coding problem to the client software and reception of computer code in response to the coding problem"); Examiner Comments: This teaches web-based communication and feedback, as Lydon's online platform provides live updates, mapping to the limitation for interactive user experience. Regarding Claim 9, Lydon, Chen and Pezaris teach The method of Claim 1.
Lydon further teaches wherein the results of (e) through (h) are displayed to the first user and the second user via a website interface, and wherein the results include a pass or fail indicator, an analysis of issues, a comparison of the degree to which each user met the software development challenge requirements, an example of improvements, and a comparison between the first user and second user (Claim 12, “a testing system for, after compilation, determining at the server the response of the compiled code to test data, a system-measured time to produce the response to the test data and a score, comparing the response of the compiled code to test data with a response of a reference program to the test data, wherein submissions with a shorter measured time to determine the response of the compiled code to the test data relative to the reference program receive a higher score; and a results communication system for providing to registered contestants who submitted source code the response of their compiled code to test data”; Col 14, lines 55-67, “the server 104 evaluates the response by comparing the response to an acceptable response or a range of acceptable responses”); Examiner Comments: This teaches displaying results with indicators and analyses via web, as Lydon's system shows rankings and feedback, providing a mapping including comparisons and suggestions. Regarding Claim 10, Lydon, Chen and Pezaris teach The method of Claim 1. Chen further teaches wherein the determining of (f), (g) and (h) are performed by the first LLM, the second LLM, or a third LLM (Claim 17, " wherein the trained machine learning model generates training data based on the result of the executing, wherein the trained machine learning model is further trained using the generated training data") Examiner Comments: This teaches using LLMs for determinations, as Chen's model handles evaluation, mapping to the limitation for flexible LLM assignment in assessments. 
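The Claim 9 results display — pass/fail indicators, a comparison between the first and second user, and Lydon's timing-relative scoring against a reference program (shorter measured time scores higher) — might be assembled as below. The scoring formula and field names are hypothetical; only the line-level comparison uses a real API, Python's standard `difflib`:

```python
import difflib

def build_results(code_a, code_b, passed_a, passed_b,
                  time_a, time_b, reference_time, base_points):
    """Illustrative results payload: pass/fail indicators, a
    line-level comparison of the two listings, and a score that
    grows as the measured time drops below the reference time."""
    diff = "\n".join(difflib.unified_diff(
        code_a.splitlines(), code_b.splitlines(),
        fromfile="user1", tofile="user2", lineterm=""))

    def score(passed, measured_time):
        # Hypothetical scaling relative to the reference program.
        return base_points * reference_time / measured_time if passed else 0

    return {"user1": {"pass": passed_a, "score": score(passed_a, time_a)},
            "user2": {"pass": passed_b, "score": score(passed_b, time_b)},
            "comparison": diff}

results = build_results("x = 1\nprint(x)", "x = 2\nprint(x)",
                        True, False, 5.0, 8.0,
                        reference_time=10.0, base_points=100)
# user1 passes at twice the reference speed (score 200.0);
# user2 fails and scores 0
```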
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lydon teaching into Chen's in order to automate challenge creation and code evaluation using LLMs, improving efficiency and scalability in generating and assessing personalized software tasks as taught by Chen (abstract: "The present disclosure relates to systems and methods for generating computer code based on natural language input... trained on large datasets of code and natural language... to automate code generation and validation"). Claim(s) 5-6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lydon (US 8,475,251 B2) in view of Chen (US 2024/0020096 A1) and Pezaris (US 10,782,937 B2) further in view of Amit (US 2022/0122025 A1). Regarding Claim 5, Lydon, Chen and Pezaris teach The method of Claim 1. Lydon, Chen and Pezaris did not specifically teach wherein the one or more characteristics of the first or second user is a meeting break time, a positive impact effective time indicator, a positive impact division indicator, an efficiency indicator, a task reaction time indicator, a pull request reaction time indicator, an involvement indicator, a influence indicator, a linked data, an unlinked data, a feedback score, or an industry insight mark indicator. However, Amit (US 2022/0122025 A1) teaches wherein the one or more characteristics of the first or second user is a meeting break time, a positive impact effective time indicator, a positive impact division indicator, an efficiency indicator, a task reaction time indicator, a pull request reaction time indicator, an involvement indicator, a influence indicator, a linked data, an unlinked data, a feedback score, or an industry insight mark indicator (Para.
[0015]: "computing a given task completion duration for a given task includes identifying a most recent previous software task completed by the given developer, and computing an amount of time between the given task and the identified most recent task"; Para. [0016]: "modeling the retrieved information and the received request so as to compute the time estimate for the new software task includes determining a corrective commit probability (CCP) quality metric for a given developer, and wherein the time estimate is based on the CCP metric"; Para. [0106]: "Each productivity metric 142 can indicate an average duration or a mean duration of tasks performed by the developer"; Para. [0125]: "A percentage metric 196 indicating a percentage or ratio of modifications of the given component that were performed by the given developer"; Para. [0125]: "A context switch metric 198 that can be used to identify any components in the task referenced by the given task record that match components in a given task most recently completed ... by the given developer"; Para. [0209]: "The average durations 124 were 36 minutes for a developer in the top 10%, 88 minutes for the median and 193 minutes for the slowest 10%"; Para. [0212]: "Given an improvement of 10 percentage points in CCP, the probability of improving the same day duration in 10 minutes was 38%, a lift of 6%"); Examiner Comments: This teaches characteristics such as task completion duration (task reaction time indicator, efficiency indicator), corrective commit probability (feedback score, positive impact indicator), average durations (positive impact effective time indicator), percentage metric (involvement indicator, influence indicator), and context switch metric (meeting break time, linked/unlinked data), as Amit 's system uses these developer-specific metrics to estimate task efforts and personalize assignments, directly mapping to the limitation by providing detailed performance indicators for software developers. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lydon, Chen and Pezaris 's teaching into Amit 's in order to incorporate detailed developer performance metrics like task durations and quality indicators into user characteristics for challenge generation, enabling more accurate personalization and effort estimation in software tasks as taught by Amit (Para. [0003]: "Effort estimation is the process used to predict the amount of effort (e.g., developer hours) needed to develop a software application. The predicted amount of effort can then be used as basis for predicting project costs and for determining an optimal allocation of software developer time"). Regarding Claim 6, Lydon, Chen and Pezaris teach The method of Claim 1. Lydon, Chen and Pezaris did not teach wherein the one or more characteristics of the first or second user is a focused time, a poor time indicator, a working days indicator, a hours overtime indicator, a code churn indicator, a coding days indicator, a time usage by app indicator, a commits indicator, a pull requests merged indicator, a pull requests reviewed indicator, a large pull requests indicator, an inactive pull requests indicator, a cycled pull requests indicator, an overcommented pull requests indicator, an average pull request open time indicator, a pull request review time indicator, a pull request merged time indicator, a pull request closed time indicator, a task done indicator, a deployment frequency indicator, a lead time for changes indicator, a mean time to recovery indicator, a change failure rate indicator, a bugs closed indicator, a positive impact indicator, a task ratio indicator, a pull request ratio indicator, a jobs ratio indicator, a velocity indicator, a task late indicator, a task in time indicator, an epic indicator, a lead time indicator, a bugs detected indicator, a bugs resolved indicator, a bug cycle time indicator, a bug detected 
time indicator, a bug fix time indicator, a bug tested time indicator a bug closed time indicator, a pull request commented indicator, a task commented indicator, a time to reply indicator, a time to reply to pull request indicator, an industry insight mark indicator, a tech debt indicator, a following best practices indicator, an average server downtime indicator, an outdates dependencies indicator, an average server load indicator, an average database load indicator, a budget spent indicator, and engineers involved indicator, a profitability indicator, an infrastructure cost indicator, a budget spend on type of work indicator, a total time spent indicator, a task progress indicator, an average velocity indicator, an average sprint length indicator, a successful sprint indicator, a total sprints indicator, an active engineers indicator, or a tasks planned indicator. However, Amit teaches wherein the one or more characteristics of the first or second user is a focused time, a poor time indicator, a working days indicator, a hours overtime indicator, a code churn indicator, a coding days indicator, a time usage by app indicator, a commits indicator, a pull requests merged indicator, a pull requests reviewed indicator, a large pull requests indicator, an inactive pull requests indicator, a cycled pull requests indicator, an overcommented pull requests indicator, an average pull request open time indicator, a pull request review time indicator, a pull request merged time indicator, a pull request closed time indicator, a task done indicator, a deployment frequency indicator, a lead time for changes indicator, a mean time to recovery indicator, a change failure rate indicator, a bugs closed indicator, a positive impact indicator, a task ratio indicator, a pull request ratio indicator, a jobs ratio indicator, a velocity indicator, a task late indicator, a task in time indicator, an epic indicator, a lead time indicator, a bugs detected indicator, a bugs resolved 
indicator, a bug cycle time indicator, a bug detected time indicator, a bug fix time indicator, a bug tested time indicator a bug closed time indicator, a pull request commented indicator, a task commented indicator, a time to reply indicator, a time to reply to pull request indicator, an industry insight mark indicator, a tech debt indicator, a following best practices indicator, an average server downtime indicator, an outdates dependencies indicator, an average server load indicator, an average database load indicator, a budget spent indicator, and engineers involved indicator, a profitability indicator, an infrastructure cost indicator, a budget spend on type of work indicator, a total time spent indicator, a task progress indicator, an average velocity indicator, an average sprint length indicator, a successful sprint indicator, a total sprints indicator, an active engineers indicator, or a tasks planned indicator (Para. [0015]: "computing a given task completion duration for a given task includes identifying a most recent previous software task completed by the given developer, and computing an amount of time between the given task and the identified most recent task"; Para. [0016]: "modeling the retrieved information and the received request so as to compute the time estimate for the new software task includes determining a corrective commit probability (CCP) quality metric for a given developer, and wherein the time estimate is based on the CCP metric"; Para. [0018]: "a given parameter includes an identity of a component to be modified in the new software task, and wherein modeling the retrieved information and the received request so as to compute the time estimate for the new software task includes identifying a time when a most recent task including the given component that was completed by the developer, and wherein the time estimate is based on the identified time"; Para. 
[0020]: "a given parameter includes an estimated task size, and wherein modeling the retrieved information and the received request so as to compute the time estimate for the new software task includes computing respective task completion durations and corresponding task sizes for the completed tasks, and wherein the time estimate is based on the estimated task size, the computed task completion durations and the corresponding task sizes"; Para. [0022]: "the new software task belongs to a project including one or more of the completed tasks, and wherein modeling the retrieved information and the received request so as to compute the time estimate for the new software task includes computing a code reuse metric for the one or more completed tasks, and wherein the time estimate is based on the computed code reuse metric"; Para. [0023]: "the new software task belongs to a project including a subset of the completed tasks, and wherein modeling the retrieved information and the received request so as to compute the time estimate for the new software task includes identifying, a number of the tasks in the subset that include bug fixes, and wherein the time estimate is based on the identified number of the tasks"; Para. [0106]: "Each productivity metric 142 can indicate an average duration or a mean duration of tasks performed by the developer"; Para. [0106]: "A task coupling quality metric 146"; Para. [0117]: "A project CCP quality metric 174 that can indicate a percentage of the task commits in the given project comprise bug fixes"; Para. [0117]: "A project coupling quality metric 176"; Para. [0185]: "The number of commits 34 is correlated with self-rated productivity and team lead perception of productivity"; Para. [0185]: "The personal duration is more stable than the number of commits, making it a more reliable metric"; Para. 
[0188]: "processor 50 can compute a given task duration 124 as the time since the previous commit of the same developer in the same repository"); Examiner Comments: This teaches characteristics such as task completion durations (focused time, poor time indicator, task done indicator, velocity indicator, average velocity indicator), corrective commit probability (commits indicator, change failure rate indicator, bugs resolved indicator, bug fix time indicator), code reuse metric (code churn indicator, tech debt indicator), number of bug fix tasks (bugs closed indicator, bugs detected indicator, bug cycle time indicator), average durations (working days indicator, hours overtime indicator, coding days indicator, time usage indicator), and coupling metrics (pull requests merged indicator, deployment frequency indicator, lead time for changes indicator), as Amit 's system tracks these metrics for developer evaluation and task estimation, directly mapping to the limitation by detailing code and time-based KPIs in software development contexts. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lydon, Chen and Pezaris 's teaching into Amit 's in order to incorporate detailed developer performance metrics like task durations and quality indicators into user characteristics for challenge generation, enabling more accurate personalization and effort estimation in software tasks as taught by Amit (Para. [0003]: "Effort estimation is the process used to predict the amount of effort (e.g., developer hours) needed to develop a software application. The predicted amount of effort can then be used as basis for predicting project costs and for determining an optimal allocation of software developer time"). Claim(s) 11-13, and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lydon (US 8,475,251 B2) in view of Chen (US 2024/0020096 A1). 
Regarding Claim 11, Lydon (US 8,475,251 B2) teaches A method, comprising: (a) generating a software development challenge based at least in part on a software ticket name, a software ticket description, or a user characteristic (Lydon, Col. 2, lines 40-50: “as part of a coding competition, a server computer provides to one or more contestants a computer coding problem (e.g. a test question) that requires each contestant to develop computer code”); Examiner Comments: This teaches generating a challenge based on problem descriptions or user-related inputs, as Lydon's system provides coding problems (analogous to ticket descriptions). (b) generating one or more software development challenge requirements based on the one or more characteristics of the user (Lydon, Col. 13, lines 44-59: "The server 104 transmits several coding problems to the client 108, where each coding problem has a different level of difficulty with respect to other coding problems. If a contestant submits a correct solution to a coding problem having a higher level of difficulty, the more difficult choice can result in an awarding of more points relative to a correct solution to a coding problem having a lower level of difficulty"); Examiner Comments: This teaches generating requirements like difficulty levels adjusted by user factors, as Lydon's problems include scaled constraints. (c) communicating a description of the software development challenge to the user (Lydon, Col. 5, lines 14-30: " In one aspect, an apparatus for providing a coding competition includes a web server communicating with a web browser. The web browser is used by a contestant to receive a competition request and enable the contestant to enter the coding competition using client software. The apparatus also includes a client interface server communicating with the client software.
The client interface server enables transmission of a coding problem to the client software and reception of computer code in response to the coding problem"); Examiner Comments: This teaches communicating challenge descriptions to the user, as Lydon's web-based system transmits problems, directly mapping. (d) communicating a description of the one or more software development challenge requirements to the user (Lydon, Col. 18, lines 33-44: “the server 104 allocates two divisions, a first division for contestants who have previously attained a rating greater than a predetermined division rating and a second division for contestants who have either not attained a rating (i.e., who have never competed before or have no data associated with a competition) or have a rating below the predetermined division rating. In one embodiment, the server 104 provides coding problems having a lower degree of difficulty to the second division relative to the problems provided to the first division”); Examiner Comments: This teaches conveying requirements like difficulty and point values, as Lydon's system communicates scaled problem specs, mapping to the limitation. (e) determining when the software development challenge is completed by the user (Lydon, Col. 15, lines 18-39: " The server 104 then compares the two responses 638, 644 to determine the correctness of the computer code response 644 "); Examiner Comments: This teaches detecting completion via submission, as Lydon's system processes user code responses, mapping to the limitation. (f) determining if a listing of code generated by the user complies with the software development challenge requirements (Lydon, Col. 16, lines 15-32: " the contestant receives this number of points (e.g. 
52% of the total available) for the computer code if the computer code executes correctly with all test data, and the contestant receives no points if the code does not execute correctly with all test data."); Examiner Comments: This teaches compliance checking, as Lydon's testing verifies code against requirements, directly mapping. (g) assigning an award to the user for completing the software development challenge and complying with the software development challenge requirements (Lydon, Claim 8: selecting a winner based on the response of the respective compiled code to test data); Examiner Comments: This teaches assigning awards/points for compliance, as Lydon's system awards based on successful completion, mapping to the limitation. Lydon did not specifically teach wherein the generating of (a) is performed at least in part by a first Large Language Model (LLM); wherein the generating of (b) is performed at least in part by a second Large Language Model (LLM). However, Chen further teaches wherein the generating of (a) is performed at least in part by a first Large Language Model (LLM) (Claim 17: "the trained machine learning model generates training data based on the result of the executing, wherein the trained machine learning model is further trained using the generated training data"); Examiner Comments: This teaches LLM for generating tasks from inputs, as Chen's model creates code/challenges, mapping to the limitation. wherein the generating of (b) is performed at least in part by a second Large Language Model (LLM) (Para. [0139]: "parameter selector engine 834 ... model generation engine 836"); Examiner Comments: This teaches LLM for requirement/parameter generation, mapping to the limitation. 
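The rating-based division mechanism that the rejection maps to requirement generation — problems of higher difficulty for contestants who have attained at least a predetermined division rating, and lower-difficulty problems for unrated or lower-rated contestants — might be sketched as below. The threshold value and pool contents are invented for illustration:

```python
# Hypothetical stand-in for Lydon's "predetermined division rating".
DIVISION_RATING = 1200

def assign_division(rating):
    """Division 1 for contestants at or above the threshold; division 2
    for unrated contestants or those below it (cf. Lydon col. 18)."""
    return 1 if rating is not None and rating >= DIVISION_RATING else 2

def select_problems(rating, easy_pool, hard_pool):
    # Division 2 receives problems of lower difficulty than division 1.
    return hard_pool if assign_division(rating) == 1 else easy_pool

select_problems(1500, ["fizzbuzz"], ["suffix automaton"])
# a rated contestant above the threshold draws from the harder pool
select_problems(None, ["fizzbuzz"], ["suffix automaton"])
# an unrated contestant draws from the easier pool
```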
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen's teaching with Lydon's in order to automate challenge creation and requirements using LLMs, improving efficiency and scalability in generating and assessing personalized software tasks as taught by Chen (abstract: "The present disclosure relates to systems and methods for generating computer code based on natural language input... trained on large datasets of code and natural language... to automate code generation and validation").

Regarding Claim 12, Lydon and Chen teach the method of Claim 11. Lydon further teaches wherein the user characteristic includes a user experience level, a user knowledge of programming languages, a user Key Performance Indicators (KPIs), or a user performance metrics (Col. 3, lines 53-67: "A rating may also be assigned to the contestant based on the points awarded, and also based on prior competition performance"); Examiner Comments: This teaches user characteristics such as experience levels and performance ratings, as Lydon's divisions use KPIs like past scores, directly mapping to the limitation for profile-based personalization.

Regarding Claim 13, Lydon and Chen teach the method of Claim 11. Chen further teaches wherein the first Large Language Model (LLM) and the second Large Language Model (LLM) are the same (Claim 17: "wherein the trained machine learning model generates training data based on the result of the executing, wherein the trained machine learning model is further trained using the generated training data"); Examiner Comments: This teaches using the same LLM for both generation and evaluation tasks, as Chen's single model performs multiple functions, mapping to the limitation for unified LLM usage.
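The rating-based division allocation Lydon describes (Col. 18, quoted above for limitation (d)) amounts to a threshold test on a contestant's rating, with unrated contestants defaulting to the lower-difficulty division. A minimal sketch under that reading; the numeric threshold is invented for illustration:

```python
# Sketch of Lydon's two-division split (Col. 18): contestants above a
# predetermined rating join the first division; unrated or lower-rated
# contestants join the second division, which receives easier problems.

DIVISION_RATING = 1200  # hypothetical "predetermined division rating"

def assign_division(rating):
    """rating may be None for contestants who have never competed."""
    if rating is not None and rating > DIVISION_RATING:
        return "Division I"   # harder problems
    return "Division II"      # lower-difficulty problems

print(assign_division(1500))  # -> Division I
print(assign_division(None))  # -> Division II
```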
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen's teaching with Lydon's in order to automate challenge creation and code evaluation using LLMs, improving efficiency and scalability in generating and assessing personalized software tasks as taught by Chen (abstract: "The present disclosure relates to systems and methods for generating computer code based on natural language input... trained on large datasets of code and natural language... to automate code generation and validation").

Regarding Claim 17, Lydon and Chen teach the method of Claim 11. Lydon further teaches wherein steps (a) through (g) are performed by a system comprising: one or more processor circuits; and a non-transitory computer readable medium storing a program, the program instructing the one or more processor circuits to perform the steps (a) through (g) (Col. 10, lines 53-67: "The contest server 216 from the operation of the other servers allows for scalability of the competition system. Any combination of the servers, processors, or engines (e.g., web server 208, application server 212, message queue processor 324, rating engine 348) can be implemented on the same or independent processors or computers relative to the other servers, processors, or engines. Several instances of the contest server 216 can execute in parallel (on the same or different computers) to respond to requests from the message queue processor 324"); Examiner Comments: This teaches a computing system with processors and memory for executing the method, as Lydon's platform is software-based, directly mapping to the limitation for implementation.

Regarding Claim 18, Lydon and Chen teach the method of Claim 11.
Lydon further teaches wherein the software development challenge and the one or more software development challenge requirements are communicated to the user via a website interface, wherein one of the software development challenge requirements is an amount of time allotted to complete the software development challenge, a user's metric performance improvement related to a metric, and a comparison of the degree to which each user met the software development challenge requirements (Col. 5, lines 14-30: "In one aspect, an apparatus for providing a coding competition includes a web server communicating with a web browser. The web browser is used by a contestant to receive a competition request and enable the contestant to enter the coding competition using client software. The apparatus also includes a client interface server communicating with the client software. The client interface server enables transmission of a coding problem to the client software and reception of computer code in response to the coding problem"); Examiner Comments: This teaches web communication with time limits and improvements, as Lydon's interface includes timed challenges and feedback, mapping to the limitation.

Regarding Claim 19, Lydon and Chen teach the method of Claim 11.
Lydon further teaches wherein the results of (f) and (g) are displayed to the user via a website interface (Claim 12: “a testing system for, after compilation, determining at the server the response of the compiled code to test data, a system-measured time to produce the response to the test data and a score, comparing the response of the compiled code to test data with a response of a reference program to the test data, wherein submissions with a shorter measured time to determine the response of the compiled code to the test data relative to the reference program receive a higher score; and a results communication system for providing to registered contestants who submitted source code the response of their compiled code to test data”; Col. 14, lines 55-67: “the server 104 evaluates the response by comparing the response to an acceptable response or a range of acceptable responses”); Examiner Comments: This teaches displaying results online, as Lydon's web shows evaluations, mapping to the limitation.

Regarding Claim 20, Lydon and Chen teach the method of Claim 11. Chen further teaches wherein the determining of (e) and (f) are performed by the first LLM, the second LLM, or a third LLM (Claim 17: "wherein the trained machine learning model generates training data based on the result of the executing, wherein the trained machine learning model is further trained using the generated training data"); Examiner Comments: This teaches LLM for determinations, as Chen's model verifies completion and compliance, mapping to the limitation.
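The relative-timing rule in Lydon's claim 12 (submissions with a shorter measured time than the reference program receive a higher score) can be sketched as a simple comparison. The base-points-plus-bonus structure below is an assumption for illustration; Lydon does not specify the exact scoring arithmetic in the quoted passage.

```python
# Sketch of Lydon's relative-timing scoring (claim 12): a correct
# submission whose measured time beats the reference program's time
# receives a higher score. The bonus amount is invented.

def timed_score(base_points, measured_time, reference_time, bonus=10):
    """Award a speed bonus when the submission beats the reference."""
    if measured_time < reference_time:
        return base_points + bonus
    return base_points

print(timed_score(50, measured_time=1.2, reference_time=2.0))  # -> 60
print(timed_score(50, measured_time=3.0, reference_time=2.0))  # -> 50
```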
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Chen's teaching with Lydon's in order to automate challenge creation and code evaluation using LLMs, improving efficiency and scalability in generating and assessing personalized software tasks as taught by Chen (abstract: "The present disclosure relates to systems and methods for generating computer code based on natural language input... trained on large datasets of code and natural language... to automate code generation and validation").

Claims 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Lydon (US 8,475,251 B2) in view of Chen (US 2024/0020096 A1) further in view of Amit (US 2022/0122025 A1).

Regarding Claim 14, Lydon and Chen teach the method of Claim 11. Lydon and Chen did not specifically teach wherein the award is based at least in part on the user's Key Performance Indicators (KPIs) or user's performance metrics. However, Amit (US 2022/0122025 A1) teaches wherein the award is based at least in part on the user's Key Performance Indicators (KPIs) or user's performance metrics (Para. [0106]: "Each productivity metric 142 can indicate an average duration or a mean duration of tasks performed by the developer"; Para. [0089]: "Given an improvement of 10 percentage points in CCP, the probability of improving the same day duration in 10 minutes was 38%, a lift of 6%"); Examiner Comments: This teaches awards or progress based on metrics like durations and CCP (quality KPI), as Amit's system uses performance indicators for evaluation, mapping to the limitation.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Amit's teaching with Lydon and Chen's in order to incorporate detailed developer performance metrics like task durations and quality indicators into user characteristics for challenge generation, enabling more accurate personalization and effort estimation in software tasks as taught by Amit (Para. [0003]: "Effort estimation is the process used to predict the amount of effort (e.g., developer hours) needed to develop a software application. The predicted amount of effort can then be used as basis for predicting project costs and for determining an optimal allocation of software developer time").

Regarding Claim 15, Lydon and Chen teach the method of Claim 11. Lydon and Chen did not specifically teach wherein the one or more characteristics of the user is a meeting break time, a positive impact effective time indicator, a positive impact division indicator, an efficiency indicator, a task reaction time indicator, a pull request reaction time indicator, an involvement indicator, a influence indicator, a linked data, an unlinked data, a feedback score, or an industry insight mark indicator. However, Amit (US 2022/0122025 A1) teaches wherein the one or more characteristics of the user is a meeting break time, a positive impact effective time indicator, a positive impact division indicator, an efficiency indicator, a task reaction time indicator, a pull request reaction time indicator, an involvement indicator, a influence indicator, a linked data, an unlinked data, a feedback score, or an industry insight mark indicator (Para. [0015]: "computing a given task completion duration for a given task includes identifying a most recent previous software task completed by the given developer, and computing an amount of time between the given task and the identified most recent task"; Para.
[0016]: "modeling the retrieved information and the received request so as to compute the time estimate for the new software task includes determining a corrective commit probability (CCP) quality metric for a given developer, and wherein the time estimate is based on the CCP metric"; Para. [0106]: "Each productivity metric 142 can indicate an average duration or a mean duration of tasks performed by the developer"; Para. [0125]: "A percentage metric 196 indicating a percentage or ratio of modifications of the given component that were performed by the given developer"; Para. [0125]: "A context switch metric 198 that can be used to identify any components in the task referenced by the given task record that match components in a given task most recently completed ... by the given developer"; Para. [0209]: "The average durations 124 were 36 minutes for a developer in the top 10%, 88 minutes for the median and 193 minutes for the slowest 10%"; Para. [0212]: "Given an improvement of 10 percentage points in CCP, the probability of improving the same day duration in 10 minutes was 38%, a lift of 6%"); Examiner Comments: This teaches characteristics such as task completion duration (task reaction time indicator, efficiency indicator), corrective commit probability (feedback score, positive impact indicator), average durations (positive impact effective time indicator), percentage metric (involvement indicator, influence indicator), and context switch metric (meeting break time, linked/unlinked data), as Amit's system uses these developer-specific metrics to estimate task efforts and personalize assignments, directly mapping to the limitation by providing detailed performance indicators for software developers. 
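Two of the Amit metrics mapped here, the corrective commit probability (CCP, described in Amit as a quality metric tied to the share of commits that are bug fixes) and the mean task duration (Para. [0106]), reduce to simple aggregates. A sketch over invented sample records, not Amit's actual computation:

```python
# Sketch of two developer metrics per Amit: CCP as the fraction of a
# developer's commits flagged as bug fixes, and mean task duration as
# the average of recorded task durations. Sample data is invented.
from statistics import mean

def ccp(commits):
    """Fraction of commits that are bug fixes (corrective commits)."""
    return sum(1 for c in commits if c["is_bug_fix"]) / len(commits)

def mean_task_duration(durations_minutes):
    """Average task duration in minutes (Amit's productivity metric)."""
    return mean(durations_minutes)

commits = [
    {"is_bug_fix": True}, {"is_bug_fix": False},
    {"is_bug_fix": False}, {"is_bug_fix": True},
]
print(ccp(commits))                      # -> 0.5
print(mean_task_duration([36, 88, 193]))
```

The 36/88/193-minute values echo the duration percentiles Amit reports in Para. [0209]; they are used here only as sample input.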
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Amit's teaching with Lydon and Chen's in order to incorporate detailed developer performance metrics like task durations and quality indicators into user characteristics for challenge generation, enabling more accurate personalization and effort estimation in software tasks as taught by Amit (Para. [0003]: "Effort estimation is the process used to predict the amount of effort (e.g., developer hours) needed to develop a software application. The predicted amount of effort can then be used as basis for predicting project costs and for determining an optimal allocation of software developer time").

Regarding Claim 16, Lydon and Chen teach the method of Claim 11. Lydon and Chen did not teach wherein the one or more characteristics of the user is a focused time, a poor time indicator, a working days indicator, a hours overtime indicator, a code churn indicator, a coding days indicator, a time usage by app indicator, a commits indicator, a pull requests merged indicator, a pull requests reviewed indicator, a large pull requests indicator, an inactive pull requests indicator, a cycled pull requests indicator, an overcommented pull requests indicator, an average pull request open time indicator, a pull request review time indicator, a pull request merged time indicator, a pull request closed time indicator, a task done indicator, a deployment frequency indicator, a lead time for changes indicator, a mean time to recovery indicator, a change failure rate indicator, a bugs closed indicator, a positive impact indicator, a task ratio indicator, a pull request ratio indicator, a jobs ratio indicator, a velocity indicator, a task late indicator, a task in time indicator, an epic indicator, a lead time indicator, a bugs detected indicator, a bugs resolved indicator, a bug cycle time indicator, a bug detected time indicator, a bug fix time indicator,
a bug tested time indicator a bug closed time indicator, a pull request commented indicator, a task commented indicator, a time to reply indicator, a time to reply to pull request indicator, an industry insight mark indicator, a tech debt indicator, a following best practices indicator, an average server downtime indicator, an outdates dependencies indicator, an average server load indicator, an average database load indicator, a budget spent indicator, and engineers involved indicator, a profitability indicator, an infrastructure cost indicator, a budget spend on type of work indicator, a total time spent indicator, a task progress indicator, an average velocity indicator, an average sprint length indicator, a successful sprint indicator, a total sprints indicator, an active engineers indicator, or a tasks planned indicator. However, Amit teaches wherein the one or more characteristics of the user is a focused time, a poor time indicator, a working days indicator, a hours overtime indicator, a code churn indicator, a coding days indicator, a time usage by app indicator, a commits indicator, a pull requests merged indicator, a pull requests reviewed indicator, a large pull requests indicator, an inactive pull requests indicator, a cycled pull requests indicator, an overcommented pull requests indicator, an average pull request open time indicator, a pull request review time indicator, a pull request merged time indicator, a pull request closed time indicator, a task done indicator, a deployment frequency indicator, a lead time for changes indicator, a mean time to recovery indicator, a change failure rate indicator, a bugs closed indicator, a positive impact indicator, a task ratio indicator, a pull request ratio indicator, a jobs ratio indicator, a velocity indicator, a task late indicator, a task in time indicator, an epic indicator, a lead time indicator, a bugs detected indicator, a bugs resolved indicator, a bug cycle time indicator, a bug detected time 
indicator, a bug fix time indicator, a bug tested time indicator a bug closed time indicator, a pull request commented indicator, a task commented indicator, a time to reply indicator, a time to reply to pull request indicator, an industry insight mark indicator, a tech debt indicator, a following best practices indicator, an average server downtime indicator, an outdates dependencies indicator, an average server load indicator, an average database load indicator, a budget spent indicator, and engineers involved indicator, a profitability indicator, an infrastructure cost indicator, a budget spend on type of work indicator, a total time spent indicator, a task progress indicator, an average velocity indicator, an average sprint length indicator, a successful sprint indicator, a total sprints indicator, an active engineers indicator, or a tasks planned indicator (Para. [0015]: "computing a given task completion duration for a given task includes identifying a most recent previous software task completed by the given developer, and computing an amount of time between the given task and the identified most recent task"; Para. [0016]: "modeling the retrieved information and the received request so as to compute the time estimate for the new software task includes determining a corrective commit probability (CCP) quality metric for a given developer, and wherein the time estimate is based on the CCP metric"; Para. [0018]: "a given parameter includes an identity of a component to be modified in the new software task, and wherein modeling the retrieved information and the received request so as to compute the time estimate for the new software task includes identifying a time when a most recent task including the given component that was completed by the developer, and wherein the time estimate is based on the identified time"; Para. 
[0020]: "a given parameter includes an estimated task size, and wherein modeling the retrieved information and the received request so as to compute the time estimate for the new software task includes computing respective task completion durations and corresponding task sizes for the completed tasks, and wherein the time estimate is based on the estimated task size, the computed task completion durations and the corresponding task sizes"; Para. [0022]: "the new software task belongs to a project including one or more of the completed tasks, and wherein modeling the retrieved information and the received request so as to compute the time estimate for the new software task includes computing a code reuse metric for the one or more completed tasks, and wherein the time estimate is based on the computed code reuse metric"; Para. [0023]: "the new software task belongs to a project including a subset of the completed tasks, and wherein modeling the retrieved information and the received request so as to compute the time estimate for the new software task includes identifying, a number of the tasks in the subset that include bug fixes, and wherein the time estimate is based on the identified number of the tasks"; Para. [0106]: "Each productivity metric 142 can indicate an average duration or a mean duration of tasks performed by the developer"; Para. [0106]: "A task coupling quality metric 146"; Para. [0117]: "A project CCP quality metric 174 that can indicate a percentage of the task commits in the given project comprise bug fixes"; Para. [0117]: "A project coupling quality metric 176"; Para. [0185]: "The number of commits 34 is correlated with self-rated productivity and team lead perception of productivity"; Para. [0185]: "The personal duration is more stable than the number of commits, making it a more reliable metric"; Para. 
[0188]: "processor 50 can compute a given task duration 124 as the time since the previous commit of the same developer in the same repository"); Examiner Comments: This teaches characteristics such as task completion durations (focused time, poor time indicator, task done indicator, velocity indicator, average velocity indicator), corrective commit probability (commits indicator, change failure rate indicator, bugs resolved indicator, bug fix time indicator), code reuse metric (code churn indicator, tech debt indicator), number of bug fix tasks (bugs closed indicator, bugs detected indicator, bug cycle time indicator), average durations (working days indicator, hours overtime indicator, coding days indicator, time usage indicator), and coupling metrics (pull requests merged indicator, deployment frequency indicator, lead time for changes indicator), as Amit's system tracks these metrics for developer evaluation and task estimation, directly mapping to the limitation by detailing code and time-based KPIs in software development contexts.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Amit's teaching with Lydon and Chen's in order to incorporate detailed developer performance metrics like task durations and quality indicators into user characteristics for challenge generation, enabling more accurate personalization and effort estimation in software tasks as taught by Amit (Para. [0002]: "Effort estimation is the process used to predict the amount of effort (e.g., developer hours) needed to develop a software application. The predicted amount of effort can then be used as basis for predicting project costs and for determining an optimal allocation of software developer time").

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR SOLTANZADEH whose telephone number is (571)272-3451.
The examiner can normally be reached M-F, 9am - 5pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wei Mui, can be reached at (571) 272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMIR SOLTANZADEH/
Examiner, Art Unit 2191

/WEI Y MUI/
Supervisory Patent Examiner, Art Unit 2191

Prosecution Timeline

Nov 01, 2023
Application Filed
Jan 30, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602225
IDENTIFYING THE TRANSLATABILITY OF HARD-CODED STRINGS IN SOURCE CODE VIA POS TAGGING
2y 5m to grant Granted Apr 14, 2026
Patent 12591414
CENTRALIZED INTAKE AND CAPACITY ASSESSMENT PLATFORM FOR PROJECT PROCESSES, SUCH AS WITH PRODUCT DEVELOPMENT IN TELECOMMUNICATIONS
2y 5m to grant Granted Mar 31, 2026
Patent 12561134
Function Code Extraction
2y 5m to grant Granted Feb 24, 2026
Patent 12561136
METHOD, APPARATUS, AND SYSTEM FOR OUTPUTTING SOFTWARE DEVELOPMENT INSIGHT COMPONENTS IN A MULTI-RESOURCE SOFTWARE DEVELOPMENT ENVIRONMENT
2y 5m to grant Granted Feb 24, 2026
Patent 12561118
SYSTEM AND METHOD FOR AUTOMATED TECHNOLOGY MIGRATION
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
98%
With Interview (+16.9%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 421 resolved cases by this examiner. Grant probability derived from career allow rate.
