DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicant
Claims 1, 11 and 18 are presently amended.
Claims 5-6, 8-9 and 15-16 are cancelled.
Claims 1-4, 7, 10-14 and 17-18 are pending.
Response to Amendment
Applicant’s amendments are acknowledged.
Response to Arguments
Applicant's arguments filed 9/18/2025 have been fully considered in light of statutory law, Office policy, precedential case law, and the cited prior art, as necessitated by the amendments to the claims, but they are not persuasive for the reasons set forth below.
35 USC § 101 Rejections
First, Applicant argues that the “Claims Do Not Recite a Judicial Exception…
the amended claims now recite with specificity how the machine learning model functions, basis which assessment is performed by the first pre-trained machine learning model to proceed with the further steps…
Importantly, the neural network passable format allows to train an embedding layer of the first pre-trained neural network using the one hot representations to generate low dimensional embeddings. The one hot representations, thus, represent a low dimensional embedding which on being fed to hidden layers of the first pre-trained machine learning model handles a much smaller size of processed input data as compared to the input data with ordinal values…
Further, the Examiner characterizes the claims as being directed to an abstract idea, specifically identifying the steps in the claims as falling under the judicial exception of "certain methods of organizing human activity," namely employee performance evaluation and categorization, which the Examiner associated with general human resource practices.
However, this characterization significantly overgeneralizes the claimed subject matter and fails to recognize the technical nature of the claimed invention, the advanced computational steps involved, and the specific improvements to the functioning of AI based systems for developer performance evaluation…
the claims are not directed to a judicial exception or an abstract idea in the conventional sense, but to a specific improvement in the functioning of AI models and performance evaluation systems, particularly in high-variability, data-rich development environments.
The claimed invention yields a practical application that meaningfully improves developer evaluation by automating data ingestion, feature extraction, classification, and ranking all using advanced AI techniques. Accordingly, the claims should not be rejected under §101 as being directed to an abstract idea” [Arguments, pages 14-22].
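For illustration only, the one-hot-to-embedding transformation described in the quoted passage can be sketched as follows. The ordinal scale, embedding dimension, and all values here are assumptions for the sketch, not taken from the claims: an embedding layer is a learned weight matrix, and multiplying a one-hot vector by it selects a single row, yielding a dense low-dimensional representation.

```python
import numpy as np

NUM_LEVELS = 5   # assumed ordinal scale, e.g. ratings 1-5
EMBED_DIM = 2    # assumed low-dimensional embedding size

def one_hot(ordinal: int, num_levels: int = NUM_LEVELS) -> np.ndarray:
    """Encode a 1-indexed ordinal value as a one-hot vector."""
    vec = np.zeros(num_levels)
    vec[ordinal - 1] = 1.0
    return vec

# The embedding layer is a learned weight matrix; a one-hot input
# selects one row, i.e. the dense embedding of that ordinal value.
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(NUM_LEVELS, EMBED_DIM))

rating = 4
dense = one_hot(rating) @ embedding_matrix
assert dense.shape == (EMBED_DIM,)                       # 5-dim -> 2-dim
assert np.allclose(dense, embedding_matrix[rating - 1])  # row selection
```

Note that the dimensionality reduction (here, 5 to 2) is what the quoted passage refers to when it states that the hidden layers handle a much smaller input size than the raw ordinal encoding.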
In response, Applicant’s arguments have been considered but are not persuasive. Examiner respectfully maintains that the present claims recite an abstract idea in the grouping of certain methods of organizing human activity. First, with regard to the assertion that the previous office action “overgeneralizes the claimed subject matter and fails to recognize the technical nature of the claimed invention, the advanced computational steps involved, and the specific improvements to the functioning of AI based systems for developer performance evaluation”, Examiner disagrees and observes that the invention, when considered as a whole, is not directed to improved AI modeling, but rather to the management and performance evaluation of software developers. These performance evaluation concepts are not meaningfully different than the following concepts identified by the MPEP:
Concepts relating to certain methods of organizing human activity. The aforementioned limitations describe steps for managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. Specifically, classifying developers into performance categories is considered to describe steps for managing personal behavior as well as interactions between people. The AI elements of the present invention are considered to be techniques for performing the performance evaluations, and are not considered to be the core idea of the invention itself. Thus, claims 1, 11 and 18 are directed to concepts identified as abstract ideas. As such, Examiner remains unpersuaded.
Second, Applicant argues that the “claims are integrated into practical application…
Applicant respectfully traverses the assertion that the claimed invention is directed merely to gathering and analyzing information using conventional techniques, or to no more than generic instructions to apply an exception on a general purpose computer…
the claimed invention presents a technological solution to a specific problem, objectively and intelligently evaluating the performance of software developers using AI driven models that assess structured performance data, rank developers within performance categories, and adapt evaluation criteria in real time based on feedback and evolving project needs.
This approach addresses challenges in development environments, where manual performance assessments are subjective, error-prone, and unable to scale across dynamic, data-intensive software projects…
The claimed method is performed by a specifically configured AI based evaluation system, which includes at least one pre-trained machine learning model, a feature vector generation pipeline, and modules for classification, ranking, re-evaluation, and model transfer…” [Arguments, pages 22-29].
In response, Applicant’s arguments have been considered but are not persuasive. Examiners evaluate integration into a practical application by: (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception(s); and (2) evaluating those additional elements individually and in combination to determine whether they integrate the exception into a practical application, using one or more of the considerations introduced in subsection I supra, and discussed in more detail in MPEP §§ 2106.04(d)(1), 2106.04(d)(2), 2106.05(a) through (c) and 2106.05(e) through (h).
With regard to the assertion that the present invention demonstrates a practical application through the use of “a specifically configured AI based evaluation system, which includes at least one pre-trained machine learning model, a feature vector generation pipeline, and modules for classification, ranking, re-evaluation, and model transfer…”, Examiner respectfully disagrees and first observes that the independent claims only recite the following additional elements –
…using Artificial Intelligence (AI) … by an AI based evaluation system … ; …the AI based evaluation system…; … training a first machine learning model of the AI based evaluation system… the first machine learning model corresponds to a first pre-trained machine learning model…; the first machine learning model…; …the first pre-trained machine learning model…; …by the AI based evaluation system… the first pre-trained machine learning model…; …by the AI based evaluation system… the first pre-trained machine learning model…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system… [Claim 1],
A system for evaluating performance of developers using Artificial Intelligence (AI), the system comprising: a processor; and a memory communicatively coupled to the processor, wherein the memory stores processor executable instructions, which, on execution, causes the processor to: …; … train a first machine learning model of the AI based evaluation system… the first machine learning model corresponds to a first pre-trained machine learning model…; the first machine learning model is trained…; …the first pre-trained machine learning model…; …by the AI based evaluation system… the first pre-trained machine learning model…; …by the AI based evaluation system… the first pre-trained machine learning model…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; the first pre-trained machine learning model; … the first pre-trained machine learning model… [Claim 11],
A non-transitory computer-readable medium storing computer-executable instructions for contextually aligning a title of an article with content within the article, the stored instructions, when executed by a processor, cause the processor to perform operations comprising…; … training a first machine learning model of the AI based evaluation system… the first machine learning model corresponds to a first pre-trained machine learning model; …the AI based evaluation system…; … the first pre-trained machine learning model…; … the first pre-trained machine learning model…; the first machine learning model…; …the first pre-trained machine learning model…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; the first pre-trained machine learning model; … the first pre-trained machine learning model… [Claim 18].
Examiner respectfully maintains that the pre-trained machine learning model, which is amended to include the use of performance parameters in a neural network passable format, as well as the remaining additional elements, including an apparatus and executable instructions, remain recited at a high level of generality (see MPEP § 2106.05(a)), like the following MPEP example:
iii. Gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48;
Particularly, Examiner observes that specifying the format of the input parameters is not sufficient to demonstrate a practical application of the machine learning elements in the claimed developer evaluation system. Furthermore, the AI, machine learning and computer-implemented elements are considered to amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP § 2106.05(f)), similar to Claim 2 of Example 47 of the July 2024 Subject Matter Eligibility Examples and to the following MPEP examples:
i. A commonplace business method or mathematical algorithm being applied on a general purpose computer, Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014); Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015);
In particular, Examiner observes that merely claiming the use of an unspecified machine learning model which is pre-trained outside of the scope of the claimed invention fails to demonstrate significantly more than the judicial exception. Further still, Examiner observes that the present invention claims no actions other than the analysis of data and presentation thereof. Accordingly, these additional elements do not integrate the abstract idea into a practical application.
The remaining dependent claims do not recite any new additional elements, and thus do not integrate the abstract idea into a practical application. Thus, Examiner respectfully maintains that the present claims recite a judicial exception without significantly more. As such, Examiner remains unpersuaded.
Third, Applicant argues that the “Claims Amount to "Significantly More" than an Abstract Idea…
The claims specifically require the use of a non-generic AI based evaluation system comprising a first pre-trained machine learning model configured for classification tasks, and a second machine learning model (e.g., RankNet) trained using weighted evaluation features to compute developer rankings. The system is further configured to perform structured data processing operations, including processing ordinal values associated with the plurality of performance parameters of developers into neural network passable format using one-hot representations…
Additionally, the claims recite real time performance re-evaluation based on dynamic adjustments to evaluation criteria, which are informed by real time stakeholder feedback and evolving project requirements which are capabilities that go far beyond any mental process or routine business method. The claims also incorporate transfer learning mechanisms, allowing the system to adapt previously trained models to new environments, thereby enabling cross domain application and improving system efficiency.
The claimed invention transforms raw developer performance data and evolving contextual information into automated, actionable outcomes, such as reclassification, reranking, and performance feedback, without human intervention. These technical elements collectively provide a technological improvement in the functioning of performance evaluation systems by enhancing accuracy, fairness, adaptability, and scalability, solving real world technical problems that cannot be addressed manually or through conventional evaluation methods…
The claims qualify as "significantly more" than the abstract idea asserted by the Office because they add specific limitations other than what is well-understood, routine, and conventional in the field, and add unconventional steps that confine the claims to a particular useful application. See Mayo Collaborative serv. V. Prometheus Labs, Inc, 132 S. Ct. 1289, 1302 (2012).
The claimed invention also qualifies as "significantly more" as the claimed invention improves the functioning of image processing devices and database systems. See Alice Corp. Pty. Ltd. VS. CLS Bank Int'l 134 S. Ct. 2347, 2359 (2014). The technology of the Application provides a technical solution to the technical problem. See e.g., Amdocs (Israel) Ltd. VS. Opener Telecom, Inc. 841 F.3d 1288, 1300 (Fed. Cir. 2016) …” [Arguments, pages 29-34].
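As background on the second model Applicant references above, a RankNet-style pairwise ranker models the probability that one item outranks another as a sigmoid of the difference of their scores. A minimal sketch follows; the linear scorer standing in for the network, the feature weights, and the feature vectors are all assumptions for illustration, not taken from the claims:

```python
import numpy as np

def score(features: np.ndarray, weights: np.ndarray) -> float:
    """Linear scorer standing in for the ranking network."""
    return float(features @ weights)

def pair_prob(s_i: float, s_j: float) -> float:
    """RankNet models P(i ranked above j) as sigmoid(s_i - s_j)."""
    return 1.0 / (1.0 + np.exp(-(s_i - s_j)))

# Assumed weights over evaluation features (e.g. quality, efficiency, skill).
weights = np.array([0.5, 0.3, 0.2])
dev_a = np.array([0.9, 0.8, 0.7])   # hypothetical developer feature vectors
dev_b = np.array([0.4, 0.5, 0.6])

p = pair_prob(score(dev_a, weights), score(dev_b, weights))
assert 0.5 < p < 1.0   # the higher-scoring developer is ranked above
```

In training, such a model adjusts the scorer so these pairwise probabilities match observed orderings; within-category ranks can then be read off from the learned scores.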
In response, Applicant’s arguments have been considered but are not persuasive. Examiner respectfully disagrees and maintains that the present invention demonstrates neither an improvement to any particular field of technology nor to the functioning of a computer. In Step 2B, examiners should:
• Carry over their identification of the additional element(s) in the claim from Step 2A Prong Two, as well as the conclusions from Step 2A Prong Two on the considerations discussed in MPEP §§ 2106.05(a)-(c), (e), (f) and (h);
• Re-evaluate any additional element or combination of elements that was considered to be insignificant extra-solution activity per MPEP § 2106.05(g), because if such re-evaluation finds that the element is unconventional or otherwise more than what is well-understood, routine, conventional activity in the field, this finding may indicate that the additional element is no longer considered to be insignificant; and
• Evaluate whether any additional element or combination of elements are other than what is well-understood, routine, conventional activity in the field, or simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, per MPEP § 2106.05(d).
Examiner maintains that the additional elements, when considered in combination and in the context of the claims as a whole, fail to demonstrate an improvement to any particular field of technology. First, Examiner observes that the claimed additional elements, as cited in response to the above argument, are centered on an AI-based evaluation system and do not demonstrate an improvement to any particular field of technology. Examiner maintains that the claimed additional elements relate directly to the abstract idea itself, namely certain methods of organizing human activity (i.e., performance evaluations), rather than to any particular field of technology. Similarly, the claimed additional elements are not recited at a level of specificity which could be considered to demonstrate an improvement to computer functionality. Instead, Examiner respectfully maintains that the broadly claimed AI-based evaluation system merely invokes the computer as a tool in the performance of the abstract idea itself. Thus, Examiner respectfully maintains that the present claims recite a judicial exception without significantly more. As such, Examiner remains unpersuaded.
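For context on the transfer-learning mechanism discussed above (modifying a model with transferable knowledge, then tuning it on target-system characteristics), the step can be sketched as follows. Everything here is an illustrative assumption: a plain linear model stands in for the machine learning model, its weight vector stands in for the "transferable knowledge", and gradient descent stands in for the tuning procedure.

```python
import numpy as np

def tune(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
         lr: float = 0.1, steps: int = 200) -> np.ndarray:
    """Fine-tune transferred weights on target data via MSE gradient descent."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
source_weights = np.array([1.0, -0.5])   # "transferable knowledge" (assumed)
true_target = np.array([0.8, -0.4])      # target system's mapping (assumed)
X_target = rng.normal(size=(20, 2))      # target-system characteristics
y_target = X_target @ true_target

target_model = tune(source_weights, X_target, y_target)
# Tuning moves the transferred weights toward the target mapping.
assert (np.linalg.norm(target_model - true_target)
        < np.linalg.norm(source_weights - true_target))
```

The design point is that the source weights initialize, rather than replace, the target model: tuning starts from the transferred values instead of from scratch.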
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4, 7, 10-14 and 17-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1: Claims 1-4, 7, 10-14 and 17-18 are directed to statutory categories, namely a process (claims 1-4, 7 and 10), a machine (claims 11-14 and 17) and an article of manufacture (claim 18).
Step 2A, Prong 1: Claims 1, 11 and 18 in part, recite the following abstract idea:
…A method for evaluating performance of developers …the method comprising: receiving, …, each of a plurality of performance parameters associated with a set of developers, wherein the plurality of performance parameters comprise ordinal values; processing, by… , the ordinal values of the plurality of performance parameters into a neural network passable format using one hot representations; …with exposure to a new environment during an initial training for a classification task associated with the evaluating of performance of developers, wherein…; wherein …is trained using the plurality of performance parameters in the neural network passable format; and wherein training an embedding layer of … using the one hot representations generate a low dimensional embedding; creating… one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters, wherein the one or more feature vectors are created based on…, wherein the one or more performance parameters comprise efficiency of a developed product associated with a module developed for a product, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers; assessing, …, the one or more feature vectors, based on …; classifying, …, the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors, wherein the set of performance categories includes an excellent performer category, a good performer category, an average performer category, and a bad performer category; and evaluating, …, the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classifying; dynamically adjusting … a performance evaluation 
criterion, based on real-time feedback from multiple stakeholders and evolving project requirements; re-evaluating… the performance of the at least one of the set of developers using the dynamically adjusted performance evaluation criterion; and updating … performance assessment of the at least one of the set of developers to the stakeholders, in response to re-evaluating; modifying … with transferable knowledge for a target system to be evaluated, wherein the transferable knowledge corresponds to optimal values associated with the one or more feature vectors corresponding to each of the plurality of performance parameters; tuning … using specific characteristics of the target system to create a target model; and evaluating the target system performance using the target model to predict system performance of the target system [Claim 1],
…receive each of a plurality of performance parameters associated with a set of developers, wherein the plurality of performance parameters comprise ordinal values; process the ordinal values of the plurality of performance parameters into a neural network passable format using one hot representations; …with exposure to a new environment during an initial training for a classification task associated with the evaluating performance of developers, wherein…; wherein … using the plurality of performance parameters in the neural network passable format; and wherein training an embedding layer of … using the one hot representations generate a low dimensional embedding; create one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters, wherein the one or more feature vectors are created based on …, wherein the one or more performance parameters comprise efficiency of a developed product associated with a module developed for a product, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers; assess the one or more feature vectors, based on …; classify the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors, wherein the set of performance categories includes an excellent performer category, a good performer category, an average performer category, and a bad performer category; and evaluate the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classifying; dynamically adjust… a performance evaluation criterion, based on real-time feedback from multiple stakeholders and evolving project requirements; re-evaluate… the performance 
of the at least one of the set of developers using the dynamically adjusted performance evaluation criterion; and updating … performance assessment of the at least one of the set of developers to the stakeholders, in response to re-evaluating; modify … with transferable knowledge for a target system to be evaluated, wherein the transferable knowledge corresponds to optimal values associated with the one or more feature vectors corresponding to each of the plurality of performance parameters; tune … using specific characteristics of the target system to create a target model; and evaluate the target system performance using the target model to predict system performance of the target system [Claim 11],
…receiving each of a plurality of performance parameters associated with a set of developers, wherein the plurality of performance parameters comprise ordinal values; processing, by … the ordinal values of the plurality of performance parameters into a neural network passable format using one hot representations; …with exposure to a new environment during an initial training for a classification task associated with the evaluating performance of developers, wherein…; wherein … using the plurality of performance parameters in the neural network passable format; and wherein training an embedding layer of … using the one hot representations generate a low dimensional embedding; creating one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters, wherein the one or more feature vectors are created based on…, wherein the one or more performance parameters comprise efficiency of a developed product associated with a module developed for a product, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers; assessing the one or more feature vectors, based on…; classifying the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors, wherein the set of performance categories includes an excellent performer category, a good performer category, an average performer category, and a bad performer category; and evaluating the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classifying; dynamically adjusting … a performance evaluation criterion, based on real-time feedback from multiple stakeholders and evolving project requirements; 
re-evaluating… the performance of the at least one of the set of developers using the dynamically adjusted performance evaluation criterion; and updating … performance assessment of the at least one of the set of developers to the stakeholders, in response to re-evaluating; modifying … with transferable knowledge for a target system to be evaluated, wherein the transferable knowledge corresponds to optimal values associated with the one or more feature vectors corresponding to each of the plurality of performance parameters; tuning … using specific characteristics of the target system to create a target model; and evaluating the target system performance using the target model to predict system performance of the target system [Claim 18].
These concepts are not meaningfully different than the following concepts identified by the MPEP:
Concepts relating to certain methods of organizing human activity. The aforementioned limitations describe steps for managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. Specifically, classifying developers into performance categories is considered to describe steps for managing personal behavior as well as interactions between people. As such, claims 1, 11 and 18 are directed to concepts identified as abstract ideas.
The dependent claims recite limitations relative to the independent claims, including, for example:
…wherein evaluating the performance comprises: computing, for each of the set of performance categories, ranks for each developer from the set of developers categorized within an associated performance category…; and ranking each developer from the set of developers for each set of performance categories, based on the computed ranks, to evaluate the performance of each developer from the set of developers [Claim 2],
… further comprising…, wherein training comprises assigning weights to the one or more features associated with the each of the plurality of performance parameters based on a predefined evaluation criterion [Claim 3],
…wherein the predefined evaluation criterion comprises one or more of a technical skill in demand and an efficiency of a developed product with respect to bugs identified in the developed product, and wherein high weights are assigned to one or more features associated with at least one of the high demand technical skill as compared to a low demand technical skill and bug-free developed product as compared to the developed product with a plurality of bugs [Claim 4],
…further comprising: identifying a plurality of bugs associated with a module of a product developed by each of the set of developers; generating a feedback for each of the set of developers, wherein the feedback is generated in response of identifying the plurality of bugs associated with the product developed by each of the set of developers; and evaluating the performance of at least one of the set of developers, based on the feedback [Claim 7],
…further comprising: modifying the … with transferrable knowledge for a target system to be evaluated, wherein the transferrable knowledge corresponds to optimal values associated with the one or more feature vectors corresponding to each of the plurality of performance parameters; tuning … using specific characteristics of the target system to create a target model; and evaluating the target system performance using the target model to predict system performance of the target system [Claim 9],
…wherein …is configured to receive as input an input observation and an input action and to generate an estimated future reward from the input in accordance with each of the plurality of performance parameters associated with the set of developers [Claim 10].
The limitations of these dependent claims are merely narrowing the abstract idea identified in the independent claims, and thus, the dependent claims also recite abstract ideas.
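For reference, the Q network recited in claim 10 (receiving an input observation and an input action and generating an estimated future reward) can be sketched as follows. A tabular Q-function is used as a stand-in; an actual Q network would replace the table with a neural network, and all states, actions, and values here are illustrative assumptions.

```python
import numpy as np

NUM_OBS, NUM_ACTIONS = 3, 2
q_table = np.zeros((NUM_OBS, NUM_ACTIONS))   # tabular stand-in for the network

def q_value(obs: int, action: int) -> float:
    """Estimated future reward for taking `action` given observation `obs`."""
    return float(q_table[obs, action])

def q_update(obs: int, action: int, reward: float, next_obs: int,
             alpha: float = 0.5, gamma: float = 0.9) -> None:
    """Standard Q-learning update toward reward + discounted best next value."""
    target = reward + gamma * q_table[next_obs].max()
    q_table[obs, action] += alpha * (target - q_table[obs, action])

q_update(obs=0, action=1, reward=1.0, next_obs=1)
assert q_value(0, 1) == 0.5   # 0 + 0.5 * (1.0 + 0.9*0 - 0) = 0.5
```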
Step 2A, Prong 2: This judicial exception is not integrated into a practical application. In particular, claims 1, 11 and 18 only recite the following additional elements –
…using Artificial Intelligence (AI) … by an AI based evaluation system … ; …the AI based evaluation system…; … training a first machine learning model of the AI based evaluation system… the first machine learning model corresponds to a first pre-trained machine learning model…; the first machine learning model…; …the first pre-trained machine learning model…; …by the AI based evaluation system… the first pre-trained machine learning model…; …by the AI based evaluation system… the first pre-trained machine learning model…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; [Claim 1],
A system for evaluating performance of developers using Artificial Intelligence (AI), the system comprising: a processor; and a memory communicatively coupled to the processor, wherein the memory stores processor executable instructions, which, on execution, causes the processor to: …; … train a first machine learning model of the AI based evaluation system… the first machine learning model corresponds to a first pre-trained machine learning model…; the first machine learning model is trained…; …the first pre-trained machine learning model…; …by the AI based evaluation system… the first pre-trained machine learning model…; …by the AI based evaluation system… the first pre-trained machine learning model…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; the first pre-trained machine learning model; … the first pre-trained machine learning model… [Claim 11],
A non-transitory computer-readable medium storing computer-executable instructions for contextually aligning a title of an article with content within the article, the stored instructions, when executed by a processor, cause the processor to perform operations comprising…; … training a first machine learning model of the AI based evaluation system… the first machine learning model corresponds to a first pre-trained machine learning model; …the AI based evaluation system…; … the first pre-trained machine learning model…; … the first pre-trained machine learning model…; the first machine learning model…; …the first pre-trained machine learning model…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; the first pre-trained machine learning model; … the first pre-trained machine learning model… [Claim 18].
The dependent claims only recite the following new additional elements –
…based on a second machine learning model… [Claims 2 and 12],
…training the second machine learning model… [Claims 3 and 13],
… the first pre-trained machine learning model corresponds to a Q network, and wherein the Q network… [Claim 10].
The machine learning model, apparatus and executable instructions are recited at a high-level of generality (see MPEP § 2106.05(a)), like the following MPEP example:
iii. Gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48;
Furthermore, the AI, machine learning and computer-implemented elements are considered to amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)), similar to Claim 2 of Example 47 of the July 2024 Subject Matter Eligibility Examples and to the following MPEP examples:
i. A commonplace business method or mathematical algorithm being applied on a general purpose computer, Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014); Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015);
Accordingly, these additional elements do not integrate the abstract idea into a practical application.
The remaining dependent claims do not recite any new additional elements, and thus do not integrate the abstract idea into a practical application.
Step 2B: Claims 1, 11 and 18 and their underlying limitations, steps, features and terms, considered both individually and as a whole, do not include additional elements that are sufficient to amount to significantly more than the judicial exception for the following reasons:
Independent claims 1, 11 and 18 only recite the following additional elements –
…using Artificial Intelligence (AI) … by an AI based evaluation system … ; …the AI based evaluation system…; … training a first machine learning model of the AI based evaluation system… the first machine learning model corresponds to a first pre-trained machine learning model…; the first machine learning model…; …the first pre-trained machine learning model…; …by the AI based evaluation system… the first pre-trained machine learning model…; …by the AI based evaluation system… the first pre-trained machine learning model…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; [Claim 1],
A system for evaluating performance of developers using Artificial Intelligence (AI), the system comprising: a processor; and a memory communicatively coupled to the processor, wherein the memory stores processor executable instructions, which, on execution, causes the processor to: …; … train a first machine learning model of the AI based evaluation system… the first machine learning model corresponds to a first pre-trained machine learning model…; the first machine learning model is trained…; …the first pre-trained machine learning model…; …by the AI based evaluation system… the first pre-trained machine learning model…; …by the AI based evaluation system… the first pre-trained machine learning model…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; the first pre-trained machine learning model; … the first pre-trained machine learning model… [Claim 11],
A non-transitory computer-readable medium storing computer-executable instructions for contextually aligning a title of an article with content within the article, the stored instructions, when executed by a processor, cause the processor to perform operations comprising…; … training a first machine learning model of the AI based evaluation system… the first machine learning model corresponds to a first pre-trained machine learning model; …the AI based evaluation system…; … the first pre-trained machine learning model…; … the first pre-trained machine learning model…; the first machine learning model…; …the first pre-trained machine learning model…; …by the AI based evaluation system…; …by the AI based evaluation system…; …by the AI based evaluation system…; the first pre-trained machine learning model; … the first pre-trained machine learning model… [Claim 18].
These elements do not amount to significantly more than the abstract idea for the reasons discussed in Step 2A Prong Two with regard to MPEP 2106.05(a) and MPEP 2106.05(f). Because these elements fail to integrate the abstract idea into a practical application there, they likewise fail to provide an inventive concept that amounts to significantly more than the abstract idea here, in Step 2B.
As such, both individually and in combination, these limitations do not add significantly more to the judicial exception.
The remaining dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the dependent claims do not recite any new additional elements other than those mentioned in the independent claims, which amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). As such, these claims are not patent eligible.
Prior Art Considerations
Examiner conducted a thorough search of the body of available prior art (see the attached PTO-892 Notice of References Cited and EAST Search History). Notably, Examiner discovered several patent documents that taught aspects of the invention, but no single disclosure taught “every element required by the claims under its broadest reasonable interpretation” [MPEP § 2131] as would support a 35 USC § 102 rejection. Further, Examiner considered the individual elements of the recited claims as taught across the prior art cited below, but did not find it obvious to combine such disclosures [MPEP § 2142] as would support a 35 USC § 103 rejection.
In particular, Wright et al., U.S. Publication No. 2020/0225945 [hereinafter Wright], discloses “Methods, systems, and apparatus, including computer programs encoded on computer storage media, for receiving a source code change; computing a distribution of standard coding durations using a model that takes as input features of source code changes; and computing a representative duration for the code change using the distribution of standard coding durations, wherein the representative duration represents a measure of how long a standard developer defined by the model would take to make the code change” (Wright, Abstract). Wright discloses several aspects of the present invention and focuses on optimizing likelihoods for starting, ending, and committing coding sessions. However, Wright is silent with respect to optimal feature vectors being transferred to modify an existing model.
Thus, Wright fails to teach, disclose, or suggest “.... modifying the first pre-trained machine learning model with transferable knowledge for a target system to be evaluated, wherein the transferable knowledge corresponds to optimal values associated with the one or more feature vectors corresponding to each of the plurality of performance parameters.....”.
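For orientation only, the “transferable knowledge” limitation quoted above describes a transfer-learning step: optimal values for the feature vectors, learned on a source system, are copied into the pre-trained model before it is applied to the target system, while the model’s remaining parameters are left intact. A minimal illustrative sketch in plain Python, using hypothetical parameter names in place of any particular machine learning framework or the claimed system’s actual structure:

```python
# Hypothetical sketch only: a "model" is represented as a dict of named
# parameter lists, not as any framework-specific object.

def modify_pretrained_model(pretrained_model, transferable_knowledge):
    """Overwrite the feature-vector parameters of a pre-trained model
    with optimal values learned on a source system, leaving all other
    layers of the model untouched."""
    target_model = dict(pretrained_model)  # copy; source model is preserved
    for parameter_name, optimal_values in transferable_knowledge.items():
        target_model[parameter_name] = optimal_values
    return target_model

# Pre-trained model: feature vectors per performance parameter, plus
# other layers that the transfer step must not modify.
pretrained = {
    "code_quality_features": [0.1, 0.2, 0.3],
    "peer_support_features": [0.5, 0.5, 0.0],
    "hidden_layer_weights": [0.7, -0.4],
}

# Transferable knowledge: optimal feature-vector values from the
# source system, keyed by performance parameter.
knowledge = {
    "code_quality_features": [0.9, 0.1, 0.0],
    "peer_support_features": [0.2, 0.6, 0.2],
}

target = modify_pretrained_model(pretrained, knowledge)
print(target["code_quality_features"])  # replaced by optimal values
print(target["hidden_layer_weights"])   # carried over unchanged
```

The sketch is intended only to clarify what the limitation requires of the prior art: a disclosure must show the learned optimal feature-vector values themselves being transferred into an existing model, not merely re-training or re-use of that model.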
Deshpande, U.S. Publication No. 2011/0173052 [hereinafter Deshpande], discloses an enhanced knowledge management system wherein “A knowledge management module associated with an organization hierarchically arranges knowledge relevant to the organization. The knowledge management module may communicate with databases internal and external to the organization to connect individuals to information regarding elements within the map. In certain embodiments, the management server also includes, for each element in the knowledge management map, information related to the levels of expertise of personnel associated with the organization.” (Deshpande, Abstract).
While Deshpande discloses several aspects of the present invention, such as performance parameters including feedback received by managers, technical skills of each of a set of developers, and types of support received by peers, Deshpande does not disclose “.... modifying the first pre-trained machine learning model with transferable knowledge for a target system to be evaluated, wherein the transferable knowledge corresponds to optimal values associated with the one or more feature vectors corresponding to each of the plurality of performance parameters.....”, as stated in the presently amended claims.
Alt et al., U.S. Publication No. 2021/0179118 [hereinafter Alt], discloses a method for determining control parameters for a control system, wherein “The method includes: providing a set of travel trajectories; deriving reward functions from the travel trajectories, using an inverse reinforcement learning method; deriving driver type-specific clusters based on the reward functions; determining control parameters for a particular driver type-specific cluster” (Alt, Abstract).
While Alt discloses some aspects of the present invention, including evaluating performance using an inverse learning technique, Alt does not disclose “.... modifying the first pre-trained machine learning model with transferable knowledge for a target system to be evaluated, wherein the transferable knowledge corresponds to optimal values associated with the one or more feature vectors corresponding to each of the plurality of performance parameters.....”, as stated in the presently amended claims.
Woulfe et al., U.S. Publication No. 2018/0276584 [hereinafter Woulfe] discloses a method of facilitating organizational management using bug data, wherein a “risk factor can be used to determine the quality of the developer's code. The risk factor associated with code produced by a particular developer can be provided to a manager or management system. The risk factor can be used to provide bug-based information to a corporate review and reward process.” (Woulfe, Abstract).
While Woulfe discloses some aspects of the present invention including identifying a plurality of bugs associated with a module of a product developed by each of the set of developers, Woulfe fails to disclose “.... modifying the first pre-trained machine learning model with transferable knowledge for a target system to be evaluated, wherein the transferable knowledge corresponds to optimal values associated with the one or more feature vectors corresponding to each of the plurality of performance parameters.....”, as stated in the presently amended claims.
Tiku et al., U.S. Publication No. 2021/0035013 [hereinafter Tiku], discloses refined user enablement utilizing reinforced learning wherein “A processor may receive profile data associated with a user. The processor may identify, from the profile data, a degree of proficiency of the user. The degree of proficiency may indicate an ability of the user to relay specific information to a second user. The processor may designate the degree of proficiency as a first level. The processor may generate a proposal to increase the first level to a second level. The increase may indicate an increase in the degree of proficiency. The processor may display the proposal to the user” (Tiku, Abstract).
While Tiku discloses some aspects of the present invention including the Q network elements of claim 10, Tiku does not disclose “.... modifying the first pre-trained machine learning model with transferable knowledge for a target system to be evaluated, wherein the transferable knowledge corresponds to optimal values associated with the one or more feature vectors corresponding to each of the plurality of performance parameters.....”, as stated in the presently amended claims.
For the above reasons, Examiner has determined that the currently pending claims are novel and non-obvious in view of the current search. Amendment to the claims, and any further search necessitated by such amendment, may render the claims anticipated or obvious in future prosecution; that determination will be made at that time.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Strachan et al., U.S. Publication No. 2018/0088939 discloses timing estimations for application lifecycle management work items determined through machine learning.
Tornhill, U.S. Publication No. 2020/0249941 discloses ranking of software code parts.
Burton et al., U.S. Publication No. 2019/0026106 discloses associating software issue reports with changes to code.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS D BOLEN, whose telephone number is (408) 918-7631. The examiner can normally be reached Monday - Friday 8:00 AM - 5:00 PM PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patty Munson can be reached on (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS D BOLEN/ Examiner, Art Unit 3624
/HAMZEH OBAID/Primary Examiner, Art Unit 3624 February 3, 2026