DETAILED ACTION
Status of the Application
The following is a non-Final Office Action. In response to Examiner's communication of October 28, 2025, Applicant, on January 14, 2026, amended claims 1, 6, & 7 and added claim 8. Claim 3 was previously canceled. Claims 1, 2, & 4-8 are now pending in this application and have been rejected below.
The present application is being examined under the pre-AIA first to invent provisions. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 14, 2026 has been entered.
Response to Amendment
Applicant's amendments are not sufficient to overcome the 35 USC 101 rejections set forth in the previous action. Therefore, these rejections are updated in view of the amendments and maintained below.
Applicant's amendments are not sufficient to overcome the 35 USC 103 rejections set forth in the previous action. Therefore, these rejections are updated in view of the amendments and maintained below.
Response to Arguments - 35 USC § 101
Applicant’s arguments with respect to the 35 USC 101 rejections have been fully considered, but they are not persuasive.
Applicant argues that the claims do not recite a judicial exception because the claims recite the limitation of "track, by the processor, lines of code entered by a worker and determine a plurality of parameters based on the tracked information, wherein the lines of code is tracked by processing and analyzing raw information," that a human mind cannot practically perform "track codes by processing raw information," and that, therefore, the claims do not fall into the mental process grouping. Examiner respectfully disagrees.
The limitation referred to by Applicant, "the lines of code is tracked by processing and analyzing raw information," is a recitation of tracking the behavior of human workers performing their work, which is used to rank workers and evaluate the quality of the work they perform; thus, this recitation manages human behavior and recites a certain method of organizing human activity. Further, the claims and specification do not require any specific technology to track this information, nor do they require that the raw information be in any particular format that cannot be observed and tracked by a human. In view of the specification, the term "raw information" is nothing more than numerical parameters that represent the measured number of lines created and the occurrence of bugs, which allow calculation of the number of lines of code created per hour, the number of bugs per line of code, and the like (See Spec. [0023]-[0024], [0039], Fig. 5). Under the broadest reasonable interpretation, a human can observe and count the raw information of the number of lines created and the occurrence of bugs, either by observing the worker performing their work or by observing a paper record of the numeric parameters; thus, the limitation referred to by Applicant also recites a mental process. Therefore, the limitations referred to by Applicant recite an abstract idea.
Aside from the generic computer components (the processors) implementing the limitations referred to by Applicant, and for the reasons detailed below, the limitations referred to by Applicant not only recite a mental process, because they can be performed by a human mentally observing information regarding workers' behaviors, evaluating and using judgment regarding the observed information, and outputting the evaluation with pen and paper, but also recite a certain method of organizing human activity, because they manage the human behavior of workers by collecting information regarding the amount of work performed by a person and modeling, tracking, weighting, ranking, evaluating, and outputting an evaluation of the quality of work performed by workers.
Pursuant to 2019 Revised Patent Subject Matter Eligibility Guidance, in order to determine whether a claim is directed to an abstract idea, under Step 2A, we first (1) determine whether the claims recite limitations, individually or in combination, that fall within the enumerated subject matter groupings of abstract ideas (mathematical concepts, certain methods of organizing human activity, or mental processes), and (2) determine whether any additional elements beyond the recited abstract idea, individually and as an ordered combination, integrate the judicial exception into a practical application. 84 Fed. Reg. 52, 54-55. Next, if a claim (1) recites an abstract idea and (2) does not integrate that exception into a practical application, in order to determine whether the claim recites an “inventive concept,” under Step 2B, we then determine whether any of the additional elements beyond the recited abstract idea, individually and in combination, are significantly more than the abstract idea itself. 84 Fed. Reg. 56.
Under Prong 1 of Step 2A, Claim 1, and similarly claims 2 & 4-8, recites "quality management …: collect work information of a plurality of workers who perform software development, wherein work information is collected …; create a base model serving as a base for evaluation of work quality on a basis of the work information of the plurality of workers. wherein the base model is created in parallel with creation of a rank for each of the plurality of workers; track … lines of code entered by a worker and determine a plurality of parameters based on the tracked information, wherein the lines of code are tracked by processing and analyzing raw information; assign a weight to each of parameters and the processor is configured to calculate the rank by multiplying, for each of the parameters, the weight by each of the parameters; create a base model serving as a base for evaluation of work quality on a basis of the work information of the plurality of workers; evaluate work quality of the worker based on a comparison of the base model and determined rank; and display the evaluation." Claims 1, 2, & 4-8, in view of the claim limitations, recite the abstract idea of collecting work information of a plurality of workers, creating a base model of work quality for the plurality of workers, tracking lines of code entered by a worker to determine parameters, assigning a weight to the parameters, calculating a rank by multiplying the weight by the parameters, evaluating work quality of a particular worker based on the base model and the rank, and outputting the evaluation.
Each of the above limitations manages personal human behavior and provides instructions or rules to follow to manage the human behavior of workers by collecting information regarding work performed by workers and modeling, tracking, weighting, ranking, evaluating, and outputting the quality of work performed by workers; thus, the claims, including the limitations referred to by Applicant, recite certain methods of organizing human activity. In addition, as a whole and in view of the claim limitations, but for the computer components and systems performing the claimed functions, the broadest reasonable interpretation of the recited collecting work information of a plurality of workers, creating a base model of work quality for the plurality of workers, tracking lines of code entered by a worker to determine parameters, assigning a weight to the parameters, calculating a rank by multiplying the weight by the parameters, evaluating work quality of a particular worker based on the base model and the rank, and outputting the evaluation could reasonably be interpreted as: a human making observations of information regarding the work information of workers and the lines of code entered by workers; a human mentally performing an evaluation and using judgment based on the observed information to model the workers' quality, assign a weight, and calculate a rank; a human mentally performing an evaluation and using judgment by comparing the model of the workers' quality and the rank to evaluate the quality of the particular worker; and a human outputting the results of the evaluation. Therefore, the claims recite mental processes.
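For illustration only, the recited rank calculation can be expressed in a few lines of ordinary arithmetic, which underscores that it can practically be performed mentally or with pen and paper. All parameter names, weights, and values below are hypothetical and are not drawn from the claims or the specification:

```python
# Illustrative sketch only; the parameter names, weights, and values
# are hypothetical and not taken from the record.

# Observed "raw information" reduced to numeric parameters for one worker,
# e.g. lines of code created per hour and bugs per line of code.
parameters = {"lines_per_hour": 30.0, "bugs_per_line": 0.02}

# A weight assigned to each parameter (a negative weight penalizes bugs).
weights = {"lines_per_hour": 1.0, "bugs_per_line": -100.0}

# "calculate the rank by multiplying, for each of the parameters,
# the weight by each of the parameters" -- a simple weighted sum.
rank = sum(weights[p] * parameters[p] for p in parameters)

# A "base model" here is simply a hypothetical average rank computed
# from the work information of the plurality of workers.
base_model = 25.0

# "evaluate work quality ... based on a comparison of the base model
# and determined rank"
evaluation = "above base" if rank > base_model else "at or below base"
print(rank, evaluation)
```

Each step of this sketch (counting, multiplying, summing, comparing) is arithmetic that a human can perform with pen and paper.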
Applicant argues that the claims, as a whole, integrate the alleged judicial exceptions into a practical application because the claims recite specific improvements to the technical field of appropriately evaluating the work quality of a worker in software development by reciting limitations of "track, by the processor, lines of code entered by a worker and determine a plurality of parameters based on the tracked information, wherein the lines of code is tracked by processing and analyzing raw information.” Examiner respectfully disagrees.
As noted above, under Prong 2 of Step 2A, Examiners determine whether any additional elements beyond the recited abstract idea, individually and as an ordered combination, integrate the judicial exception into a practical application.
The allegedly improved "technical field of appropriately evaluat[ing] the work quality of a worker in software development" concerns the management of human behavior, and thus, the alleged technical field is a certain method of organizing human activity. Therefore, the allegedly improved technology is an improvement to an abstract idea.
Further, for the reasons set forth above, aside from the generic computer components (the processors) implementing them, the limitations referred to by Applicant are not additional elements beyond the recited abstract idea, nor are they necessarily rooted in computer technology. Rather, the limitations referred to by Applicant recite abstract ideas: they recite a certain method of organizing human activity, because they manage the human behavior of workers by collecting information regarding work performed by workers to rank and evaluate the quality of workers' behaviors, and also a mental process, because they can be performed by a human mentally observing information regarding worker behavior manually and/or with pen and paper.
In view of the specification, the term “raw information” is nothing more than numerical parameters that represent the measured number of lines created and occurrence of bugs to allow calculation of the number of lines of code created per hour, the number of bugs per line of code, and the like (See Spec. [0023]-[0024], [0039], Fig. 5), which does not evince this tracking is performed using any particular technology, let alone an improvement to technology.
The allegedly improved technology and the limitations reciting the alleged improvement to technology are improvements to an abstract idea and recitations of the recited abstract idea; however, the MPEP makes clear “an improvement in the abstract idea itself (e.g. a recited fundamental economic concept) is not an improvement in technology” and that “[m]ere automation of manual processes” is not an improvement in computer technology. See MPEP 2106.05(a).
With respect to the recitations of the processors, the elements are nothing more than generic computer components implementing the recited abstract idea, which is not sufficient to integrate an abstract idea into a practical application.
As in the claims at issue in Electric Power Group, the present claims are not focused on a specific improvement in computers or any other technology, but instead on certain independently abstract ideas that simply invoke computers as tools to implement the abstract idea. Electric Power Group, LLC v. Alstom S.A., et al., No. 2015-1778, slip op. at 8 (Fed. Cir. Aug. 1, 2016); MPEP 2106.05(a).
Under Prong 2 of Step 2A, the claims recite the additional elements beyond the recited abstract idea of "[a] … apparatus comprising a processor configured to," "by a plurality of collection processors," "by the processor," and "on a graphical user interface" in claim 1, and similarly claims 6 and 7; however, individually and when viewed as an ordered combination, and pursuant to the broadest reasonable interpretation, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea on a computer (i.e. apply it), and thus, they are no more than applying the abstract idea with generic computer components, which is not sufficient to integrate an abstract idea into a practical application.
Applicant argues the claims are patent eligible because they recite additional elements that are "unconventional or otherwise more than what is well-understood, routine, conventional activity in the field" and a particular solution to address the computer-centric challenge of appropriately evaluating the work quality of a worker in software development by reciting "track, by the processor, lines of code entered by a worker and determine a plurality of parameters based on the tracked information, wherein the lines of code is tracked by processing and analyzing raw information." Examiner respectfully disagrees.
As noted above, under Step 2B, Examiners determine whether any additional elements beyond the recited abstract idea, individually and as an ordered combination, are significantly more than the abstract idea itself.
For the reasons set forth above, aside from the generic computer components (the processors) implementing them, the limitations referred to by Applicant are not additional elements beyond the recited abstract idea, nor are they necessarily rooted in computer technology. Rather, the limitations referred to by Applicant recite abstract ideas: they recite a certain method of organizing human activity, because they manage the human behavior of workers by collecting information regarding work performed by workers to rank and evaluate the quality of workers' behaviors, and also a mental process, because they can be performed by a human mentally observing information regarding worker behavior manually and/or with pen and paper.
Further, the alleged solution to the computer-centric challenge of "appropriately evaluat[ing] the work quality of a worker in software development" addresses a challenge in managing the human behavior of a worker, and thus, the challenge that is solved by the claims is a certain method of organizing human activity. Therefore, the alleged challenge that is solved by the claims is directed to an abstract idea.
The limitations referred to by Applicant reciting the alleged solution, and the alleged challenge itself, are recitations of the recited abstract idea and a challenge directed to an abstract idea; however, the MPEP makes clear that "an improvement in the abstract idea itself (e.g. a recited fundamental economic concept)" and "[m]ere automation of manual processes" are not improvements in computer technology or otherwise sufficient to transform an abstract idea into a patent eligible invention. See MPEP 2106.05(a).
With respect to the recitations of the processors, the elements are nothing more than generic computer components implementing the recited abstract idea, which is not sufficient to amount to significantly more than the abstract idea itself.
As in the claims at issue in Electric Power Group, the present claims are not focused on a specific improvement in computers or any other technology, but instead on certain independently abstract ideas that simply invoke computers as tools to implement the abstract idea. Electric Power Group, LLC v. Alstom S.A., et al., No. 2015-1778, slip op. at 8 (Fed. Cir. Aug. 1, 2016); MPEP 2106.05(a).
Under Step 2B, like in Prong 2 of Step 2A, the claims recite the additional elements beyond the recited abstract idea of "[a] … apparatus comprising a processor configured to," "by a plurality of collection processors," "by the processor," and "on a graphical user interface" in claim 1, and similarly claims 6 and 7; however, individually and when viewed as an ordered combination, and pursuant to the broadest reasonable interpretation, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea on a computer (i.e. apply it), and thus, they are no more than applying the abstract idea with generic computer components, which is not sufficient to amount to significantly more than the abstract idea itself. Further, the aforementioned additional elements beyond the recited abstract idea also generally link the abstract idea to a field of use, namely a generic computer with a generic computer interface, which is not sufficient to amount to significantly more than an abstract idea; therefore, the additional elements are not sufficient to amount to significantly more than an abstract idea. Additionally, these recitations, as an ordered combination, simply append the abstract idea to recitations of generic computer structure performing generic computer functions that are well-understood, routine, and conventional in the field, as evinced by Applicant's Specification at [0042]-[0043] (describing that the present invention can be implemented as a computer program executed by hardware resources such as a CPU and a memory built in a computer, which describes the computer components with such a high level of generality of well-known computer components that the Specification does not support an implementation using anything other than well-understood, routine, and conventional components).
Furthermore, as an ordered combination, these elements amount to generic computer components performing repetitive calculations and receiving or transmitting data over a network, which, as held by the courts, are well-understood, routine, and conventional. See MPEP 2106.05(d); July 2015 Update, p. 7.
Looking at these limitations as an ordered combination adds nothing additional that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use a generic arrangement of generic computer components and recitations of generic computer structure that perform well-understood, routine, and conventional computer functions that are used to “apply” the recited abstract idea. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claims as a whole amount to significantly more than the abstract idea itself.
Response to Arguments - Prior Art
Applicant's arguments with respect to the claims have been fully considered, but they are now moot in view of the new grounds of rejection necessitated by Applicant's amendments.
Applicant argues that the Office Action failed to state a prima facie case of obviousness and/or that the current amendments to the claims now render arguments in the Office Action moot because claim 1, and similarly claims 6 and 7, recites "create a base model serving as a base for evaluation of work quality on a basis of the work information of the plurality of workers, wherein the base model is created in parallel with creation of a rank for each of the plurality of workers[...]" and Bonmassar still fails to teach or suggest that "the base model is created in parallel with creation of a rank for each of the plurality of workers." Examiner respectfully disagrees.
Bonmassar (US 20130290207 A1), hereinafter Bonmassar, teaches that the base model is created in parallel with creation of a rank for each of the plurality of workers in paragraphs [0121]-[0124]. There, scores/ranks can be provided for each individual in a group as well as for the group as a whole (e.g., an average score for the named individuals in a group can be provided); a recruitment service request 210 for ranking/scoring of individuals can include groups such as Group 218 of possible job candidates, Group 220 of individuals who applied in the last month, Group 230 of individuals who have been offered a job, Group 232 of individuals who have been hired, and Group 228 of current employees performing the job; in response to a recruitment service request including a number of named individuals (210 or 214), scores/ranks can be returned to the requester in 212 and/or 216; and the system 12 can be configured to only return information on individuals whose score or rank is above the score (e.g., average) of the comparison group. Here, the claimed "base model" is the rank/score (e.g., average) of the comparison group returned in response to the request taught by Bonmassar, and the claimed "rank for each of the plurality of workers" is the rank/score for individuals taught by Bonmassar (e.g., with rank/score above the average score of the comparison group) that is returned along with the rank/score (e.g., average) of the comparison group. These ranks/scores of the individuals (i.e., the rank for each of the plurality of workers) and of the comparison group (i.e., the base model) are created "in parallel" because both are generated together in response to the same request and both are returned together in the same response.
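For illustration only, the mapping above can be sketched as follows. The function, names, and score values are hypothetical and are not drawn from Bonmassar; the sketch merely illustrates how individual scores and a comparison-group average can be generated together in response to a single request and returned together in the same response:

```python
# Illustrative sketch only; the function, names, and score values are
# hypothetical and not taken from Bonmassar.

def score_request(scores):
    """Hypothetical handler for one ranking/scoring request.

    Computes the comparison-group score (an average, i.e., the claimed
    "base model" in the mapping above) and the per-individual scores
    (the claimed "rank for each of the plurality of workers") in the
    same pass, returning only individuals above the group average.
    """
    average = sum(scores.values()) / len(scores)  # comparison-group score
    above = {name: s for name, s in scores.items() if s > average}
    return average, above  # both returned in the same response

# One request covering a hypothetical group of named individuals.
group = {"alice": 80, "bob": 60, "carol": 70}
average, above = score_request(group)
# The group average and the individual scores above it are generated
# together in response to the same request, i.e., "in parallel."
```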
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, & 4-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1, and similarly claims 2 & 4-8, recites "quality management …: collect work information of a plurality of workers who perform software development, wherein work information is collected …; create a base model serving as a base for evaluation of work quality on a basis of the work information of the plurality of workers. wherein the base model is created in parallel with creation of a rank for each of the plurality of workers; track … lines of code entered by a worker and determine a plurality of parameters based on the tracked information, wherein the lines of code are tracked by processing and analyzing raw information; assign a weight to each of parameters and the processor is configured to calculate the rank by multiplying, for each of the parameters, the weight by each of the parameters; create a base model serving as a base for evaluation of work quality on a basis of the work information of the plurality of workers; evaluate work quality of the worker based on a comparison of the base model and determined rank; and display the evaluation." Claims 1, 2, & 4-8, in view of the claim limitations, recite the abstract idea of collecting work information of a plurality of workers, creating a base model of work quality for the plurality of workers, tracking lines of code entered by a worker to determine parameters, assigning a weight to the parameters, calculating a rank by multiplying the weight by the parameters, evaluating work quality of a particular worker based on the base model and the rank, and outputting the evaluation.
Each of the above limitations manages personal human behavior and provides instructions or rules to follow to manage the human behavior of workers by collecting, modeling, tracking, weighting, ranking, evaluating, and outputting the quality of work performed by workers; thus, the claims recite certain methods of organizing human activity. In addition, as a whole and in view of the claim limitations, but for the computer components and systems performing the claimed functions, the broadest reasonable interpretation of the recited collecting work information of a plurality of workers, creating a base model of work quality for the plurality of workers, tracking lines of code entered by a worker to determine parameters, assigning a weight to the parameters, calculating a rank by multiplying the weight by the parameters, evaluating work quality of a particular worker based on the base model and the rank, and outputting the evaluation could reasonably be interpreted as: a human making observations of information regarding the work information of workers and the lines of code entered by workers; a human mentally performing an evaluation and using judgment based on the observed information to model the workers' quality, assign a weight, and calculate a rank; a human mentally performing an evaluation and using judgment by comparing the model of the workers' quality and the rank to evaluate the quality of the particular worker; and a human outputting the results of the evaluation. Therefore, the claims recite mental processes.
Further, with respect to the dependent claims, aside from the additional elements beyond the recited abstract idea addressed below under the second prong of Step 2A and under Step 2B, the limitations of dependent claims 2, 4, 5, & 8 recite similar further abstract limitations to those discussed above that narrow the abstract idea recited in the independent claims because, aside from the computer components and systems performing the claimed functions, the limitations of these claims recite mental processes that can be practically performed mentally by observing, evaluating, and judging information mentally and/or with pen and paper, and recite a certain method of organizing human activity that manages the human behavior of workers. Accordingly, since the claims recite a certain method of organizing human activity and mental processes, the claims recite an abstract idea under the first prong of Step 2A.
This judicial exception is not integrated into a practical application under the second prong of Step 2A. In particular, the claims recite the additional elements beyond the recited abstract idea of "[a] … apparatus comprising a processor configured to," "by a plurality of collection processors," "by the processor," and "on a graphical user interface" in claim 1, "[a] … method executed by a processor," "by a plurality of collection processors," "by the processor," and "on a graphical user interface" in claim 6, and "[a] non-transitory computer-readable recording medium storing a program for causing a computer to," "by a plurality of collection processors," "by the processor," and "on a graphical user interface" in claim 7; however, individually and when viewed as an ordered combination, and pursuant to the broadest reasonable interpretation, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea on a computer (i.e. apply it), and thus, they are no more than applying the abstract idea with generic computer components. Moreover, aside from the aforementioned additional elements discussed above, the remaining elements of dependent claims 2, 4, 5, & 8 do not integrate the abstract idea into a practical application because these claims merely recite further limitations that provide no more than simply narrowing the recited abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B. As noted above, the aforementioned additional elements beyond the recited abstract idea, as an ordered combination, are no more than mere instructions to implement the idea using generic computer components (i.e. apply it), and further, generally link the abstract idea to a field of use, which is not sufficient to amount to significantly more than an abstract idea; therefore, the additional elements are not sufficient to amount to significantly more than an abstract idea. Additionally, these recitations, as an ordered combination, simply append the abstract idea to recitations of generic computer structure performing generic computer functions that are well-understood, routine, and conventional in the field, as evinced by Applicant's Specification at [0042]-[0043] (describing that the present invention can be implemented as a computer program executed by hardware resources such as a CPU and a memory built in a computer, which describes the computer components with such a high level of generality of well-known computer components that the Specification does not support an implementation using anything other than well-understood, routine, and conventional components). Furthermore, as an ordered combination, these elements amount to generic computer components performing repetitive calculations and receiving or transmitting data over a network, which, as held by the courts, are well-understood, routine, and conventional. See MPEP 2106.05(d); July 2015 Update, p. 7. Moreover, aside from the aforementioned additional elements discussed above, the remaining elements of dependent claims 2, 4, 5, & 8 do not transform the recited abstract idea into a patent eligible invention because these claims merely recite further limitations that provide no more than simply narrowing the recited abstract idea.
Looking at these limitations as an ordered combination adds nothing additional that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use a generic arrangement of generic computer components and recitations of generic computer structure that perform well-understood, routine, and conventional computer functions that are used to “apply” the recited abstract idea. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claims as a whole amount to significantly more than the abstract idea itself. Since there are no limitations in these claims that transform the exception into a patent eligible application such that these claims amount to significantly more than the exception itself, claims 1, 2, & 4-8 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 6, & 7 are rejected under 35 U.S.C. 103 as being unpatentable over Hicks, et al. (US 20210224064 A1), hereinafter Hicks, in view of Bonmassar (US 20130290207 A1), hereinafter Bonmassar.
Regarding claim 1, Hicks discloses a quality management apparatus comprising a processor configured to ([0021], [0020]):
collect work information of a plurality of workers who perform software development (Abstract, [0003], a method includes maintaining a plurality of metrics in an expertise score vector corresponding to a developer and identifying a subset of the plurality of metrics that are relevant to a work item corresponding to a software component, [0013]-[0014], respective expertise score vectors may be maintained for each developer in an organization, an expertise score vector may include any appropriate metrics regarding a developer, including but not limited to time spent using a technology or skill (i.e., five years using Java, 6 months doing front-end development, etc.), certifications, awards, and/or badges earned, time spent working on a software component, number of lines of code written using a technology, and number of lines of code written in a software component, an expertise score may be determined by tracking the various metrics in the expertise score vector, [0016], an expertise score vector may include a problem records metric that tracks a number of problem records that have been opened for code written by an individual developer, [0042], the software component management system 600 includes an expertise score vector module 601, which may maintain a respective expertise score vector of expertise score vectors 602A-N for each developer across various teams in the organization, [0051], expertise score vector 602N may include respective component mastery metrics 631A-N for each software component that the developer has contributed work to, including an amount of time required by the developer to produce a unit of contribution to the associated software component, wherein the unit of contribution may be measured in any appropriate manner (e.g. 
task completed, or lines of code), [0053], expertise score vector 602N may also include code quality metrics 633, problem records metrics 634), wherein work information is collected by a plurality of collection processors ([0020]-[0021], [0023], computer system 100 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network, program modules may be located in both local and remote computer system storage media including memory storage devices, and the computer system 100 has one or more central processing units (CPU(s)) 101a, 101b, 101c, etc. (collectively or generically referred to as processor(s) 101), wherein the software 111 is stored as instructions for execution by the processors 101 to cause the computer system 100 to operate, such as is described herein);
create [vector of metrics] serving as a base for evaluation of work quality on a basis of the work information of the plurality of workers (Abstract, an expertise score is determined based on the weighted subset of the plurality of metrics, wherein determining the expertise score comprises determining a magnitude of a vector comprising the weighted subset of the plurality of metrics, [0013]-[0014], respective expertise score vectors may be maintained for each developer in an organization to identify levels of skills and component mastery for individual developers, and an expertise score may be determined by applying respective weights to the metrics that are relevant to the particular software component or skill, adding the weighted metrics to a subset vector, and calculating the magnitude of the subset vector, [0042], [0046], the software component management system 600 includes an expertise score vector module 601, which may maintain a respective expertise score vector of expertise score vectors 602A-N for each developer across various teams in the organization, and the expertise score vector module 601 may determine an expertise score using a selected set of weighted metrics from an expertise score vector of expertise score vectors 602A-N in block 205 of method 200 of FIG. 2, and a work item from work item queue 606 may be assigned to a developer by work item management module 605 based on the determined expertise score in block 206 of method 200 of FIG. 2, [0031]-[0032], in block 205, respective weights may be applied to selected metric fields in the expertise score vector of the developer, and the selected weighted metrics may be combined to determine an expertise score corresponding to the developer, and in block 206, to assign a work item to the developer, an expertise score may be determined for each developer on a team corresponding to the software component);
track, by the processor, lines of code entered by a worker ([0013]-[0014], respective expertise score vectors maintained for each developer in an organization may include appropriate metrics regarding the developer, including a number of lines of code written using a technology and a number of lines of code written in a software component) and determine a plurality of parameters based on the tracked information ([0014], an expertise score may be determined by tracking the various metrics in the expertise score vector), wherein the lines of code are tracked by processing and analyzing raw information ([0014], tracking the various metrics in the expertise score vector may include, but is not limited to, a number of lines of code written using a technology and a number of lines of code written in a software component, [0051], expertise score vector 602N includes component mastery metrics 631A-N for each software component that the developer has contributed work to, and component mastery metrics 631A-N may include an amount of time required by the developer to produce a unit of contribution to the associated software component, e.g., the unit of contribution may be measured in lines of code (i.e., the lines of code are tracked by processing and analyzing raw information - measuring lines of code from the software written/contributed by the developer));
assign a weight to each of the parameters and the processor is configured to calculate the rank by multiplying, for each of the parameters, the weight by each of the parameters ([0014], an expertise score may be determined by tracking the various metrics in the expertise score vector, applying respective weights to the metrics that are relevant to the particular software component or skill, adding the weighted metrics to a subset vector, and calculating the magnitude of the subset vector, [0019], [0031], respective weights may be applied to selected metric fields in the expertise score vector of the developer, and the selected weighted metrics may be combined to determine an expertise score corresponding to the developer, wherein metrics in an expertise score vector may be weighted such that selected metrics may carry different weights in determining an expertise score of a developer, e.g., a metric may be multiplied by a smaller or larger weight);
evaluate work quality of the worker on a comparison of the [vector of metrics of other users] and determined rank ([0032], in block 206, a work item is assigned to the developer based on the expertise score by determining an expertise score for each developer on a team corresponding to the software component and assigning to a developer from the team based on the calculated expertise scores, e.g., a developer having a highest expertise score, Abstract, techniques for an expertise score vector for software component management include determining the expertise score comprises determining a magnitude of a vector comprising the weighted subset of the plurality of metrics and assigning the work item to the developer based on the expertise score, [0046], the expertise score vector module 601 may determine an expertise score using a selected set of weighted metrics from an expertise score vector of expertise score vectors 602A-N in block 205 of method 200 of FIG. 2, and a work item from work item queue 606 may be assigned to a developer by work item management module 605 based on the determined expertise score in block 206 of method 200 of FIG. 2).
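For illustration only, the weighted-vector scoring that Hicks describes in the cited paragraphs (applying respective weights to the relevant metrics, adding the weighted metrics to a subset vector, and calculating the magnitude of that vector) can be sketched as follows; the metric names, values, and weights below are hypothetical and do not appear in Hicks:

```python
import math

def expertise_score(metrics: dict, weights: dict) -> float:
    # Select only the metrics deemed relevant (those with an assigned weight),
    # multiply each by its weight to form the weighted subset vector,
    # then return the magnitude (Euclidean norm) of that vector.
    weighted = [metrics[name] * w for name, w in weights.items() if name in metrics]
    return math.sqrt(sum(v * v for v in weighted))

# Hypothetical metrics for one developer (cf. Hicks [0013]-[0014]):
dev = {"years_with_java": 5.0, "lines_in_component": 1200.0, "problem_records": 3.0}
# Hypothetical relevance weights for a particular work item:
w = {"years_with_java": 2.0, "lines_in_component": 0.01, "problem_records": -0.5}
score = expertise_score(dev, w)
```

Under this sketch, the developer with the highest resulting magnitude would be assigned the work item, consistent with block 206 of Hicks' method 200.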
While Hicks discloses all of the above, including create [vector of metrics] serving as a base for evaluation of work quality on a basis of the work information of the plurality of workers; …
evaluate work quality of the worker on a comparison of the [vector of metrics of other users] and determined rank (as above), Hicks does not expressly disclose the remaining elements of the following limitations, which, however, are taught by further teachings in Bonmassar.
Bonmassar teaches create a base model serving as a base for evaluation of work quality on a basis of the work information of the plurality of workers ([0121]-[0124], scores/ranks can be provided for each individual in a group as well as the group as a whole (e.g., an average score for the named individuals in a group can be provided), wherein a recruitment service request 210 for ranking/scoring of individuals can include groups such as group 218 of possible job candidates, group 220 of individuals who applied for a job in the last month, group 230 of individuals who have been offered a job, group 232 of individuals who have been hired, and group 228 of current employees of a company that perform a job for which the named individuals in group 228 are applying), wherein the base model is created in parallel with creation of a rank for each of the plurality of workers ([0121]-[0124], the recruitment service request 210 for ranking/scoring of individuals can include groups 218, 220, 230, and 232, scores/ranks can be provided for each individual in a group as well as the group as a whole (e.g., an average score for the named individuals in a group can be provided), and in response to a recruitment service request including a number of named individuals (210 or 214), scores/ranks can be returned to the requester in 212 and/or 216, and the system 12 can be configured to only return information on individuals whose score or rank is above the average of the comparison group (i.e., created in parallel - the individual scores/ranks as well as the average for the group are provided in response to the same request and both are returned for individuals whose scores/ranks are above the average for the group)); …
evaluate work quality of the worker on a basis based on a comparison of the base model and determined rank ([0123]-[0124], the scores/ranks can be used for various comparison purposes, e.g., the score of a named individual in the recent 220 group can be compared to the average score or rank of the hired individuals in group 232 or the average score or rank of the individuals that were offered jobs in group 230, wherein this comparison feature can be used as a filter, e.g., the system 12 can be configured to only return information on named individuals in a group whose score or rank is above the average of the named individuals of a comparison group, [0144], in 412, the system can perform any requested comparisons and scorings, such as a team scoring or individual to group comparisons); and
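As a minimal illustrative sketch of the comparison-and-filter behavior Bonmassar describes (returning only named individuals whose score is above the average of a comparison group, e.g., the hired group 232), with entirely hypothetical names and scores:

```python
def above_group_average(candidates: dict, comparison_scores: list) -> dict:
    # Average score of the comparison group (e.g., individuals already hired).
    avg = sum(comparison_scores) / len(comparison_scores)
    # Keep only the named individuals whose score exceeds that average.
    return {name: s for name, s in candidates.items() if s > avg}

applicants = {"A": 72.0, "B": 55.0, "C": 90.0}  # hypothetical applicant scores
hired = [60.0, 70.0, 80.0]                       # hypothetical hired-group scores
shortlist = above_group_average(applicants, hired)  # comparison average is 70.0
```

In this sketch, the group average plays the role of the "base model" and the filter implements the comparison of individual rank against it.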
display the evaluation on a graphical user interface ([0133], [0137], in response to a recruitment service request including a number of named individuals, such as 210 or 214, results which may include scores and/or ranks can be returned to the requester in 212 and/or 216, e.g., as in FIG. 4B, wherein an interface for outputting the information and example of a profile that can be returned is described in more detail in U.S. patent application Ser. No. 13/499,791, entitled RECRUITING SERVICE GRAPHICAL USER INTERFACE, [0144], in 416, the system can compile and send a response to the recruitment service request including the results, such as scores, e.g., a report can be generated and a link to the report can be sent to a recruiter).
Hicks and Bonmassar are analogous art because both address the problem of scoring the work of developers to determine their ability to perform tasks. At the time the invention was made, it would have been obvious to one of ordinary skill in the art to include in the system of Hicks the ability to create a base model serving as a base for evaluation of work quality on a basis of the work information of the plurality of workers, evaluate work quality of the worker on a basis based on a comparison of the base model and determined rank, and display the evaluation on a graphical user interface, as taught by Bonmassar, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the combination would produce the predictable results of creating a base model serving as a base for evaluation of work quality on a basis of the work information of the plurality of workers, evaluating work quality of the worker on a basis based on a comparison of the base model and determined rank, and displaying the evaluation on a graphical user interface, as claimed. Further, it would have been obvious to one of ordinary skill in the art to have modified Hicks with the aforementioned teachings of Bonmassar in order to produce the added benefit of better assisting management of information regarding job candidates and aiding in reaching out to the candidate. [0003], [0007].
Regarding claim 4, the combined teachings of Hicks and Bonmassar teach the quality management apparatus according to claim 1 (as above). Further, Hicks discloses wherein the processor is configured to calculate work quality of the worker ([0046], the expertise score vector module 601 may determine an expertise score using a selected set of weighted metrics from an expertise score vector of expertise score vectors 602A-N in block 205 of method 200 of FIG. 2, and a work item from work item queue 606 may be assigned to a developer by work item management module 605 based on the determined expertise score in block 206 of method 200 of FIG. 2) by comparing a rank based on the base model with a rank based on a model of the worker ([0032], in block 206, a work item is assigned to the developer based on the expertise score by determining an expertise score for each developer on a team corresponding to the software component and assigning the work item to a developer from the team based on the calculated expertise scores, e.g., a developer having a highest expertise score).
Regarding claim 6, this claim is substantially similar to claim 1, and is, therefore, rejected on the same basis as claim 1. While claim 6 is directed to a method executed by a processor, Hicks discloses a method, as claimed. [0003], [0071].
Regarding claim 7, this claim is substantially similar to claim 1, and is, therefore, rejected on the same basis as claim 1. While claim 7 is directed to a computer readable recording medium storing a program causing a computer to perform functions, Hicks discloses a computer readable recording medium, as claimed. [0003]-[0004], [0023], [0065]-[0066].
Claims 2, 5, & 8 are rejected under 35 U.S.C. 103 as being unpatentable over Hicks et al. (US 20210224064 A1), hereinafter Hicks, in view of Bonmassar (US 20130290207 A1), hereinafter Bonmassar, and in further view of Grant et al. (US 20210011712 A1), hereinafter Grant.
Regarding claim 2, the combined teachings of Hicks and Bonmassar teach the quality management apparatus according to claim 1 (as above). Further, while Hicks discloses all of the above and wherein the software development by the workers is performed ([0037]-[0038], deployed code of a particular software component was written by a developer and deployed to the field, [0042], software component management system 600 is in communication with software component code bases 610A-N, which each include computer code written by one or more developers on teams corresponding to various software components, [0044], new code is committed by a developer into any of software component code bases 610A-N), Hicks does not necessarily disclose that this is performed by terminals placed in remote environments; however, this remaining feature is taught by further teachings in Grant.
Grant teaches that the software development is performed by terminals placed in remote environments ([0069], code 226A may be downloaded over network 201A from remote system 201B, where similar code 201C is stored on a storage device 201D).
Hicks and Grant are analogous art because both address the problem of scoring the work of developers to determine their ability to perform tasks. At the time the invention was made, it would have been obvious to one of ordinary skill in the art to include in the system of Hicks the ability to perform software development by terminals placed in remote environments, as taught by Grant, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the combination would produce the predictable results of software development by the workers being performed by terminals placed in remote environments, as claimed. Further, it would have been obvious to one of ordinary skill in the art to have modified Hicks with the aforementioned teachings of Grant in order to produce the added benefit of ensuring only users with adequate experience are able to change source code. [0002].
Regarding claim 5, the combined teachings of Hicks and Bonmassar teach the quality management apparatus according to claim 1 (as above). Further, while Hicks discloses all of the above and wherein a plurality of parameters included in the model include a number of lines of code created per unit time ([0014], an expertise score vector may include, but not limited to, time spent using a technology (i.e., five years using Java, etc.), number of lines of code written using a technology, etc., [0051], expertise score vector 602 includes component mastery metrics 631A-N, component mastery metrics 631A-N may include an amount of time required by the developer to produce a unit of contribution to the associated software component, e.g., the unit of contribution may be measured in lines of code), a number of bugs per line ([0016], an expertise score vector may include a problem records metric that tracks a number of problem records, e.g., a developer with a higher number of problem records per line of committed code may have a lower problem records metric value than a developer having a lower number of problem records per line of code, [0051], component mastery metrics 631A-N may include a number of defects detected in code per unit of contribution (e.g., lines of code or number of tasks)), and a bug correction ... 
([0015], an expertise score vector may include a regression testing metric that quantifies how quickly a developer's committed code passes regression testing, wherein if a developer writes code that fails regression, the developer may then fix the issues and resubmit the code for an additional round of regression testing, e.g., for code that fails regression testing repeatedly, the developer may be adjusting the code just to pass regression, which may result in relatively low quality code, while committed code that passes regression testing with a relatively low number of testing iterations may indicate a higher level of expertise regarding the software component by the developer), Hicks does not necessarily disclose the remaining elements of the following limitation, which, however, are taught by further teachings in Grant.
Grant teaches parameters included in the model include a number of lines of code created per unit time, …, and a bug correction time ([0027]-[0029], a history score includes, as a history score component, a score corresponding to a size of a developer's contributions, e.g., counting a number of lines of source code, a score corresponding to a frequency of a developer's contributions to a project, within a defined time period, [0032], history score includes, as a history score component, a score corresponding to an ability of a developer to correct other developers' contributions to a project within a defined time, e.g., a developer who fixes more defects in a defined time period is more valuable to a project, and hence has a greater ability, than another developer who fixes fewer defects in the same time period).
Hicks and Grant are analogous art because both address the problem of scoring the work of developers to determine their ability to perform tasks. At the time the invention was made, it would have been obvious to one of ordinary skill in the art to include in the system of Hicks the ability for parameters included in the model to include a number of lines of code created per unit time and a bug correction time, as taught by Grant, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the combination would produce the predictable results of parameters included in the model including a number of lines of code created per unit time, a number of bugs per line, and a bug correction time, as claimed. Further, it would have been obvious to one of ordinary skill in the art to have modified Hicks with the aforementioned teachings of Grant in order to produce the added benefit of ensuring only users with adequate experience are able to change source code. [0002].
Regarding claim 8, the combined teachings of Hicks and Bonmassar teach the quality management apparatus according to claim 1 (as above). Further, while Hicks discloses all of the above and wherein the raw information further comprising a number of lines of code created per hour ([0014], an expertise score vector may include, but not limited to, time spent using a technology (i.e., five years using Java, etc.), number of lines of code written using a technology, etc., [0051], expertise score vector 602 includes component mastery metrics 631A-N, component mastery metrics 631A-N may include an amount of time required by the developer to produce a unit of contribution to the associated software component, e.g., the unit of contribution may be measured in lines of code), number of bugs occurred per line of code ([0016], an expertise score vector may include a problem records metric that tracks a number of problem records, e.g., a developer with a higher number of problem records per line of committed code may have a lower problem records metric value than a developer having a lower number of problem records per line of code, [0051], component mastery metrics 631A-N may include a number of defects detected in code per unit of contribution (e.g., lines of code or number of tasks)), and … correct bugs ([0015], an expertise score vector may include a regression testing metric that quantifies how quickly a developer's committed code passes regression testing, wherein if a developer writes code that fails regression, the developer may then fix the issues and resubmit the code for an additional round of regression testing, e.g., for code that fails regression testing repeatedly, the developer may be adjusting the code just to pass regression, which may result in relatively low quality code, while committed code that passes regression testing with a relatively low number of testing iterations may indicate a higher level of expertise regarding the software component by the
developer), Hicks does not necessarily disclose the remaining elements of the following limitation, which, however, are taught by further teachings in Grant.
Grant teaches wherein the raw information further comprising a number of lines of code created per hour, … and a time taken to correct bugs ([0027]-[0029], a history score includes, as a history score component, a score corresponding to a size of a developer's contributions, e.g., counting a number of lines of source code, a score corresponding to a frequency of a developer's contributions to a project, within a defined time period, [0032], history score includes, as a history score component, a score corresponding to an ability of a developer to correct other developers' contributions to a project within a defined time, e.g., a developer who fixes more defects in a defined time period is more valuable to a project, and hence has a greater ability, than another developer who fixes fewer defects in the same time period).
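Purely as an illustrative sketch of the claim-8 raw-information parameters discussed above (lines of code created per hour, bugs per line of code, and time taken to correct bugs), with hypothetical field names and sample values not drawn from either reference:

```python
def raw_parameters(lines_written: int, hours_worked: float,
                   bugs_found: int, bug_fix_hours: list) -> dict:
    # Derive the three claim-8 parameters from hypothetical raw tracking data:
    # productivity rate, defect density, and average bug-correction time.
    return {
        "lines_per_hour": lines_written / hours_worked,
        "bugs_per_line": bugs_found / lines_written,
        "avg_bug_fix_hours": sum(bug_fix_hours) / len(bug_fix_hours),
    }

params = raw_parameters(lines_written=400, hours_worked=20.0,
                        bugs_found=8, bug_fix_hours=[1.5, 2.5, 2.0])
```

Such derived values correspond to the kind of per-developer metrics that Hicks' expertise score vector and Grant's history score components track from raw contribution data.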
Hicks and Grant are analogous art because both address the problem of scoring the work of developers to determine their ability to perform tasks. At the time the invention was made, it would have been obvious to one of ordinary skill in the art to include in the system of Hicks the ability for the raw information to further comprise a number of lines of code created per hour and a time taken to correct bugs, as taught by Grant, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the combination would produce the predictable results of the raw information further comprising a number of lines of code created per hour, a number of bugs occurred per line of code, and a time taken to correct bugs, as claimed. Further, it would have been obvious to one of ordinary skill in the art to have modified Hicks with the aforementioned teachings of Grant in order to produce the added benefit of ensuring only users with adequate experience are able to change source code. [0002].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES A GUILIANO whose telephone number is (571)272-9859. The examiner can normally be reached Mon-Fri 10:00 am - 6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao Wu can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
CHARLES GUILIANO
Primary Examiner
Art Unit 3623
/CHARLES GUILIANO/Primary Examiner, Art Unit 3623