Prosecution Insights
Last updated: April 19, 2026
Application No. 18/327,587

METHODS AND SYSTEMS FOR MONITORING CONTRIBUTOR PERFORMANCE FOR SOURCE CODE PROGRAMMING PROJECTS

Status: Non-Final Office Action (§101, §103)
Filed: Jun 01, 2023
Examiner: VU, TUAN A
Art Unit: 2193
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Capital One Services LLC
OA Round: 4 (Non-Final)

Grant probability: 73% (favorable)
Expected OA rounds: 4-5
Expected time to grant: 3y 5m
Grant probability with interview: 95%

Examiner Intelligence

Career allow rate: 73% (718 granted / 980 resolved), +18.3% vs Tech Center average (above average)
Interview lift: +21.4% higher allowance rate among resolved cases with an interview
Typical timeline: 3y 5m average prosecution; 31 applications currently pending
Career history: 1,011 total applications across all art units

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 980 resolved cases.
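The headline figures above are internally consistent and can be rechecked from the raw counts. A minimal sketch (variable names are illustrative; the Tech Center average is derived from the stated delta, not reported directly):

```python
# Recheck the examiner statistics shown above (names are illustrative).
granted, resolved = 718, 980

career_allow_rate = granted / resolved * 100   # percent of resolved cases granted
assert round(career_allow_rate) == 73          # matches "73% career allow rate"

# The "+18.3% vs TC avg" delta implies a Tech Center average of roughly:
implied_tc_avg = career_allow_rate - 18.3
print(f"career: {career_allow_rate:.1f}%, implied TC average: {implied_tc_avg:.1f}%")
```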

Office Action

Rejections: §101, §103
DETAILED ACTION

This action is responsive to the Applicant's response filed 12/09/25. As indicated in that response, claims 1-2 and 12 have been amended. Claims 1-20 are resubmitted and remain pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claim 2 is directed to an abstract idea and does not include additional elements sufficient to amount to significantly more than the judicial exception, for the reasons set out in the following two-step eligibility analysis.

Step 1: Claim 2 recites a method comprising a series of steps, and therefore falls within a statutory category of subject matter.

Step 2, Prong A: Claim 2 recites steps of determining development scores, including determining one of the contributors and filtering the plurality of scores prior to presenting them. As broadly interpreted, these limitations encompass functions that can be carried out in the human mind through observation, evaluation, judgment, and/or opinion forming, such as filtering, reselecting, and eliminating data; they therefore fall within the "Mental Processes" grouping of abstract ideas established as a type of judicial exception under MPEP 2106.04(a).
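For orientation, the "determining" and "filtering" steps that the rejection characterizes as a mental process amount, in software terms, to something like the following sketch (all names, metrics, and weights are hypothetical illustrations, not taken from the application):

```python
# Hypothetical illustration of the claimed steps: apply a plurality of weights
# to each contributor's metrics to determine development scores, then filter,
# prior to presenting, to the subset the requesting user may access.
def determine_scores(metrics_by_contributor, weights):
    """Weighted development score per contributor (illustrative formula)."""
    return {
        name: sum(weights[m] * value for m, value in metrics.items())
        for name, metrics in metrics_by_contributor.items()
    }

def filter_scores(scores, authorized_contributors):
    """Keep only scores of contributors the user is authorized to see."""
    return {name: s for name, s in scores.items() if name in authorized_contributors}

metrics = {"alice": {"commits": 40, "reviews": 10},
           "bob":   {"commits": 25, "reviews": 30}}
weights = {"commits": 0.6, "reviews": 0.4}

scores = determine_scores(metrics, weights)
visible = filter_scores(scores, authorized_contributors={"alice"})
```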
Step 2, Prong B: As to whether the additional elements integrate the identified mental process into a practical application: elements such as contributions from "source code projects," "scores" for software developers "stored in database" in relation to a "plurality of weights," "receiving user input" (to request database access), and "retrieving user profile ... and authorization" (in response to the receiving) are either extra-solution activity not instrumental to the "determining" steps or pre-solution activity preceding the mental process identified as the abstract idea; they therefore do not amount to significant activity that would integrate the judicial exception into a practical application. Further, "generating for display a graphical representation" (as an additional element) at best amounts to insignificant post-solution activity that fails to render the mental processes of the claim significantly more than a judicial exception; nor can this additional element integrate the steps of "determining" and "filtering" into a practical application. Thus, under Prong B, the additional elements fail to make the process steps of method claim 2 amount to significantly more than the judicial exception. Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claim is therefore directed to the judicial exception. See MPEP 2106.05(g).

Claims 1 and 12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1 and 12 are directed to an abstract idea.
The claims do not include additional elements sufficient to amount to significantly more than the judicial exception, per the following two-step analysis.

Step 1: Claim 1 is a system claim and claim 12 a medium claim; both satisfy the statutory category requirement.

Step 2A: Claim 1 recites steps of determining development scores, in terms of determining one of the contributors and filtering the plurality of scores. As broadly interpreted, these limitations encompass functions carried out in the human mind through observation, evaluation, judgment, reorganizing, and/or opinion forming, and are therefore construed as falling within the mental-processes grouping of judicial exceptions recognized by the courts. See MPEP 2106.04(a). Claim 12 likewise recites determining a subset of development scores, in terms of determining one of the contributors and filtering the plurality of scores, and these limitations are similarly construed as functions carried out in the human mind, characterizing an abstract-idea judicial exception in the form of a mental process. See MPEP 2106.04(a).

Step 2B: Claim 1 includes additional elements such as a "version control system," "software development tools," "source code" contributions, "store source code" (for contributors), "cloud-based control circuitry," "receive" (user input and user profile), "I/O circuitry," and "generate a display" (a graphical representation of a subset of scores); but these elements, when interpreted in conjunction with the "determining" steps, are insufficient to amount to significantly more than the judicial exception identified under Prong A.
The "display" and "circuitry," for example, amount to post-solution activity relative to the mental process and at best can be construed as displaying, storing, transmitting, or notifying via a generic computer; they cannot integrate the judicial exception into a practical application. Moreover, information stored in or obtained from a store, database, or user input constitutes insignificant activity: at best, the storing and retrieving (of stored information from a source or interactive input) are pre-solution elements relative to the "determining" steps that form the crux of the judicial exception, and they cannot render the mental process significantly more than the exception. Claim 12 similarly recites additional elements such as a "database storing" scores, a "software version control system," the "contributors" therefor, "scores with weighted metrics," "receiving" a user input/request, and "retrieving" a user profile. Interpreted in conjunction with the "determining" steps, these are insignificant elements or activities not instrumental to the act of "determining," at best pre-solution activity for the mental process; thus these additional elements cannot amount to significantly more than the judicial exception identified under Step 2A. Further, the recited "one or more processors" and "display a graphical representation" together amount to a generic computer employed in post-solution activity (displaying, storing, transmitting, notifying), insignificant toward integrating the judicial exception into a practical application. Accordingly, under Step 2B, the additional elements do not integrate the recited judicial exception of claims 1 and 12 into a practical application.
Claims 1 and 12 are therefore directed to the judicial exception. See MPEP 2106.05(g).

Eligibility of dependent claims. Claim 3 (dependent on claim 2) recites "receiving" metrics, "applying" a weight to the metrics, and "calculating" a score; broadly interpreted together with the judicial exception of claim 2, these are mere pre-solution elements that cannot integrate the mental process of "determining" into a practical application and do not make the judicial exception amount to significantly more. Claim 4 recites "determining" a subset of metrics, which can be done in the human mind; the additional element of "display" (a representation) amounts to post-solution activity insignificant toward converting the mental process of "determining" into a practical application. Claim 5 recites different examples of "metrics"; building on the determining of claim 3, its additional elements are an insignificant addition that fails to make the judicial exception in the "determining" steps amount to significantly more. Claim 6 recites "receiving" source code, "detecting" a contribution, and "generating" metrics, which are pre-solution activities that do not make the abstract "determining" of claim 2 significantly more than a judicial exception. Claim 7's additional elements of "receiving" and "generating" (for display), construed as pre- and post-solution activity relative to the mental process of "determining," fail to make the abstract idea amount to significantly more than a judicial exception. Claim 8 recites "receiving" and "generating" (for display); for the same reasons as claim 7, these activities cannot turn the judicial exception of claim 2 into a practical application.
Claim 9, with additional elements of determining, filtering, and generating (for display), recites mental processes and post-solution activity that cannot make the judicial exception of claim 2 amount to significantly more. Claim 10 recites "receiving" and "storing" as pre-solution elements and "generating" (for display) as a post-solution element relative to the mental-process steps, and thus fails to cure the judicial exception of claim 2. Claim 11 recites "extrapolating" a software score, amounting to computer use to derive a metric as a pre-solution step that cannot cure the mental process of claim 2; nor does the "generating" (for display) integrate the mentally performed "determining" into a practical application. Claims 13-20 (dependent on claim 12) respectively replicate the limitations of claims 3-4 and 6-11 analyzed above, and are likewise insufficient to make claim 12 amount to significantly more than the identified judicial exception. In all, claims 1-20 are rejected as directed to a judicial exception of the abstract-idea type categorized as mental processes by the courts.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-10, and 12-19 are rejected under 35 U.S.C. § 103 as being unpatentable over Stevens, USPubN 2020/0005219 (herein Stevens), in view of Grant et al., USPubN 2021/0011712 (herein Grant), Gauger et al., USPubN 2014/0279694 (herein Gauger), and Tibrewala et al., USPubN 2021/0182767 (herein Tibrewala), further in view of Seto et al., USPubN 2015/0066869 (herein Seto), Kaulgud et al., USPubN 2011/0055799 (herein Kaulgud), and Richardson et al., USPubN 2018/0253297 (herein Richardson).

As per claim 1, Stevens discloses a software development version control system for monitoring contributions to software development version control systems (para 0066, 0070) for source code programming projects, comprising: cloud-based memory (para 0038-0039) configured to: store source code contributions (source code repository - para 0066; by developers - para 0018-0019; developers ... source code development and testing - para 0034) from a plurality of software development tools (para 0024, 0026-0027) for a first project (para 0023, 0059); and store (aggregate 330 - Fig. 3) a plurality of software development scores (scores for users, scores for a team, scores across an entire organization, scores across an industry - para 0059) for contributors (e.g., various users (i.e., developers) associated with a check-in - para 0038); cloud-based control circuitry (para 0033, 0038-0039) configured to: receive a first user input from a first user (receives ... information from developers - para 0061); and determine a subset of the plurality of software development scores (initial code check-in 410, previous check-ins 450, score values for the previous check-in 460 - Fig. 4) to which the first user has access (Note 1: scores identified for a day, week, or month highlight, or from past user check-ins performed per a given time interval - activities during the time interval ... applications accessed by the user ... a metric indicating a level of quality or productivity can be determined - para 0004; score value for the time interval - claim 1, pg. 12 - or from previous check-ins - Fig. 4 - reads on a subset of scores to which the current user has access); and cloud-based I/O circuitry (see above) configured to generate for display on a local display device (e.g., interface 155 of the administration system - para 0040-0041; para 0088; interface for displaying scores - scores indicative of check-ins performed ... the time intervals - claim 1, pg. 12; displaying score 470 - Fig. 4; development scores associated with various check-ins performed by the user ... visualizing development scores for one user - para 0071) the subset of the plurality of software development scores (see Note 1 above).

Stevens does not explicitly disclose determining the subset of developer scores in terms of cloud-based circuitry to retrieve a user profile for the first user in response to receiving the first user input, the user profile including a first authorization for accessing software development scores of the contributors; and determining, using an authorization level indicated by the first authorization, one or more contributors whose software development scores the first user has access to. A user profile, or registered credentials and associated personal identification, received at an authorization front end constitutes user information received with a user request, which the front end verifies against a corresponding back-end profile in order to grant or deny the requested access; this is a well-known practice of authenticating a requesting user, or validating the user's credentials, on the basis of registered information, system-stored personal data, or the user's profile. Accordingly, Tibrewala discloses a software engineering system operating as a service platform equipped with a performance scoring engine to provide quality metrics for requests by developers or contributors of the software (see Abstract, Fig. 4), where the service for determining a score based on the performance of, or activities completed by, one user is driven by awarded badges/points associated with the user profile (Fig. 3; para 0013) as well as authentication such as (personal) login information of the user (para 0032), so that a score is determined for an activity of the user based on comparison to other developers' completion of the same activity (para 0012), and the posted award badge or score of the user can be transmitted to other users or external platforms of the organization (posting the badge, icon, score or reward - para 0017). Hence, use of the profile in conjunction with the user's login information, so that the user's score can be calculated and posted as a service to the user and for view by other users, entails processing a user request by retrieving user profile information and processing login credentials associated with the profile to provide the user with a posted score as an indication of a reward (para 0016). Further, Gauger discloses an information reporting system supporting analysts or users in communicating a report on a project dashboard based on preferences made by an authorized user (para 0036), the report including metrics on overall project readiness or the state of the ongoing project (para 0034), thereby enabling users to make changes to the project (para 0038), including additional project monitoring preferences upon a user login stage provided on the dashboard (para 0039), the reported metrics including an overall score of the project under management, where annotation can be allowed on the graphically represented score (para 0034), and where contributors to the project can complete their profiles and set preferences based thereon (para 0039).
Hence, processing the authorization of a contributing user by a project reporting system in accordance with the user profile, and processing the user's credentials via a login to provide the user with a posted report on metrics or a score indicative of project readiness or state, is recognized. Grant discloses a change control system with evaluation of an existing developer's profile (para 0033) using restrictions established on the basis of the developer's role or experience/expertise level (para 0025), based on pre-stored values in the profile, in order to grant the developer access to his/her own scores (para 0024) or modification rights over the score (para 0026), where the established score of a developer is based on his/her conformity to (or the quality of) a source code change (para 0036), the score being indicative of the overall project (para 0037) and normalized as a weighted average (para 0039). Hence, processing a developer's profile with regard to a pre-established role and/or expertise level, on the basis of which a type of restriction applies to permit the user to access scores on his/her own development activity or the quality of a source code change, entails determining a subset of software development scores to which the first user has access, based on an authorization or restriction derived from the initial developer profile information. An authorization check applied to the credentials of a developer, in accordance with a request for metrics on the reliability or readiness score of a module being developed, is shown in Seto's tracer/DB server platform (Fig. 1-2); that is, popularity, robustness, and reliability measures (reliability score - para 0105; summarized ratings ... of the module's reliability, popularity ... metric derived from usage and performance, robustness ... of the module - para 0098) regarding the release and performance of developed modules (para 0100-0102) can be presented for view on a UI as scores in response to the authorized request of a developer/client upon proper checks (para 0095): e.g., either in more detail afforded to the very developer and his credentials, or otherwise in less detail to other users (para 0022; more detailed than ... data available to ... developers or the general public - para 0061), with the presentation of development metrics under control of a database platform by which application-specific data is afforded only to authorized users (para 0062), i.e., DB statistics returned on the basis of an authorized request (claim 1, pg. 10), via an access portal with an authentication subsystem to validate user account and login (para 0094) to determine whether the specific data or metrics (para 0091) can be made ready for view, manipulation, or publication in the DB (para 0160).
Therefore, a developer with authorship credentials allowing a view of more detailed metrics of his work, versus other authorized public users merely allowed a less detailed statistical view, entails that an authorization level indicated by the first authorization, for access to a DB of metrics, enables a given level of view (a less detailed view) of statistics or development scores pertinent to one or more contributors, on the basis of an access right associated with the first authorization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the processing of user input and profile for access to a score report in Stevens so that the processing of user input is part of an initial authorization process (a login stage, as in Gauger and Seto) using information retrieved from the user's pre-recorded profile (as set forth in Tibrewala and Grant), thereby enabling a front-end authorization portal to the back-end repository to determine a subset of the plurality of software development scores (as in Tibrewala, Seto, and Grant, or metrics including an overall score, as in Gauger) to which the first user has access (as per Seto, where other public users, upon a first authentication, can be presented less detailed metrics from one or more developers, based on this first authorization of the public users). This is because project data stored in a repository, works or source code produced by one or more corresponding developers, and the information attached therewith are to be protected against inadvertent access or modification by external entities or sources not registered or authorized therefor; and using security protection to correlate proprietary, personal information or pre-registered credentials (recorded at the time of the user's initial profiling) with input received from the user at login would enable the identity of the user requesting access to project/development data to be validated against the pre-registered unique values or credentials of his/her profile (by a front-end login authority/portal that intercepts requests and processes user inputs) in order to grant access and provide the requested service, including filtering of credentials to distinguish the actual author of the software from public users, as in Seto. In this way, only the entity registered as author or owner of the set of data/development work is provided a detailed view of development metrics, and may reconfigure, modify, or resubmit the presented data for further publishing (as shown in Seto), while non-developer users registered as public are given only a less detailed view of development information or statistics. This dual mode of view affords, on the one hand, the larger class of public or generic users access to software development information presented on a more limited scale, and, on the other hand, the more specific class of users authenticated as software authors/developers full-scale access to and presentation of their specific work and performance metrics, which they may further modify and select portions of for public storage, as in Seto.

Nor does Stevens explicitly disclose filtering, in accordance with the authorization level of the first authorization and before presenting the plurality of software development scores to the first user, the plurality of software development scores such that the subset of the plurality of software development scores comprises only software development scores of the one or more contributors whose software development scores the first user has access to. Tibrewala discloses a software engineering scoring platform including provider interfaces (para 0010, 0013) affording a user (or manager - para 0054) access to each developer profile (e.g., Fig. 2) and comparative evaluation (para 0012, 0018, 0035; claim 10, pg. 14) of the performance of other developers (or multiple participants on the same work, task, or project activity), in order to distribute point scores or implement reward points based on the degree of task achieved (para 0012), using scoring rules (para 0013-0014, 0053; claims 1-2, pg. 13) such as lines of code, deadlines, and conciseness (para 0012) to award points via addition or subtraction (para 0025; para 0034) as part of providing metrics for the portion of work as quantifiable measurements of the degree of completion or achievement (para 0014), the UI enabling manager access (para 0044-0045) and review of other developers' work so as to recognize and manage their respective contributions to a goal or project in assigning the proper score points (para 0016). Hence, determination, based on the authorized access of a user/manager, of one or more contributors whose software development scores the management user has access to is recognized; and the use of rules (lines of code, deadlines, code conciseness) to properly allocate score/reward metrics (adding, removing - see para 0012; manager ... remove points, assign additional points - para 0016) for the respective contributions of other developers entails a form of filtering or refining of one or more developers' scores using rules. A control system for authorizing and differentiating user profiles in support of a database service, whereby developer statistics can be administered, stored, and made accessible for further modification, publication, and view by both the authoring developer and other common users, is shown in Seto's tracer/database server platform (Fig. 1-2; para 0085-0092), according to which a first user, per an authorized request, can be given access for view (e.g., para 0091-0093) to DB-maintained metrics and scores (e.g., software development scores - para 0098, 0105) belonging to one or more contributors whose development data the first user has access to.
That is, Seto discloses filtering at an authentication portal (para 0094-0095) of the tracer/DB server platform, whereby its authorization analytics differentiate specific software development authors from other, public users and determine (a) that a less detailed level of information (a less detailed view for other users - para 0022; application-specific data, which may be more detailed than ... data available to ... the general public - para 0061; access to application-specific data may be limited to authorized users - para 0062) on the software development metrics/scores of one or more contributors be given to that first user (a public or common user) whose access is authorized, or (b) that a more detailed view of the one developer's scores be granted to the actual developer of the software himself, as a second user different from the first (public) user. That is, filtering at a front-end portal, in accordance with analysis of the credentials, authentication level, or first authorization of one of the public users (prior to actual display of development scores), to the effect of distinguishing which software development scores, or which subset of the software development scores belonging to the one or more contributors, the holder of the first authorization has access to, is recognized.

Therefore, based on the distinction between classes of users in Stevens (e.g., classes corresponding to different types of users - para 0061), it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement criteria (para 0072-0073), as part of evaluating the metrics or quality of code for proper ingestion into a developer source code repository, such that the management of user access to quality metrics of software contributors follows settings established on user category and credentials. The effect is to distinguish and verify the credentials/role of a user endowed with a specific (first) authorization, determining the level of information made accessible to that first user: e.g., a complete detailed view, or a mere subset of scores pertinent to one or more contributors made available to that user's category. The distinction is carried out by permitting only so much of the one or more contributors' information to the first user, where the evaluation of the extent of scores or metrics to be provided is driven by a filtering that separates permission levels based on users' roles and credentials: (i) as set forth in Tibrewala, by which a manager role is authorized to re-allocate rewards or re-assign the performance metrics of other developers, or (ii) by distinguishing the profile/credentials of the actual author of the software work from the credentials of a non-developer, generic role, such that the generic user can access only a more restricted set of contributor information, whereas a fully detailed set of metrics is made available to the actual software owner/developer. This is because rule-based and policy-based management software or tools provided with a software development platform (e.g., a SaaS), coupled with a service for divulging software development information as set forth above, in terms of redistributing points/awards for developers and serving software performance statistics, would necessitate a front-end portal for filtering the credentials associated with incoming requests for the stored information, as well as an administrative layer to revise development contributors' metrics and adjust the redistribution of performance awards for better re-assessment of the actual quality of contributions, consolidating pertinent reliability metrics based on that assessment into the development repository. A SaaS infrastructure and repository service as set forth above, when equipped with a filtering portal to distinguish credentials and permit both developers and the public to access information indicative of developer/engineer performance quality (a score on the reliability of their work in generating software code), would enable (a) developers, in their ownership context, to revise their work as a function of the state and scale of the quality statistics returned from consulting the repository/database, (b) project managers to readjust rewards on the basis of individual quality scores obtained from the repository, and (c) non-developer (but registered) users to be apprised of a measure of the reliability of one or more software contributors by consulting the repository, thereby assisting these generic users in making decisions when selecting a particular version of software or application provided as prestored assets by the SaaS aspect of the infrastructure, even though, in accordance with the filtering rules of the service portal, only a partial, more restricted set of software contributor data (quality metrics, as shown in Seto) commensurate with the user's permission level is made accessible for view.
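The authorization-gated filtering that the combination is said to teach (full detail for the authoring developer, a restricted subset for public users) can be pictured with a small sketch. All role names, fields, and data here are hypothetical illustrations, not drawn from any of the cited references:

```python
# Hypothetical sketch of permission-level filtering before display:
# a developer sees full metrics for their own work, while a public user
# sees all contributors but only a restricted subset of fields,
# mirroring the dual mode of view described above.
SCORES = {
    "alice": {"reliability": 0.91, "robustness": 0.84, "popularity": 0.65},
    "bob":   {"reliability": 0.78, "robustness": 0.70, "popularity": 0.88},
}
PUBLIC_FIELDS = {"popularity"}  # illustrative restriction for public users

def view_scores(requesting_user, role):
    if role == "developer":
        # Full-detail view, limited to the requesting developer's own work.
        return {requesting_user: SCORES[requesting_user]}
    # Less detailed view for authenticated public users: every contributor,
    # but only the publicly viewable subset of fields.
    return {
        name: {f: v for f, v in metrics.items() if f in PUBLIC_FIELDS}
        for name, metrics in SCORES.items()
    }

print(view_scores("alice", "developer"))
print(view_scores("guest", "public"))
```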
Nor does Stevens explicitly disclose cloud-based I/O circuitry to generate for display, in response to filtering the plurality of software development scores, a graphical representation of a qualitative comparison of the subset of the plurality of software development scores. Stevens discloses the use of a time-period-based histogram of a development score (for one user) over an interval to determine how the user is performing compared to the user's aggregate development score value (para 0062), as part of a reporting tool displaying information relevant to the progress of a project or the quality of code being formed, in terms of metrics represented in histograms (para 0023; Fig. 5) or charts (para 0041, 0079) at a front-end interface, to establish comparative data relationships during certain productivity periods (para 0025): a form of visualization of data or metrics that affords the viewer an assessment or perception of evolutionary change or trend (Fig. 6) or a time-based differential (para 0070), to comparatively evaluate performance between teams over periods (para 0084; Fig. 9; visual representation of comparison of productivity - para 0014), while also permitting user update actions (update chart 470 - Fig. 4), the adjusting being responsive to comparing one score to another score achieved in another time interval (para 0005). Hence, a graphical histogram as a qualitative comparison structure (Stevens: para 0023, 0062), for a qualitative assessment of a given short-term metric compared to an aggregate representation thereof, either discloses a qualitative comparison or would have rendered it obvious.
Presenting information such as metrics in a display or graphical format that affords qualitative comparison between the numbers or scores presented is shown in Kaulgud, where a reporting tool extracts data based on specific requests, whereby quality metrics or composite scores are returned based on user inputs via a display (para 0050) to enable analysis of a trend or forecasting of development information from a repository. The reporting is provided for personal benchmarking by the user, with quality metrics and scores being compared against average team scores and the comparison being visualized in a percentile or similar presentation mode (para 0049), the scores being accompanied by a fail/success ratio reflective of quality metrics that are normalized to allow appropriate comparison and benchmarking of personal metrics (para 0049) for a given time interval. Hence, a local display effecting a graphical representation of a qualitative comparison of the subset of the plurality of software development scores is recognized. Richardson likewise discloses a user interface of an online reviewing system (Fig. 2B) that displays a graph overlaying author code trend lines of participants, facilitating comparison of scores by one or more participants, whereby a developer can identify an improvement trend or switch to another trend of author score among the various trends for comparison purposes (para 0027), the online reporting enabling the user to monitor the ups and downs of a quality score within a trend or to be presented with a differential score between a current and a past score (para 0047). Hence, a local display effecting a graphical representation of a qualitative comparison of the subset of the plurality of software development scores is again recognized. 
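The personal-benchmarking comparison attributed to Kaulgud (an individual score compared against a team average and visualized in percentile mode) can be sketched in a few lines of Python. This is a minimal illustration only; the function name `benchmark` and the sample score values are assumptions introduced for clarity, not taken from the cited references:

```python
# Minimal sketch of percentile-style benchmarking against a team average.
# All names and sample values here are hypothetical illustrations.

def benchmark(score, team_scores):
    """Return (percentile, delta vs. team mean) for one contributor's score."""
    at_or_below = sum(1 for s in team_scores if s <= score)
    percentile = 100.0 * at_or_below / len(team_scores)
    delta_vs_mean = score - sum(team_scores) / len(team_scores)
    return percentile, delta_vs_mean

team = [55.0, 62.0, 70.0, 87.5, 91.0]   # team development scores
pct, delta = benchmark(87.5, team)       # pct -> 80.0 (4 of 5 at or below)
```

A reporting tool of the kind the references describe would render `pct` and `delta` graphically (e.g., as a bar against the team mean) rather than returning raw numbers.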
Based on the capability of graphical presentation, in chart or histogram format, in Stevens for enabling performance metrics or scores to be correlated between various time intervals or team contexts, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the score reporting or posting in Stevens so that the performance information is reported or posted via a local display device as a graphical representation of a qualitative comparison of the subset of the plurality of software development scores, as set forth above in Kaulgud and Richardson. Graphical representation in a chart, graph, or histogram format, as a way to report metrics or visualize scores indicative of the overall progress of a project or the quality of software development or source code generated by contributors to the project, would enable variation in a measure of project progress or of individual or overall performance quality, as well as trends in such quality or performance measures/metrics, to be perceived along a time frame or a historical or periodic context. As such, comparison between the ups and downs of metrics observed from this representation would enable a viewer, project manager, or developer to analyze a negative/positive pattern of certain significance, or a quantitative deviation or similarity between the represented values or metrics reported over a given time frame, i.e., the analysis is achieved by comparing a metric or score at a given time point on the graphical representation with metrics (or scores) from diverse contributors of code or different time periods. Thereby a measure of quality degradation or improvement can be discerned, and a corrective or adjusting action made or put into effect by the user; e.g. 
by a developer reconsidering ways to improve quality in forming source code, or a manager altering the configuration of a test or enlisting appropriate teams or project resources to rectify a quality degradation of the SW development or process build. As per claim 2, Stevens discloses a method for monitoring contributions to software development version control systems for source code programming projects, comprising: storing, in a database, a plurality of software development scores (step 420 - Fig. 4) for contributors (by developers - para 0018-0019; developers ... source code development and testing - para 0034) to a software development version control system (para 0066, 0070); receiving a first user input from a first user requesting access to the database; in response to receiving the first user input, retrieving a user profile for the first user, wherein the user profile includes a first authorization for accessing software development scores of the contributors; determining a subset of the plurality of software development scores to which the first user has access based on the first authorization, wherein determining the subset comprises: determining, using an authorization level indicated by the first authorization, one or more contributors whose software development scores the first user has access to; and filtering, in accordance with the authorization level and before presenting the plurality of software development scores to the first user, the plurality of software development scores such that the subset of the plurality of software development scores comprises only software development scores of the one or more contributors whose software development scores the first user has access to; and generating for display, on a local device and in response to filtering the plurality of software development scores, a graphical representation of a qualitative comparison of the subset of the plurality of software development scores. 
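The determining/filtering limitation recited in claim 2 can be illustrated with a minimal Python sketch. The permission model, the names `AUTH_SCOPES` and `filter_scores`, and the sample data are hypothetical assumptions introduced for clarity; they are not drawn from the claims or the cited references:

```python
# Hypothetical sketch of authorization-level filtering of development scores.
# The permission model and all names here are illustrative assumptions.

SCORES = {"alice": 87.5, "bob": 62.0, "carol": 91.2, "dave": 55.4}

# Each authorization level maps to the set of contributors whose scores
# a requesting user at that level is permitted to view.
AUTH_SCOPES = {
    "manager": {"alice", "bob", "carol", "dave"},  # full visibility
    "developer": {"alice"},                        # own score only
    "public": set(),                               # no individual scores
}

def filter_scores(scores, authorization_level):
    """Return only the subset of scores the requesting user has access to."""
    allowed = AUTH_SCOPES.get(authorization_level, set())
    return {name: s for name, s in scores.items() if name in allowed}

manager_view = filter_scores(SCORES, "manager")      # all four scores
developer_view = filter_scores(SCORES, "developer")  # {"alice": 87.5}
```

The point of the sketch is only that the filtering precedes presentation: a display step downstream would receive `developer_view`, never the full `SCORES` mapping.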
(all of which have been addressed in claim 1) Stevens does not explicitly disclose wherein each software development score of the plurality of software development scores is based on a plurality of weighted metrics. Stevens discloses associating weights from activities and unrelated activities to determine a weighted aggregate value in computing a score pertinent to a time interval (para 0036; aggregating development scores for various check-ins ... associated with the user - para 0022). Setting weights in conjunction with computing a score for a developer is shown in Tibrewala (para 0025, 0034-0035; tasks may be scored based on one or more ... weights - para 0042; rules and factors may correspond to weights - para 0053). Kaulgud discloses generation of various composite quality scores (developer-centric assessment ... determine individual quality metrics/scores - para 0053; score is determined on a daily basis for each developer - para 0040) calculated for each developer from a plurality of weighted package-level or sub-scores (para 0042), or a sum or average of a number of violations or individual metrics (weighted sum of the violations, weighted average of PQS1, to the DQS score - para 0040; number of ... violations associated with the developer ... this number is weighted by the number of days remaining - para 0049). 
Therefore, based on the use of an aggregate weighted value to compute a score in Stevens, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the posting or reporting of developers' scores in the Stevens system so that each software development score of the plurality of software development scores, as reported via a display interface, is based on a plurality of weighted metrics, as shown in Kaulgud, or on the setting of rules as weights, as in Tibrewala. A metric or score indicative of quality in developing code is subject to the influence of the various factors or conditions under which the quality assessment is driven, and accounting for the effect of these factors/conditions, in the form of a quantitative measure or weight, as part of calculating each quality score from the totality of weighted sub-scores as set forth above, would enhance the reliability and trustworthiness of the score as calculated and presented to the viewer or developer. A deviation or negative pattern from a standard performance measure in SW development by project developers would then encompass a) a normalized measure of impact from diverse influencing factors and/or b) a relative measure of defect or improvement that characterizes the quality of an individual or group-based SW development process in a given time context, in that a qualitative report of it will carry the weight of all measured conditions or factors for the specific time context; e.g., the developer can thereby be guided in considering a corrective action to mitigate the effect of one or another influencing factor with respect to the time period in which the defect is discerned from the reporting tool. 
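The weighted-metric scoring attributed to Stevens, Tibrewala, and Kaulgud (a development score computed as a weighted aggregate of sub-metrics) can be sketched as follows. The metric names and weight values below are invented for illustration only and are not taken from any of the cited references:

```python
# Hypothetical sketch: a development score as a weighted sum of sub-metrics.
# Metric names and weight values are illustrative assumptions only.

METRIC_WEIGHTS = {
    "checkin_quality": 0.5,  # e.g. penalty-adjusted check-in score
    "rework_ratio": 0.3,     # e.g. share of lines NOT later reworked
    "review_outcome": 0.2,   # e.g. normalized reviewer feedback
}

def development_score(metrics, weights=METRIC_WEIGHTS):
    """Combine normalized sub-metrics (0..1) into one weighted score (0..100)."""
    total = sum(weights[name] * metrics[name] for name in weights)
    return round(100 * total, 1)

score = development_score(
    {"checkin_quality": 0.9, "rework_ratio": 0.7, "review_outcome": 0.8}
)  # 0.5*0.9 + 0.3*0.7 + 0.2*0.8 = 0.82 -> 82.0
```

Normalizing each sub-metric before weighting is what makes the resulting scores comparable across developers and time periods, which is the benchmarking property the rejection relies on.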
As per claim 3, Stevens does not explicitly disclose method of claim 2, further comprising: receiving a first plurality of quantitative software development metrics; applying a respective weight to each of the first plurality of quantitative software development metrics to generate the plurality of weighted metrics; and calculating a first software development score of the plurality of software development scores based on the plurality of weighted metrics. However, applying weights given from quantitative development metrics to provide weighted sub-metrics or sub-scores in the calculation of a SW development score, such as a composite metric derived from individual weighted metrics, has been addressed as obvious with rationale D in claim 2; hence applying a respective weight to each of the first plurality of quantitative software development metrics to generate the plurality of weighted metrics, in order to calculate a first SW development score of the plurality of software development scores based on the plurality of weighted metrics, would have been obvious for the same reasons set forth with said rationale. As per claim 4, Stevens discloses method of claim 3, further comprising: determining a subset of the first plurality of quantitative software development metrics corresponding to a first contribution category; and generating for display a graphical representation of a qualitative assessment of the subset of the first plurality of quantitative software development metrics corresponding to the first contribution category. In the system by Stevens, a developer contribution for which a metric can be representative includes one or more categories - referred to herein as (N) - of quality assessment; that being a check-in type (para 0066-0067; Fig. 4), a measure of defective check-ins (para 0070), a level of productive activity of the user (para 0049) or a source code development category to be assessed within an activity/productivity period (para 0033-0036), or time spent as a metric (time spent in text editor along with lines of code - para 0064) to determine a ratio as a performance goal. Based on the effect of aggregating a score by applying weights to individualized metrics and aggregating a composite score (as per the rationale of claim 3) using the weighted contribution of plural sub-metrics, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement determination of a subset of the first plurality of quantitative software development metrics for use with the calculation of the weighted aggregated score, so that a graphical representation of a qualitative assessment of the subset of the first plurality of quantitative software development metrics is achieved corresponding to a first contribution category among the other categories set forth in (N); because awareness of one or more contributive categories of development activity towards forming the overall qualitative score of a developer or group thereof would enable a given developer or project manager to form an idea about the extent of the activity to improve, or the scope of his/her contribution associated therewith, so as to make adjustments thereto in order to help mitigate a degradation or negative deviation caused thereby, so that any remedial action taken would be able to remedy the deficient representation of the quality of the development or work being assessed and reported, e.g. 
a remedying action selected to diminish the size of defects and errors affecting the quality of the developer's check-ins or source code writing; to induce selective adjustments to the time spent and amount of effort allocated for a period of time, or to the length of productivity periods, without degrading the overall throughput of a developer's contribution; or to readjust the distribution of tasks (or resources) to a group towards raising group efficiency, attaining more distinct progress by the project effort while reducing the percentage of technical debt or counterproductive cost caused by the group effort. As per claim 6, Stevens discloses method of claim 3, further comprising: receiving a first contribution of source code from a first software development tool (check-in 410 - Fig. 4, para 0004-0005, 0018); detecting an issue (defects, rework ... more than a threshold amount - para 0020; crash - para 0070) in the first contribution; and generating the first plurality of quantitative software development metrics (determines ... a development score representing a quality of code check-in - para 0021) based on the issue (e.g. flag indicating a type of ... check-in or whether the code check-in was ... rework of a previous added code - para 0021; decreases the score ... based on an amount of rework done - para 0022). As per claim 7, Stevens discloses method of claim 2, further comprising: receiving a second software development score (step 420 - Fig. 4) for a second user (Note2: control system receiving check-ins from developers or users - para 0018; determine a metric for a set of users ... 
different types of users - para 0061 - and register a score for each check-in instance reads on receiving a score for each user respective to his/her check-in, including that of a first or a second user) to the software development version control system (refer to claim 2), wherein the second software development score is based on a second plurality of weighted metrics (refer to rationale D of claim 2); determining the first user has access (lines provided previously by another user, lines ... previously checked-in by another user - para 0067-0068; indicated manually by another user - para 0020; action trigger sends an alert ... informing ... another user of the level of productive activity - para 0049) to the second software development score based on the first authorization (refer to rationale B of Claim 1 and user having access to other contributors score); and generating for display a graphical representation of (refer to claim 2; displaying score 470 - Fig. 4) the second software development score. As per claim 8, Stevens discloses method of claim 3, further comprising: receiving a second user input (refer to Note2 from above) and in response to the second user input, generating for display a list of the first plurality of quantitative software development metrics (development scores associated with various check-ins performed by the user ... visualizing development scores for one user - para 0071; determine a metric for a set of users ... different types of users - para 0061; histogram representing development scores for one user ... during various time intervals - para 0023; see histogram for Diane Smith - Fig. 5-6). 
As per claim 9, Stevens discloses method of claim 2, further comprising: determining a contributor category (refer to rationale of claim 4; classes corresponding to different types of users - para 0061) for the first user; filtering the subset of the plurality of software development scores for software development scores corresponding to contributors (refer to rationale B of claim 1) in the contributor category; and generating for display a graphical representation of a qualitative comparison (e.g. how they compare with the group, development score over an interval to determine how the user is performing compared to the ... aggregate development score value - para 0062) of a first software development score and the subset of (refer to claim 1) the plurality of software development scores. As per claim 10, Stevens discloses method of claim 2, further comprising: receiving an updated first software development score (score for a user ... based on an amount of code changes made by the user - para 0067) for a contributor; storing, in the database, the updated first software development score (step 430, 440, 450, 460 - Fig. 4; accordingly, the adjusted development score ... is decreased responsive to any rework ... directly proportionate to the number of lines ... reworked - para 0069); and generating for display a graphical representation of a relationship (para 0071; update chart displaying score values 470 - Fig. 4) of a first software development score to the updated first software development score. 
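Claim 10's receive/store/compare flow (store an updated score and display its relationship to the prior score) can be sketched as below. The helper name `update_score` and the sample data are hypothetical assumptions for illustration:

```python
# Hypothetical sketch of claim 10's flow: store an updated development score
# and derive the old-vs-new relationship (delta) for graphical display.

def update_score(db, contributor, new_score):
    """Persist the updated score; return (old, new, delta) for charting."""
    old_score = db.get(contributor)
    db[contributor] = new_score
    delta = None if old_score is None else round(new_score - old_score, 2)
    return old_score, new_score, delta

db = {"diane": 74.0}                               # prior stored score
old, new, delta = update_score(db, "diane", 68.5)  # decreased after rework
```

A negative `delta` here would correspond to Stevens's adjusted score being decreased in proportion to reworked lines (para 0069); a chart like Stevens's update chart 470 would plot both values rather than the raw tuple.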
As per claim 12, Stevens discloses a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause operations comprising: storing, in a database, a plurality of software development scores for contributors to a software development version control system, wherein each software development score of the plurality of software development scores is based on a plurality of weighted metrics; receiving a first user input from a first user requesting access to the database; retrieving a user profile for the first user in response to receiving the first user input, wherein the user profile includes a first authorization for accessing software development scores of the contributors; determining a subset of the plurality of software development scores to which the first user has access, wherein determining the subset comprises: determining, using an authorization level indicated by the first authorization, one or more contributors whose software development scores the first user has access to; and filtering, in accordance with the authorization level and before presenting the plurality of software development scores to the first user, the plurality of software development scores such that the subset of the plurality of software development scores comprises only software development scores of the one or more contributors whose software development scores the first user has access to; and generating for display, in response to filtering the plurality of software development scores, a graphical representation of a qualitative comparison of the subset of the plurality of software development scores. (all of which have been addressed in claim 2) As per claim 13, refer to the rejection of claim 3. As per claim 14, refer to the rejection of claim 4. As per claim 15, refer to claim 5. As per claim 16, refer to claim 7. As per claim 17, refer to claim 8. As per claim 18, refer to claim 9. As per claim 19, refer to claim 10. 
Claim 5 is rejected under 35 U.S.C. § 103 as being unpatentable over Stevens, USPubN: 2020/0005219 (herein Stevens) in view of Grant et al., USPubN: 2021/0011712 (herein Grant), Gauger et al., USPubN: 2014/0279694 (herein Gauger), and Tibrewala et al., USPubN: 2021/0182767 (herein Tibrewala), further in view of Seto et al., USPubN: 2015/0066869 (herein Seto), Kaulgud et al., USPubN: 2011/0055799 (herein Kaulgud), and Richardson et al., USPubN: 2018/0253297 (herein Richardson), and further in view of Nair et al., USPN: 10,877,869 (herein Nair). As per claim 5, Stevens discloses method of claim 3, wherein the first plurality of quantitative software development metrics include metrics relating to (i) contributions committed to the software development version control system, unique additions to the software development version control system, deletions submitted to the software development version control system, unique contributions added to the software development version control system, distinct organization contributions to the software development version control system, contribution statistics; (ii) average weekly contributions to the software development version control system, distinct project contributions to the software development version control system; (iii) number of contribution modifications submitted to the software development version control system, number of contributions added to the software development version control system, number of contributions removed from the software development version control system, number of contributions renamed at the software development version control system, or number of positive reviews of contributions. As for (i), contributions to a version control system from contributors or developers of a development process or a SW project can be seen in Stevens from check-in activities and the scope of contributions by each developer in submitting his/her respective check-ins (para 0068; Fig. 
4) from a version control system standpoint (para 0064), where the amount of source code added or changed underlying a project discloses contributions by the project or by individuals for adding source code, changing code, or removing versions thereof (see new code added, deletion of code, modification of previously checked in code - para 0020), and where the score module may aggregate scores for users onto a team of users, or across the entire organization; hence contribution of a quality score or development score into a version control system (para 0064, 0066, 0070), in terms of code added, deletion of code, or modification of previously checked in code, by distinct contributions from individuals, from a group, or by aggregating across an organization, is recognized - the latter referred to herein as (*). As for (ii), contribution into a version control system (Fig. 4) using contributed works or check-ins recorded otherwise as qualitative metrics or a development score set for a given time interval, carried out on a daily or weekly basis, is shown in Stevens' presentation of a metrics summary based on recording of productivity by developers for a specific period (para 0032; time interval corresponding to time of day ... day of the week - para 0045; Fig. 8), where the system determines statistics related to code check-ins over time (para 0027); hence contribution statistics and average weekly contributions to the software development version control system (see para 0064, 0066, 0070) are recognized - referred to herein as (**). 
As for (iii), check-in contribution by a developer modifying a previously checked-in version entails renaming a version as a consequence of the change that fixes the previous version (para 0020); and the amount of source code added or changed underlying a project, from contributions by the overall group or by individuals for adding source code, changing code, or removing versions thereof (see new code added, deletion of code, modification of previously checked in code - para 0020), to be recorded in a version control system, entails that recording of the number of contributions added to, number of contributions removed from, and number of contributions renamed at the software development version control system (see above) is recognized - referred to herein as (***). Nair further discloses a version control system (col. 2, li. 54-65) to support display of reviews or comments via a lookup associated with a code review tool that fetches versions of source code from a version control system (Fig. 1; col. 3, li. 55-60); hence support of a version control system to present reviews on source code with code review support entails contribution of code reviews via a version control system having a lookup tool to post or display negative or positive reviews of code. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement recording of development scores or metrics associated with developers' source code check-ins by a version control system in Stevens so that the plurality of quantitative software development metrics for use in deriving an aggregate metric include metrics such as (1) contributions committed to the software development version control system, unique additions, deletions submitted, unique contributions added, distinct project contributions, or distinct organization contributions to the software development version control system - as set forth per (*) from above; 
(2) contributions such as average weekly contributions to the software development version control system, or relevant contribution statistics therein - as set forth per (**) from above; (3) number of contribution modifications submitted, number of contributions added to, number of contributions removed from, and number of contributions renamed at the software development version control system - as per (***) from above - or number of positive reviews of contributions, as in Nair; because metrics indicative of quality or performance state from various types of activities and operations by different types of contributors to the project or to the development of software would enable the software management system to compute an aggregate representation of individual scores as set forth per the rationale in claim 3, in the sense that visualization of the aggregate score or overall metric by the management tool would enable managers of the project, project analyzers, software product stakeholders, and project team or group leaders to consider the percentile of the sub-metric that constitutes a likely impact or causality weight from a given source of activity that most influences the overall score, whereby a management-type decision can be determined to address the impact of such contributing factors (or activities) on the overall quality of the SW development, in order for one or more factors underlying the quality-degrading causality to be mitigated. Claims 11 and 20 are rejected under 35 U.S.C. § 
103 as being unpatentable over Stevens, USPubN: 2020/0005219 (herein Stevens) in view of Grant et al., USPubN: 2021/0011712 (herein Grant), Gauger et al., USPubN: 2014/0279694 (herein Gauger), and Tibrewala et al., USPubN: 2021/0182767 (herein Tibrewala), further in view of Seto et al., USPubN: 2015/0066869 (herein Seto), Kaulgud et al., USPubN: 2011/0055799 (herein Kaulgud), and Richardson et al., USPubN: 2018/0253297 (herein Richardson), and further in view of Dong et al., CN 111161029 (translation), 05-15-2020, 11 pgs (herein Dong). As per claim 11, Stevens does not explicitly disclose method of claim 10, further comprising: extrapolating a future first software development score for a first contributor on a future project based on a first software development score and the updated first software development score; and generating for display a recommendation for staffing the future project based on the future first software development score. Use of a predictive management technique to support acceptance inspection as part of proactive planning of resources or staffing of personnel, via use of a current set of values as part of a self-propel (self-push) inspection, is shown in Dong's self-recommendation method, where a recommendation management module inspects qualitative information associated with users, merchants, or contributors to a commerce platform (see Abstract, pg. 1) to ensure the commodity value of a given user (pg. 2) to the benefit of the productivity and quality of the enterprise intended by the planning, the length of time used to extrapolate a value (see Dong, pg. 6) being likely to generate a larger extrapolated amount (for a self-propelled candidate) used as one candidate recommendation for prospective staffing -- where values associated with characteristics of a target user are subjected to an extrapolating process whereby the targeted user's self-push value can be established (pg. 
3) and marked as a candidate among other self-propelled and inspected users to be considered as part of the staff management endeavor (middle pg. 6), the self-propelled quantified data being calculated from an extrapolated value of the user, using a correction coefficient applied to all learned values on the user (pg. 6-8). Hence use of an extrapolating technique to obtain an extrapolated score from individual metrics contributive to a future endeavor or planning, such as by a staff management module, is recognized. Thus, as use of a drill-down technique in Stevens (para 0061) is purported to create a more accurate matching between a set of developers or a type thereof and a task demand, other factors, a time interval of development, or a productivity framework to consider, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the mapping of development metrics to a selection of developer types or activity types as in Stevens so that an extrapolation technique is employed - as in Dong - to proactively identify, from a set of known values associated with a definite set of resources at hand, recommended candidate SW development contributors, the recommendation visually presented for consideration by a staffing administrator; and so, per the result of the process of extrapolating a future software development score for a (first) contributor of a future project based on a first software development score and the updated first software development score, the recommended SW score and developer resources - as in Dong - are destined for staffing the future project based on the future first software development score; because scores, metrics, or qualification data representative of candidate resources or development personnel as intended in Stevens are either restricted to one type of applicability or unsuitable for more complex or different application scopes, which otherwise would cause restrictions to the 
capability of a project-pertinent management entity in the prospect of staffing and/or assigning personnel for intended tasks or for the scope of development activities, one such difficulty being incurred with a process of matching many functionalities purported by a project phase or a sizable organization of development tasks, and otherwise causing an impediment to proactive action-seeking by management in staffing a future project; and by using an extrapolating technique over a prerecorded set of metrics or existing qualification data on available resources or personnel as set forth above, an augmented and projected derivation of extended resources or qualification metrics can be obtained, thereby increasing the selection pool of candidate entities or resources based on which a recommendation can be posted on a visual interface for selective use by members of a staffing process, as shown in Dong's approach, which in turn can improve overall development efficiency in Stevens's approach per the effect of narrowing down to a proper set of development staff as one or more exact qualification matches designated to take on an intended phase of a project or to meet/respond to the demands of complex SW development tasks. As per claim 20, refer to the rationale of claim 11 from above. Response to Arguments Applicant's arguments filed 12/09/25 have been fully considered but they are not persuasive. Following are the Examiner's observations in regard thereto. 
(A) Applicants have submitted that the claims rejected under 35 USC § 101 are eligible under step 2A because the security profile enables the invention to protect the user against both viruses and obfuscated code, providing greater virus filtering and user customization, and that these benefits reflect an improvement in computer functionality in accordance with decisions by the courts, in that the arrangement of elements found in the Specification, as well as recited in the claims, such as filtering according to the authorization level of the user, reflects a specific technical improvement deemed eligible under the 35 USC § 101 rules (Applicants' Remarks, pg. 10-11). The subject matter as claimed does not include step actions to prevent security breaches caused by viruses or by obfuscation of SW, nor is it about the non-conventional technique of computer improvement exactly as alluded to by the Applicants citing the Specification. Instead, the claims recite steps of a) storing information on development metrics or quality, and receiving a request and credential authorization for such information; b) determining a subset of scores and one or more contributors whose scores such authorization has access to, and filtering out a subset of scores; and c) displaying a graphical representation thereof. The analysis for the eligibility matter is directed to the language of the claims, and has found that the steps of b) determining and filtering can be done by mental processes absent any details showing how these actions are implemented, whereas the steps of a) storing and receiving pertain to what is construed as pre-solution activities of low significance; and step c) of presenting visual information derived from step b) cannot elevate the mental activities of b) so that they amount to significantly more than an Abstract Idea type of Judicial Exception. 
Step 2B has considered the additional elements of the rejected claims and has found that these elements fail to integrate the Judicial Exception into a practical application, as none of these "additional elements" describes how the determining/filtering actions are implemented in the specific software or machine typical of, and/or required for, a practical application. Therefore, the Step 2B analysis is unable to find that the 35 USC § 101 deficiency found in Step 2A amounts to significantly more than an Abstract Idea; i.e., the Applicant's allegations regarding the disclosed technical improvement or security protection are deemed largely non-persuasive.

(B) Applicants have submitted that the Tibrewala reference, relied upon for teaching filtering and refining scores for the purpose of trimming down and optimizing a repository, cannot be the same as "filtering … scores of a plurality of contributors" such that "the subset of … scores comprises only software development scores of the one or more contributors … the first user has access to" (Applicant's Remarks, p. 12). The Office Action has been adjusted to address the additional changes to the claim language regarding filtering in accordance with the authorization level of the first user; therefore, any patentability merits raised with respect to the newly amended claim language are deemed largely moot. In all, the claims as submitted stand rejected as currently set forth in this Office Action.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tuan A Vu, whose telephone number is (571) 272-3735. The examiner can normally be reached 8 AM-4:30 PM, Mon-Fri. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chat Do, can be reached at (571) 272-3721.
The fax phone number for the organization where this application or proceeding is assigned is (571) 273-3735 (for non-official correspondence; please consult the Examiner before using), or 571-273-8300 (for official correspondence), or redirect to customer service at 571-272-3609. Any inquiry of a general nature or relating to the status of this application should be directed to the TC 2100 Group receptionist at 571-272-2100. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/Tuan A Vu/
Primary Examiner, Art Unit 2193
January 18, 2026

Prosecution Timeline

Jun 01, 2023
Application Filed
Jan 17, 2025
Non-Final Rejection — §101, §103
Apr 21, 2025
Response Filed
May 20, 2025
Non-Final Rejection — §101, §103
Aug 25, 2025
Response Filed
Aug 26, 2025
Examiner Interview Summary
Aug 26, 2025
Applicant Interview (Telephonic)
Sep 10, 2025
Final Rejection — §101, §103
Dec 03, 2025
Interview Requested
Dec 09, 2025
Request for Continued Examination
Dec 18, 2025
Applicant Interview (Telephonic)
Dec 18, 2025
Examiner Interview Summary
Dec 20, 2025
Response after Non-Final Action
Jan 18, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596557
SYSTEM AND METHOD FOR GENERATING RECOMMENDATIONS FOR DATA TAGS
2y 5m to grant · Granted Apr 07, 2026
Patent 12591718
Application Development Platform, Micro-program Generation Method, and Device and Storage Medium
2y 5m to grant · Granted Mar 31, 2026
Patent 12585573
ASSEMBLING LOW-CODE APPLICATIONS WITH OBSERVABILITY POLICY INJECTIONS
2y 5m to grant · Granted Mar 24, 2026
Patent 12582796
METHODS, DEVICES, AND SYSTEMS FOR IMPROVED OXYGENATION PATIENT MONITORING, MIXING, AND DELIVERY
2y 5m to grant · Granted Mar 24, 2026
Patent 12541384
COMPONENT TESTING FRAMEWORK
2y 5m to grant · Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

4-5
Expected OA Rounds
73%
Grant Probability
95%
With Interview (+21.4%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 980 resolved cases by this examiner. Grant probability derived from career allow rate.
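The headline figures above follow from the raw counts reported elsewhere on this page: 718 grants out of 980 resolved cases, and a +21.4 point interview lift. The sketch below is a hypothetical reconstruction of that arithmetic, assuming the lift is additive in percentage points on top of the career allow rate; the variable names are illustrative, not part of any actual tool or API.

```python
# Hypothetical reconstruction of the dashboard's headline numbers.
# Assumption: "+21.4%" interview lift is additive in percentage points.

GRANTED = 718          # examiner's career grants (from this page)
RESOLVED = 980         # examiner's resolved cases (from this page)
INTERVIEW_LIFT = 21.4  # percentage-point lift from resolved cases with interview

base_rate = 100 * GRANTED / RESOLVED                  # career allow rate, in %
with_interview = min(base_rate + INTERVIEW_LIFT, 100.0)  # capped at 100%

print(round(base_rate))       # 73
print(round(with_interview))  # 95
```

Under this additive assumption, 718/980 ≈ 73.3% rounds to the displayed 73%, and 73.3 + 21.4 ≈ 94.7 rounds to the displayed 95% with-interview figure.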
