Prosecution Insights
Last updated: April 18, 2026
Application No. 17/873,957

INTELLIGENT KNOWLEDGE PLATFORM

Non-Final OA (§102, §103)

Filed: Jul 26, 2022
Examiner: HWANG, MEGAN ELIZABETH
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Linkedfield Inc.
OA Round: 3 (Non-Final)

Grant Probability: 47% (Moderate)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 47% of resolved cases (9 granted / 19 resolved; -7.6% vs TC avg)
Interview Lift: +60.2% (strong), based on resolved cases with interview
Typical Timeline: 3y 0m avg prosecution; 25 currently pending
Career History: 44 total applications across all art units
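The allow-rate and interview-lift figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how such metrics can be computed (field names and sample records are hypothetical, not the platform's actual schema):

```python
# Illustrative sketch (not the platform's actual code) of how metrics like
# "career allow rate" and "interview lift" can be derived from resolved cases.
# Field names and sample records are hypothetical.
cases = [
    {"granted": True,  "interview": True},
    {"granted": True,  "interview": True},
    {"granted": False, "interview": False},
    {"granted": True,  "interview": False},
]

def allow_rate(subset):
    """Fraction of resolved cases in the subset that were granted."""
    return sum(c["granted"] for c in subset) / len(subset)

def interview_lift(all_cases):
    """Percentage-point gap in allow rate: with-interview minus without."""
    with_iv = [c for c in all_cases if c["interview"]]
    without_iv = [c for c in all_cases if not c["interview"]]
    return allow_rate(with_iv) - allow_rate(without_iv)
```

On this toy data the lift is the gap between the with-interview and without-interview allow rates; the report's +60.2% figure is presumably the same comparison over the examiner's 19 resolved cases.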

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 19 resolved cases.
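The per-statute deltas above are each consistent with a Tech Center average of roughly 40% per statute (e.g., 34.9% - 40.0% = -5.1%). A small sketch of the comparison; the flat 40% baseline is inferred from the displayed deltas, not a figure stated in the report:

```python
# Sketch of the per-statute comparison. Examiner rates come from the report;
# the flat 40% Tech Center baseline is an assumption inferred from the
# displayed deltas (e.g. 34.9% - 40.0% = -5.1%), not a reported figure.
examiner = {"101": 0.349, "103": 0.410, "102": 0.074, "112": 0.153}
tc_avg = {"101": 0.400, "103": 0.400, "102": 0.400, "112": 0.400}

def deltas(rates, baseline):
    """Per-statute difference: examiner rate minus Tech Center average."""
    return {s: round(rates[s] - baseline[s], 3) for s in rates}
```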

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-2 and 5-22 are pending. This Office Action is responsive to the RCE filed on 03/03/2026, which has been entered in the above identified application.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5-14, 16, and 18-22 are rejected under 35 U.S.C. 103 as being unpatentable over Foroughi et al. (US 20220130272 A1, provisional application filed 10/27/2020), hereinafter Foroughi; in view of Cantor et al. (US 20140222497 A1, filed 02/12/2014), hereinafter Cantor. Foroughi was cited in a previous Office Action.
Regarding Claim 1, Foroughi teaches an apparatus, comprising: a processor; a memory that stores code executable by the processor (Foroughi: “The system may also include one or more processors coupled with the memory to receive the organization framework and further configured to generate system processing commands.” [0095]) to: receive data associated with a project, the data describing one or more characteristics of the project and including task-state information for one or more tasks of the project, the task-state information comprising one or more indicators of task progress (Foroughi: “the method further comprises creating a new work task.” [0018]; “each work task comprising a technical description of the work task and a set of requirements for completing the work task” [0014]; “When the user completes a training module, the user profile records the completion data optionally with accompanying metadata, to track how the user is progressing with training and learning.” [0090]; “Once a user has completed each training module the module is marked as such, along with any metadata associated with the training module and/or task completion, and the training course can indicate its degree of completion, as in this case with a percentage completion. Training module completion metadata can include, for example, time completed, date completed, amount of time taken to complete, and the associated task that the training module was completed alongside.” [0107]); determine one or more metadata tags for classifying the data (Foroughi: “In a newly created task or for a task that is not already matched to at least one training module, a context analysis engine 118 can analyse the contents of the task to identify the task context, and use the identified task context to match relevant training modules. 
When the system scans a task, such as a work instruction or job description, it searches through the task context and identifies and combines the identified words and/or strings of characters into one or more task descriptions, context identifiers, or keywords... From the task description can be extracted one or more contextual identifiers, also referred to as keywords or metadata tags, which are based on the task description and/or other contextual data such as code fragments, associated with the task.” [0071]; “selection of training modules can be identified using search, tags, artificial intelligence based on previous identification of relevant tasks, or a combination thereof. Further, text and audio in the training modules can be used as metadata tags to bring forward training modules relevant to a particular task or ticket. Over time human selection and searching results, as well as relevance assessments provided by developers, can assist with machine learning of the training system to bring forward the most relevant training modules for a particular task.” [0123]), match the classified data to one or more predetermined knowledge insights for the project using the metadata tags as lookup keys, the one or more predetermined knowledge insights stored in a knowledge database and comprising predefined informational items associated with task execution or project scheduling (Foroughi: “matching the identified work tasks to one or more training module in a training module database by comparing contextual identifiers associated with the training module comprising one or more keyword, code fragment, or metadata tag, to the technical description or the set of requirements for the work task” [0014]; “Contextual identifiers in the training modules and tasks can be matched by tracking and comparing them in a keyword database. 
In one example, the system tracks contextual identifiers or keywords in a keyword database (KWd) having N keywords or strings of characters: {k1, k2, . . . , kN; N>0}” [0077]; “Training modules can be further tagged so that they are positively associated with relevant modules. Additionally or alternatively, selection of training modules can be identified using search, tags, artificial intelligence based on previous identification of relevant tasks, or a combination thereof. Further, text and audio in the training modules can be used as metadata tags to bring forward training modules relevant to a particular task or ticket.” [0123]; “a training database comprising a plurality of training modules, each of the training modules comprising contextual identifiers associated with the training module comprising one or more keyword, code fragment, or metadata tag, to the technical description or the set of requirements for the work task to match the training module to a work task based on the task context” [0035]; “Each task that a user is working on can be matched with one or more relevant training modules in a few different ways. In one way, one or more of the work tasks on the work task list can be selected from a list of tasks in a task database 114, where the tasks in the task database are already mapped to particular training modules in a training database 106. 
In organizations where the same or similar tasks are required for a plurality of projects, such as in a software development environment, a task database 114 can comprise pre-assembled tasks with pre-mapped relevant training modules.” [0071]); predict one or more future conditions of the project based on a state of one or more tasks of the project (Foroughi: “Since the system keeps track of the training progression of the user as well as the tasks that the user has in queue and the team the user is on, the system can offer the next level of training to the user based on the user's experience, interest, and job-applicability. In particular, the system can anticipate and recommend training using additional inputs such as but not limited to the user's role in the company, the user's work experience, the user's previous project or next project, the project context, required deliverables, completion of other training, social connections, group membership, geography, and real world objects (e.g. 
augmented reality input).” [0113]; “A user training program can further be configured or be scaled contextually using artificial intelligence (AI) or other processes to predict associations between training modules and work contexts that haven't been explicitly defined but may be of interest.” [0120]; In light of Paragraph [0093] of the specification, which states “The future conditions may include the next steps, other tasks that may need to be completed before moving on to the next steps, potential problems to foresee/expect, materials or personnel needed to complete the next steps, or the like”, BRI would support that “predicting future conditions based on a state of a task” encompasses observing the present task progression to determine future steps or tasks); identify, in response to detection of the system state and based on a predetermined association defining a mapping between detected system states of the project and corresponding knowledge insights, knowledge insights associated with a detected system state (Foroughi: “In one way, one or more of the work tasks on the work task list can be selected from a list of tasks in a task database 114, where the tasks in the task database are already mapped to particular training modules in a training database 106. In organizations where the same or similar tasks are required for a plurality of projects, such as in a software development environment, a task database 114 can comprise pre-assembled tasks with pre-mapped relevant training modules. 
Alternatively, a new work task that is not already in the task database 114 can be can be created, such as in a task creation module 116, and relevant training modules can be recommended based on the task context of the task.” [0071]); and present, on a digital display device, the identified knowledge insights that are associated with the detected system state (Foroughi: “Specific training modules which have been matched to each task are then recommended for each particular user based further on the user profile such that the most relevant training tasks for any given task are presented to the user.” [0073]; “A user work interface 112 can provide tasks to the user from the work task list and training module list 104 and can be displayed on a display such as a graphical user interface on a screen or electronic device.” [0086]; “Each training module incorporates at least the minimal amount of instruction or learning needed to complete an upcoming task or piece of an upcoming task, or next step of instruction or training along a learning path.” [0064]; “In another embodiment, each training module comprises one or more of text, slideshow, video, audio, games, puzzle, virtual reality simulation, augmented reality, mini-task, quiz, external link, and interactive media.” [0020]). 
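As an editorial aside on the mapped limitation: the claimed use of metadata tags as lookup keys into a knowledge database, like Foroughi's keyword-database matching, amounts to a straightforward index lookup. A hypothetical sketch (names and data invented for illustration; this is not code from the application or either reference):

```python
# Hypothetical sketch of the mapped limitation: metadata tags serve as lookup
# keys into a knowledge database of predefined insights, in the spirit of
# Foroughi's keyword-database matching. Names and data are invented for
# illustration and appear in neither the application nor the references.
knowledge_db = {
    "unit-testing": ["Insight: common unit-test mistakes",
                     "Manual: running the test suite"],
    "scheduling": ["Lesson learned: buffer critical-path tasks"],
}

def match_insights(metadata_tags, db):
    """Return every stored insight keyed by one of the given tags."""
    matched = []
    for tag in metadata_tags:
        matched.extend(db.get(tag, []))  # unknown tags simply match nothing
    return matched
```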
However, Foroughi fails to expressly disclose steps to determine, by executing a machine learning model on the data, metadata tags for classifying the data, wherein the machine learning model is trained using historical project data and dynamically applied to current task-state inputs; predict, by executing the machine model on the task-state information and previously stored task-state information associated with the project, one or more future conditions of the project comprising one or more predicted task execution states or schedule-related conditions, wherein the predicted task execution states or schedule-related conditions define detected system states of the project; and retrain the machine learning model based on newly received task-state information associated with the project. In the same field of endeavor, Cantor teaches steps to determine, by executing a machine learning model on the data, metadata tags for classifying the data, wherein the machine learning model is trained using historical project data and dynamically applied to current task-state inputs (Cantor: “Once all tasks have associated effort values--either manually provided or automatically estimated--the learning algorithm builds an estimation model using the tasks. Specifically, for example, it uses a subset of the attributes of those tasks to partition the tasks into disjoint groups, each of which disjoint groups contains tasks that have similar effort values. 
The partitioning is performed by repeatedly choosing attributes and attribute values for those attributes that divide the tasks.” [0048]; See [0049]-[0074], in which examples of input task attributes are provided); predict, by executing the machine model on the task-state information and previously stored task-state information associated with the project, one or more future conditions of the project comprising one or more predicted task execution states or schedule-related conditions, wherein the predicted task execution states or schedule-related conditions define detected system states of the project (Cantor: “The machine learner uses a training set of examples of completed tasks with their attributes including their actual completion times to build a prediction model. The prediction model discriminates the completed training tasks using a variety of task attributes (such as owner, type, or priority). Once the model is available, the machine learner can apply it to a new task to obtain a task effort prediction by matching the new task to the most similar training tasks.” [0089]; “A project completion predictor 110 may take as input, information on resource and scheduling constraints applicable to the project to be estimated. Based on the estimated probability distribution times for each of the as-yet incomplete tasks, e.g., determined by the task estimator 108, and also based on the information on resource and scheduling constraints applicable for the project to be estimated, the project completion predictor 110 determines an estimated probability distribution of the completion time of the collection of as-yet incomplete tasks as a whole.” [0037]; “In addition to the date and time of worked performed, the effort estimator 108 may also take into consideration the “state” of the task at different points in time. Tasks may have a special “state” attribute which indicates the status, condition, progress, or disposition of the task. 
For example, a newly created task might have a state of “New”. After a task has been assigned to a developer to be worked on it may be (manually) moved into a state of “Triaged”. When that developer begins work he could indicate this by moving the task into a state called “In Progress”. Finally, when the task is completed he could move it to a “Closed” state to indicate that is complete. The dates at which these state transitions occur are another clue as to when the task was worked on and therefore how much effort was taken to complete it. The task estimator 108 may, on the basis of those considerations, determine an estimate of the effort spent on any of the completed tasks.” [0041]); and retrain the machine learning model based on newly received task-state information associated with the project (Cantor: “Both team velocity (the amount of work a team completes in a given period of time) and the nature of tasks may change over time on a given project. Thus, the machine learning is an ongoing process. Newly completed tasks increase the size of the training sets, and the machine learner continuously builds new models out of the new training sets. As a result, task effort prediction is adaptive and reflects changes and trends that may occur during a project's evolution.” [0094]). 
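The Cantor flow described above (build an effort model from completed tasks' attributes, predict for a new task by matching it to the most similar completed tasks, and keep retraining as tasks complete) can be sketched as follows. This is an illustrative nearest-neighbor stand-in with invented attributes, not the reference's partitioning implementation:

```python
# Illustrative stand-in for the Cantor flow: predict a new task's effort from
# the most similar completed tasks, then "retrain" by growing the training
# set as tasks complete. Cantor describes a partitioning learner; this
# nearest-neighbor sketch with invented attributes is only an analogy.
completed = [
    {"type": "bug", "priority": "high", "effort_hours": 3.0},
    {"type": "bug", "priority": "low", "effort_hours": 1.0},
    {"type": "feature", "priority": "high", "effort_hours": 8.0},
]

def similarity(task, example):
    """Count attribute values shared with a completed task (label excluded)."""
    return sum(task.get(k) == example[k] for k in example if k != "effort_hours")

def predict_effort(task, training_set):
    """Average effort of the most similar completed tasks."""
    best = max(similarity(task, ex) for ex in training_set)
    nearest = [ex for ex in training_set if similarity(task, ex) == best]
    return sum(ex["effort_hours"] for ex in nearest) / len(nearest)

def retrain(training_set, newly_completed):
    """Ongoing learning: newly completed tasks simply grow the training set."""
    return training_set + newly_completed
```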
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated steps to determine, by executing a machine learning model on the data, metadata tags for classifying the data, wherein the machine learning model is trained using historical project data and dynamically applied to current task-state inputs; predict, by executing the machine model on the task-state information and previously stored task-state information associated with the project, one or more future conditions of the project comprising one or more predicted task execution states or schedule-related conditions, wherein the predicted task execution states or schedule-related conditions define detected system states of the project; and retrain the machine learning model based on newly received task-state information associated with the project, as taught by Cantor to the apparatus of Foroughi because both of these apparatuses are directed towards automated project management and task recommendation systems in which current task information is captured to guide future steps in a project. In making this combination and actively training, applying, and retraining a machine learning model to make predictions of future states from which to derive suggestions for user guidance, as taught by Cantor, it would allow the system of Foroughi to “adapt dynamically to changes in status and operating conditions, thereby allowing outdated plans and expectations to be replaced by new ones that are more attuned to changed circumstances” (Cantor: [0026]). 
Regarding Claim 2, Foroughi and Cantor teach the apparatus of Claim 1, wherein the project comprises a construction project composed of a plurality of tasks, the data describing each task of the plurality of tasks such that the tasks are classified according to the metadata and associated with the one or more knowledge insights (Foroughi: “Although software development is used in the present description as a model of where the presently described just-in-time training system and method can be used, it is understood that the same or similar system can be used in other industries, including but not limited to information technology, pharmaceutical, legal, medical, construction, marketing, trades, computer aided design (CAD), accounting, system operation, as well as online education systems.” [0070]; “the method further comprises generating an electronic project in the work environment comprising a plurality of work tasks required for completing the project” [0016]; “each of the training modules comprising contextual identifiers associated with the training module comprising one or more keyword, code fragment, or metadata tag, to the technical description or the set of requirements for the work task to match the training module to a work task based on the task context” [0035]).

Regarding Claims 19, 21, 20 and 22, they are method and apparatus claims respectively that correspond with the apparatus of Claims 1 and 2. Therefore, they are rejected for the same reasons as Claims 1 and 2 above.
Regarding Claim 5, Foroughi and Cantor teach the apparatus of Claim 1, wherein the one or more knowledge insights comprise information related to one or more of common mistakes, training materials, lessons learned, definitions, procedures, manuals, or a combination thereof (Foroughi: “User tasks are matched to training modules in a training database to assist with completion of a task while a user profile tracks user training to deliver the most appropriate training modules.” [Abstract]).

Regarding Claim 6, Foroughi and Cantor teach the apparatus of Claim 5, wherein the one or more knowledge insights comprise explanations, schematics, diagrams, blueprints, instructional multimedia, associated codes and laws, or a combination thereof (Foroughi: “each training module comprises one or more of text, slideshow, video, audio, games, puzzle, virtual reality simulation, augmented reality, mini-task, quiz, external link, and interactive media.” [0020]).

Regarding Claim 7, Foroughi and Cantor teach the apparatus of Claim 1, wherein the one or more tasks of the project are associated with one or more users that are engaged to complete the one or more tasks (Foroughi: “The term “task” refers to any issue, job, work assignment, or requirement to be completed. The term “work order” is the assignment of a task to a user, person, worker, or employee.” [0059]; “The present system and method provides for in-context and on-time delivery of relevant microlearning and training material that users need to know to best perform their job or tasks at work by matching training to one or more work orders or tasks that a user is required to do as part of a work assignment” [0064]).
Regarding Claim 8, Foroughi and Cantor teach the apparatus of Claim 7, wherein the code is executable by the processor to predict the one or more future conditions of the project and present the corresponding knowledge insights for the one or more future conditions in response to the one or more users signing in to work on the project (Foroughi: “The system then preferably recommends additional training modules 210 to the user. The recommendation of additional training modules can be based on, for example: past training; past performance; tasks upcoming in the user task list; tasks the user has shown poor performance at in past; knowledge they have not learned yet that is related to other material they know; courses that they have expressed an interest in taking; future work goals; and new or emerging knowledge relevant to their job.” [0119]; “the method further comprises generating an electronic project in the work environment comprising a plurality of work tasks required for completing the project; and automatically allocating, in an application lifecycle management (ALM) tool, one or more user work tasks to the user from the plurality of work tasks in the project” [0016]; “a web application that runs the system can be provided to enable user access to the task list associated with a particular project. The system authenticates the user to make sure they have access to the relevant project space and the user is authorized to view a selected or complete task list as well as their training list.” [0094]).

Regarding Claim 9, Foroughi and Cantor teach the apparatus of Claim 8, wherein the code is executable by the processor to present an interface for receiving user log in information from the one or more users and create a contact list for the project for tracking the one or more users that are working on the project (Foroughi: “a web application that runs the system can be provided to enable user access to the task list associated with a particular project. The system authenticates the user to make sure they have access to the relevant project space and the user is authorized to view a selected or complete task list as well as their training list.” [0094]; [See Figure 14], which shows a user login interface; “management teams can also be provided with real-time visibility of a project's verification status during production in addition to the training completion record of a particular user or team” [0113]).

Regarding Claim 10, Foroughi and Cantor teach the apparatus of Claim 7, wherein the code is executable by the processor to periodically push knowledge insight information to the one or more users based on a status of the one or more tasks that the one or more users are in the process of completing (Foroughi: “The just-in-time training system 100 shown identifies work being done by a user, and in combination with the user profile 110 keeps track of training completion and recommends training to the user based on their user profile 110 as well as the task they are currently working on.” [0071]; “Accessing the training database preferably occurs throughout the workday so that relevant guidance and assistance can be provided to users as they progress with their work and through the tasks on their work task list 102. Both the task database 114 and the training database 106 are preferably updated regularly such that updated requirements can be quickly added to the prioritized task list of a project that requires immediate action and so that up-to-date training modules are relevant and available.” [0088]).
Regarding Claim 11, Foroughi and Cantor teach the apparatus of Claim 7, wherein the code is executable by the processor to assign knowledge insight information to one or more users in response to input from a manager and push the knowledge insight information to the one or more assigned users (Foroughi: “With a training completion navigation page users can keep track of the courses and modules that they have completed and view which additional modules and/or courses are on their learning path as selected by themselves, their manager, the company priorities, or based on current or future tasks or projects assignments.” [0107]; “Individuals can be on a learning path with a set of courses, or alternatively individuals may be assigned particular courses by a manager to improve their skills in a particular skill set.” [0116]).

Regarding Claim 12, Foroughi and Cantor teach the apparatus of Claim 1, wherein the code is executable by the processor to receive external information for the knowledge database, the external information comprising experiential survey data received from one or more project managers, information scraped from one or more online resources, and information derived from one or more documents (Foroughi: “Some formats that training modules can take include but are not limited to one or more of text, slideshow, video, audio, photographs, virtual and augmented reality.” [0087]; “The training modules can also provide one or more links to external resources, recorded or archived material, live coaching, or demonstrations.” [0087]; “Other data which can also assist in the recommendation of additional training can include manager reviews of user work, time to complete a task, and training behaviour such as number of times and how a microlearning module was accessed.” [0090]).
Regarding Claim 13, Foroughi and Cantor teach the apparatus of Claim 1, wherein the code is executable by the processor to generate a plan and schedule for the project based on the details of the project and the knowledge insights in the knowledge database (Foroughi: “The work task list 102 comprises at least one task, but can also comprise many tasks (T.sub.1, T.sub.2, T.sub.3 . . . T.sub.n), which are preferably listed in the user work interface in order of importance or priority. One or more larger tasks can also be broken up into smaller tasks to help the user manage and plan their workload, as well as to match smaller tasks to appropriate training modules. The training module list 104 for the user has a list of training modules (M.sub.1, M.sub.2, M.sub.3 . . . M.sub.n) that are associated with the tasks on the work task list 102. The training modules on the training module list 104 are taken from a training database 106 and selected based on their relevance to tasks on the work task list 102.” [0096]; “The training modules offered can be automatically anticipated by the system, or selected by the user based on their learning goals or desires, or a combination thereof. In this training schedule, training modules offered may not necessarily be directly related to particular tasks in the user work task list, but may be offered independently and in alignment with user or organization goals.” [0119]).

Regarding Claim 14, Foroughi and Cantor teach the apparatus of Claim 1, wherein the code is executable by the processor to receive information for a change order for the project, the information describing one or more characteristics of the change order (Foroughi: “Management can also globally add to the task database and/or training database as issues arise and push items to users' task list or training module list to provide timely training or information. Certain company-wide information can be shared, for example sensitivity training, updated safety training, evacuation training, announcements, or updated branding and procedural guidelines. Training tasks can also be time-bound, such as required to be completed by a certain date, required for refreshing every certain period of time. Training tasks and training modules can be updated as needed, for example to change to the prioritized security requirements task list to change focus and mitigate risk according to best practices.” [0104]; BRI in light of the specification, given Paragraph [0106] which states “a change order may refer to an amendment to a construction contract (or other similar type of contract for a project) that changes the scope of work”, would support that a “change order” can encompass updating, adding or otherwise changing the tasks in a project).

Regarding Claim 16, Foroughi and Cantor teach the apparatus of Claim 14, wherein the code is executable by the processor to generate one or more recommendations for completing the change order, the one or more recommendations based on knowledge insights generated for the change order (Foroughi: “A training module can also be recommended to a user based on a task update” [0099]).

Regarding Claim 18, Foroughi and Cantor teach the apparatus of Claim 13, wherein the code is executable by the processor to determine metadata tags for the change order information and add the change order information to the knowledge database for generating one or more knowledge insights in response to the knowledge database not comprising the change order information (Foroughi: “a new work task that is not already in the task database 114 can be can be created, such as in a task creation module 116, and relevant training modules can be recommended based on the task context of the task.” [0071]).

Claims 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Foroughi in view of Cantor, as applied to Claim 14 above, in further view of Galaviz (US 20110054968 A1, filed 06/04/2010). Galaviz was cited in a previous Office Action.

Regarding Claim 15, Foroughi and Cantor teach the apparatus of Claim 14, wherein the code is executable by the processor to determine a phase of the change order and skills required to complete the change order (Foroughi: “software engineers can be provided with updated knowledge from the training database 106 throughout the SDLC, including during creation of the project, as well as during the requirements phase, design phase, development phase, test phase, deployment phase, maintenance and update phase, replacement phase, and deprecation phase. Developers can also be pushed relevant training modules in their work tasks or training module list 104 to provide immediate knowledge on emerging security threats and how to handle them.” [0089]; “Assignment of training courses to individual workers can also assist managers with filling in skill sets in team and for ensuring that workers are up to speed on skills and knowledge required for current and upcoming projects” [0116]). However, Foroughi and Cantor fail to expressly disclose wherein the code is executable by the processor to determine whether the change order is outsourceable.

In the same field of endeavor, Galaviz teaches wherein the code is executable by the processor to determine whether the change order is outsourceable (Galaviz: “Elements of sound resource allocation include requiring all projects to use corporate experience based cost models for resource estimating; requiring skill requirements, skill levels, and quantity estimates to follow experience based business rules; requiring make/buy decisions be based on company skills inventory maintained in database, and subcontractor selection, where needed, sourced to pre-qualified vendors; requiring independent quality control verifying resource planning is consistent with our cost models and business rules.” [0073]). It would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated wherein the code is executable by the processor to determine whether the change order is outsourceable, as taught by Galaviz to the apparatus of Foroughi and Cantor because both of these systems are directed towards project management automation. In making this combination and determining if tasks in a project can be outsourced, it would allow the apparatus of Foroughi to allocate resources in a way that is most efficient in facilitating success (Foroughi: [0071]-[0073]).

Regarding Claim 17, Foroughi, Cantor, and Galaviz teach the apparatus of Claim 15, wherein, in response to the change order being outsourceable, the one or more recommendations comprise a recommendation for one or more professionals that have skills matching the skills required to complete the change order (Foroughi: “managers can use the user profile and training record for automatically allocating a worker to a particular task based on their training and skill proficiency profile” [0117]; Galaviz: “subcontractor selection, where needed, sourced to pre-qualified vendors” [0073]).

Response to Arguments

The Examiner acknowledges the Applicant’s amendments to Claims 1, 7, 19, and 20.
Applicant’s arguments, filed 03/03/2026, regarding the rejection of Claims 7-11 under 35 U.S.C. § 112(d) have been fully considered and are persuasive. The rejection has been withdrawn.

Applicant’s arguments, filed 03/03/2026, regarding the rejection of Claims 1-2 and 5-22 under 35 U.S.C. § 101 have been fully considered and are persuasive. The rejection has been withdrawn.

Applicant’s arguments, filed 03/03/2026, regarding the rejection of Claims 1-2 and 5-22 under 35 U.S.C. § 102/103 have been fully considered but are found moot in light of the new grounds of rejection (see rejection above).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Mahdi et al. (“Software Project Management using Machine Learning Technique—A Review”) discusses utilizing knowledge from historical project data sets for the development of predictive machine learning models for project risk management.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEGAN E HWANG whose telephone number is (703)756-1377. The examiner can normally be reached Monday-Thursday 10:00-7:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.E.H./
Examiner, Art Unit 2143

/JENNIFER N WELCH/
Supervisory Patent Examiner, Art Unit 2143

Prosecution Timeline

Jul 26, 2022: Application Filed
Apr 16, 2025: Non-Final Rejection — §102, §103
Jul 01, 2025: Applicant Interview (Telephonic)
Jul 01, 2025: Examiner Interview Summary
Jul 24, 2025: Response Filed
Oct 28, 2025: Final Rejection — §102, §103
Jan 27, 2026: Examiner Interview Summary
Jan 27, 2026: Applicant Interview (Telephonic)
Mar 03, 2026: Request for Continued Examination
Mar 12, 2026: Response after Non-Final Action
Apr 03, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12456093: Corporate Hierarchy Tagging (granted Oct 28, 2025; 2y 5m to grant)
Patent 12437514: VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING (granted Oct 07, 2025; 2y 5m to grant)
Patent 12437517: VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING (granted Oct 07, 2025; 2y 5m to grant)
Patent 12437518: VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING (granted Oct 07, 2025; 2y 5m to grant)
Patent 12437519: VIDEO DOMAIN ADAPTATION VIA CONTRASTIVE LEARNING FOR DECISION MAKING (granted Oct 07, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 47%
With Interview (+60.2%): 99%
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
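The headline figures trace back to simple career statistics: 9 grants out of 19 resolved cases is roughly a 47% allow rate. A minimal sketch of that arithmetic, with hypothetical function names; the additive, capped interview adjustment is an assumption (the tool's actual method for the 99% "with interview" figure is not disclosed):

```python
def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage, e.g. 9 of 19 resolved -> ~47.4%."""
    if resolved == 0:
        raise ValueError("no resolved cases to base a rate on")
    return 100.0 * granted / resolved


def with_interview(base_pct: float, lift_pct_points: float) -> float:
    """Apply an interview lift as additive percentage points, capped at 100.

    This is an illustrative assumption; the tool may instead condition on
    interviewed-vs-non-interviewed outcomes rather than adding a flat lift.
    """
    return min(100.0, base_pct + lift_pct_points)


base = grant_probability(9, 19)        # ~47.4, displayed as 47%
adjusted = with_interview(base, 60.2)  # capped at 100.0
```

Under this sketch the capped value lands at 100%, slightly above the displayed 99%, which suggests the tool applies its own ceiling or a conditional estimate rather than a flat additive lift.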
