Prosecution Insights
Last updated: April 19, 2026
Application No. 17/483,941

AI AUTO-SCHEDULER

Status: Non-Final OA (§101, §103)
Filed: Sep 24, 2021
Examiner: PUJOLS-CRUZ, MARJORIE
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Hexagon Technology Center GmbH
OA Round: 9 (Non-Final)
Grant Probability: 18% (At Risk)
OA Rounds: 9-10
Time to Grant: 3y 2m
Grant Probability with Interview: 46%

Examiner Intelligence

Grants only 18% of cases.
Career Allow Rate: 18% (25 granted / 136 resolved; -33.6% vs TC avg)
Interview Lift: +27.9% (strong, roughly +28%; based on resolved cases with interview)
Typical Timeline: 3y 2m avg prosecution; 50 currently pending
Career History: 186 total applications across all art units
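The headline percentages above are simple ratios over the examiner's career counts. A minimal sketch (variable names are ours, not the analytics tool's) of how the displayed figures relate:

```python
# Recompute the dashboard's headline ratios from the raw counts shown above.
# Illustrative only; the tool's exact inputs and rounding are not documented here.
granted = 25      # career grants
resolved = 136    # career resolved cases
allow_rate = granted / resolved        # ~0.184, displayed as 18%
with_interview = 0.46                  # displayed "With Interview" allow rate
interview_lift = with_interview - allow_rate

print(f"career allow rate: {allow_rate:.1%}")       # 18.4%
print(f"interview lift:    {interview_lift:+.1%}")  # +27.6%
```

The dashboard shows +27.9%, so its lift is presumably computed from unrounded per-case data rather than the displayed 18%/46% figures.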

Statute-Specific Performance

§101: 38.7% (-1.3% vs TC avg)
§103: 43.3% (+3.3% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 136 resolved cases

Office Action

Rejections: §101, §103
DETAILED ACTION

This communication is a Non-Final Office Action rejection on the merits. Claims 1-3, 5-9, 12, 14-20, and 22-23 are currently pending and have been addressed below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed on 12/24/2025 (related to the 103 Rejection) have been fully considered but they are not persuasive. Applicant states, on pages 10-11, that the claimed invention is coordinating resources across multiple third level work packages in a way that maximizes the prime objective to be achieved within the hierarchical work breakdown structure while also coordinating availability of resources between a plurality of third level work packages requiring resources of the given resource category. This is different than merely having a work package that requires a particular resource for a particular amount of time and assigning an available resource to the work package, as it can involve such things as ensuring that a particular resource is available when needed for a particular work package or adjusting the work package so that the resource will be available when needed. Applicant respectfully submits that the claimed invention is not disclosed or suggested by the cited references alone or in combination. Examiner respectfully disagrees with Applicant.
Cami discloses coordinating resources across multiple third level work packages in a way that maximizes the prime objective to be achieved within the hierarchical work breakdown structure (see Figure 2E and related text in Paragraph 0044, In yet other examples, the algorithm can optimize for one aspect of a construction project, such as speed of the construction, total cost of the construction, or the use of particular types of materials within the project, such as, for example, eco-friendly materials, fire proof materials, or thermally resistive materials; Paragraph 0136, Currently, scheduling for a construction project is done manually. A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. Once the algorithm is sufficiently trained, it can be used on all projects. An example state in the algorithm can be that a framing task needs to be put on hold for inspection. The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed. The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself. 
The object of the algorithm is to reduce down time on the project; Examiner notes that Cami discloses “a plurality of different scheduling objectives” because the reinforcement learning can be trained to: reduce downtime of the project; reduce delays of the project; and/or optimize speed of the construction) while also coordinating availability of resources between a plurality of third level work packages requiring resources of the given resource category (see Figure 2E and related text in Paragraph 0033, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available); Paragraph 0106, Database 405 can contain information related to one or more construction projects, tasks to be completed, equipment available, materials, workers, historical information about projects, worker efficiencies, or any other data). In this case, Cami discloses availability of a given resource category since the AI auto-scheduler (Paragraph 0033, reinforcement learning) can determine available quantity for each resource category and the times of availability for each resource category (Paragraph 0033, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available; Paragraph 0038, assigned resources and scheduled with start and end dates). Examiner notes that “workers available” is considered one resource category and “equipment available” is considered another resource category. Also, as seen in Figure 2E, resources are allocated at the lowest level of the hierarchical work breakdown structure, which is also commonly known as the work package level or third level work package (e.g., lowest level as defined in the hierarchical work breakdown structure).
Although Cami discloses all the limitations above and availability information for each resource category, Cami does not specifically disclose wherein different resource categories are allocated to the same work package (e.g., each work package may need two or more different resource categories at the same time for a specified duration). However, Blackmon discloses coordinating availability of resources between a plurality of third level work packages requiring resources of the given resource category (Paragraph 0054, The constraints analysis module 510 determines whether a work package is valid by evaluating project constraints for the work package (e.g., availability of project materials, site space, work crews and site equipment at the proposed time of release to a work crew)). Therefore, Blackmon improves upon Cami by further specifying wherein different resource categories may be allocated in the same work package based on the quantity required for each resource (e.g., coordinate assignment of workers and equipment at the work package level based on workers and equipment availability).

Applicant's arguments filed on 12/24/2025 (related to the 101 Rejection) have been fully considered but they are not persuasive.
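The Cami disclosure quoted above frames scheduling as reinforcement learning whose objective is to reduce project down time (e.g., suggesting that a plumbing task be moved up while a framing inspection pends). A toy sketch of that kind of reward shaping, with entirely hypothetical names and values, not code from any cited reference:

```python
# Toy reward in the spirit of Cami's down-time objective: penalize leaving
# crews idle while schedulable tasks remain; otherwise reward the state.
def downtime_reward(idle_crews: int, ready_tasks: int) -> int:
    """Hypothetical reward shaping, for illustration only."""
    if ready_tasks > 0 and idle_crews > 0:
        return -idle_crews   # work is available but crews sit idle
    return 1                 # no avoidable down time in this state

# Framing on hold for inspection, plumbing ready, two crews idle -> penalized
print(downtime_reward(idle_crews=2, ready_tasks=1))  # -2
# All crews busy while tasks remain -> rewarded
print(downtime_reward(idle_crews=0, ready_tasks=3))  # 1
```

A trained agent would accumulate such rewards across episodes to learn a policy; other objectives (total cost, construction speed) would swap in different reward functions, which is the sense in which each scheduling objective has its own reward function.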
Applicant states, on page 10, that the claims are already following this example/claim (Example 47, claim 3) in at least the sense that the claims specify the remedial action that is executed to solve the specific problem, i.e., generating an optimum work package schedule that maximizes the prime objective to be achieved within the hierarchical work breakdown structure while automatically determining and assigning resource needs, timeframes, and dependencies between resources for a given third level work package requiring a resource of a given resource category, and between the plurality of third level work packages requiring resources of the given resource category, including automatically coordinating availability of the resources across multiple work packages based on the duration that each resource will be required, the available quantity of the given resource category, and the times of availability of the given resource category so that resources of the given resource category required for the plurality of third level work packages will be available when needed for the sequenced third level work packages. Examiner respectfully disagrees with Applicant.

Claim 1 is considered to be an abstract idea because the claim limitations are directed to “mathematical concepts” or “certain methods of organizing human activity”. In this case, scheduling work packages to maximize the one or more prime objectives is considered a mathematical calculation. Also, scheduling work packages based on availability of resources is a social activity (see MPEP 2106.04(a)(2)). If a claim limitation, under its broadest reasonable interpretation, covers mathematical concepts or managing interactions between people, then it falls within the “mathematical concepts” or “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Claim 1 includes additional elements: an artificial intelligence reinforcement learning engine; and databases. The artificial intelligence reinforcement learning engine is merely used to generate an optimum task schedule to sequence the items in the work package (Paragraph 0016). The databases are merely used to store tasks defined in the work package, resources, constraints, and scheduling objectives (Paragraphs 0078-0079). Merely stating that the step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)). These elements of “artificial intelligence reinforcement learning engine” and “databases” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer element. In this case, the plain meaning of the “training” step is merely describing how the reinforcement learning engine is receiving continuous data to iteratively adjust the values/parameters to minimize a loss function (e.g., based on a reward function). The “training” step is similar to Example 47, claim 2 of the 2024 AI Guidance, which uses mathematical calculations to iteratively adjust the values. Also, the step of “training” by receiving expert feedback is “well-known” in the art (see MPEP 2106.05(d)). Further, Examiner notes that the step of “generating an optimum work package schedule that maximizes the prime objective” is still part of the abstract idea (e.g., mathematical concepts such as using an algorithm/model to maximize a prime objective function) and not an action that integrates the abstract idea into a practical application. Therefore, claim 1 is not similar to Example 47, claim 3.
Lastly, claim 1 fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claim amounts to significantly more than the abstract idea itself. Thus, the claim is not patent eligible.

Independent claim 20 recites similar features and therefore is rejected for the same reasons as independent claim 1. Claims 2-3, 5-9, 12, 14-19, and 22-23 are rejected for having the same deficiencies as those set forth with respect to the claims that they depend from, independent claims 1 and 20.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-9, 12, 14-20, and 22-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without reciting significantly more.

Independent Claim 1

Step One - First, pursuant to step 1 in the January 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”) on 84 Fed. Reg. 53, claim 1 is directed to an apparatus, which is a statutory category.
Step 2A, Prong One - Claim 1 recites: A system for generating task schedules using an electronic device, the system comprising: at least one processor; at least one memory coupled to the at least one processor and containing instructions which, when executed by the at least one processor, causes the system to implement an auto-scheduler configured to: train an engine for a plurality of different scheduling objectives based on a plurality of data sets representing a plurality of different work projects including at least one of simulated work projects or actual completed work projects with each scheduling objective having a different reward function to produce a first model for an inference process, wherein each training data set includes: items representing a plurality of training work packages, wherein the plurality of training work packages are associated with a hierarchical work breakdown structure that comprises a first level representing the total work to be planned; a second level representing a logical breakdown of the first level total work into a plurality of subsystems; and a third level representing the plurality of work packages, each of the plurality of second level subsystems associated with one or more of the plurality of third level work packages, each of the plurality of third level work packages identifying specific tasks, resources, and a duration that each resource will be required; items representing all of the resources from all of the third level work packages, wherein each of the resources is characterized by a resource category; and items representing constraints on the resources including an available quantity and times of availability for each resource category, wherein the first model is trained for scheduling third level work packages so that required resources will be available when needed for each of the scheduling objectives; receive a plurality of work packages to be scheduled, comprising: a total work database containing items 
representing a plurality of work packages, wherein the plurality of work packages are associated with a hierarchical work breakdown structure that comprises a first level representing the total work to be planned; a second level representing a logical breakdown of the first level total work into a plurality of subsystems; and a third level representing the plurality of work packages, each of the plurality of second level subsystems associated with one or more of the plurality of third level work packages, each of the plurality of third level work packages identifying specific tasks, resources, and a duration that each resource will be required; a resources database containing items representing all of the resources from all of the third level work packages, wherein each of the resources is characterized by a resource category; a constraints database containing items representing constraints on the resources listed in the resources database, wherein the constraints database specifies an available quantity and times of availability for each resource category; and a scheduling objective database designating a prime objective that is to be achieved by the optimum task schedule; and generate an optimum work package schedule to sequence the third level work packages using the trained engine based on the first model applied to inputs from the total work database, the resources database, the constraints database, and the scheduling objectives database, wherein the optimum work package schedule maximizes the prime objective that is to be achieved within the hierarchical work breakdown structure, and wherein the auto-scheduler automatically determines and assigns resource needs, timeframes, and dependencies between resources for a given third level work package requiring a resource of a given resource category, and between the plurality of third level work packages requiring resources of the given resource category, including sequencing of tasks and resources between the 
third level work packages associated with each second level subsystem, and automatically coordinates availability of the resources across multiple work packages based on the duration that each resource will be required, the available quantity of the given resource category, and the times of availability of the given resource category so that resources of the given resource category required for the plurality of third level work packages will be available when needed for the sequenced third level work packages.

These claim elements are considered to be abstract ideas because they are directed to “mathematical concepts” which include “mathematical calculations.” In this case, scheduling work packages to maximize the one or more prime objectives is considered a mathematical calculation (see MPEP 2106.04(a)(2)). If a claim limitation, under its broadest reasonable interpretation, covers mathematical calculations, then it falls within the “mathematical concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong Two - The judicial exception is not integrated into a practical application. Claim 1 includes additional elements: using an electronic device; a processor; a memory; an artificial intelligence auto-scheduler configured to train a reinforcement learning engine through a deep reinforcement learning training process; a work database; a resource database; a constraint database; and a scheduling objective database. The electronic device is merely used to generate a task schedule (Paragraph 0027). The processor is merely used to execute instructions (Paragraph 0034). The memory is merely used to store instructions (Paragraph 0034). The artificial intelligence reinforcement learning engine is merely used to generate an optimum task schedule to sequence the items in the work package (Paragraph 0016).
The databases are merely used to store tasks defined in the work package, resources, constraints, and scheduling objectives (Paragraphs 0078-0079). Merely stating that the step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)). These elements of “electronic device,” “processor,” “memory,” “artificial intelligence reinforcement learning engine,” and “databases” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer element. In this case, the plain meaning of “training” is describing how the reinforcement learning engine is learning to generate specific types of outputs based on specific types of inputs. However, claim 1 does not include any details about how the trained reinforcement learning engine operates (see 2024 AI Guidance, Example 47, Claim 2). Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally “apply” the concept of generating an optimum work package schedule. The specification shows that the electronic device is merely used to generate a task schedule (Paragraph 0027). The processor is merely used to execute instructions (Paragraph 0034). The memory is merely used to store instructions (Paragraph 0034). The artificial intelligence reinforcement learning engine is merely used to generate an optimum task schedule to sequence the items in the work package (Paragraph 0016).
The databases are merely used to store tasks defined in the work package, resources, constraints, and scheduling objectives (Paragraphs 0078-0079). Also, as discussed above, the artificial intelligence reinforcement learning engine is recited at a high level of generality and is not directed to an improvement in machine learning technology. Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible. See 2024 AI Guidance, Example 47, claims 2-3.

Independent claim 20 is directed to a method at step 1, which is a statutory category. Claim 20 recites similar limitations as claim 1 and is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. The claim is not patent eligible.

Dependent claims 2-3 are not directed to any additional claim elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above, such as by specifying what is included in the databases. These processes are similar to the abstract idea noted in the independent claim because they further the limitations of the independent claim which are directed to “certain methods of organizing human activity” which include “managing interactions between people.” In addition, no additional elements are integrated into the abstract idea. Therefore, the claims still recite an abstract idea that can be grouped into certain methods of organizing human activity.

Dependent claims 5-9 are not directed to any additional claim elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above, such as by specifying: wherein the reinforcement learning includes a neural network; descriptions of what is included in the generated schedule; other services provided by the AI auto-scheduler; and wherein the services are provided as cloud-based micro-services.
These processes are similar to the abstract idea noted in the independent claim because they further the limitations of the independent claim which are directed to “certain methods of organizing human activity” which include “managing interactions between people.” In addition, no additional elements are integrated into the abstract idea. Therefore, the claims still recite an abstract idea that can be grouped into certain methods of organizing human activity. Lastly, Examiner notes that merely describing a neural network and other services provided is not enough to show any improvement in computer functionality (see MPEP 2106.06(b)).

Dependent claims 14-19 and 22-23 are not directed to any additional claim elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above, such as by specifying wherein the AI auto-scheduler: automatically sequences third level items relating to a given second level subsystem; automatically coordinates availability of resources across multiple work packages so that the multiple work packages are optimized with respect to a given resource; receives feedback relating to execution of the sequenced third level items in the work package and automatically updates the sequence of third level items based on a list of uncompleted third level items and resources needed for completion of such items; receives industry expert feedback regarding scheduling best practices; automatically updates the sequence of third level items based on the list of uncompleted third level items and the resources needed for completion of such items utilizing the updated trained reinforcement learning engine; produces a plurality of optimized third level work package candidates and presents alternative options to the schedule of third level work packages; and provides insights on scheduling of the third level work packages so that industry experts can be trained by the trained reinforcement learning
engine. These processes are similar to the abstract idea noted in the independent claim because they further the limitations of the independent claim which are directed to “certain methods of organizing human activity” which include “managing interactions between people.” In addition, no additional elements are integrated into the abstract idea. Therefore, the claims still recite an abstract idea that can be grouped into certain methods of organizing human activity.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5-9, 12, 14-20, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Cami (WO 2022/026520 A1), in view of Kintsakis (Kintsakis, A.M., Psomopoulos, F.E. and Mitkas, P.A., 2019. Reinforcement learning based scheduling in a workflow management system.
Engineering Applications of Artificial Intelligence, 81, pp.94-106), in further view of Blackmon (US 2005/0171790 A1).

Regarding claim 1 (Previously Presented), Cami discloses a system for generating task schedules using an electronic device, the system comprising (Figure 3, item 300, Electronic Device; Paragraph 0010, Aspects of the disclosed technology can comprise methods, systems, and computer readable medium): at least one processor; at least one memory coupled to the at least one processor and containing instructions which, when executed by the at least one processor, causes the system to implement an artificial intelligence (AI) auto-scheduler configured to (Paragraph 0012, The system can comprise one or more processors coupled to a memory, the memory containing instructions to determine a completion metric for a construction activity, the instructions when executed configured to perform the steps of: selecting, one or more parameters related to a construction activity; receiving, by one or more processors, a current condition of a construction project; determining, by the one or more processors, from possible tasks, a scheduled sequence of tasks to complete the construction activity; evaluating, by the one or more processors, a current status of at least one task from the schedule of tasks; and computing, by one or more processors, the completion metric for the construction project based on the current condition of the construction project and the current status of at least one task from the schedule sequence of tasks; Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning): train an artificial intelligence reinforcement learning engine through a deep reinforcement learning training process for a plurality of different scheduling objectives based on a plurality of data sets representing a plurality of different work projects including at least one of simulated work projects or actual completed work projects with each
scheduling objective having a different reinforcement learning reward function to produce a first artificial intelligence reinforcement engine model for an … process (see Figures 2E, 2D, and 6; Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning; Paragraph 0044, In yet other examples, the algorithm can optimize for one aspect of a construction project, such as speed of the construction, total cost of the construction, or the use of particular types of materials within the project, such as, for example, eco-friendly materials, fire proof materials, or thermally resistive materials; Paragraph 0125, Ensemble methods can be used, which primarily use the idea of combining several predictive models, which can be supervised ML or unsupervised ML to get higher quality predictions than each of the models could provide on their own. As one example, random forest algorithms can be used. Neural networks and deep learning techniques can also be used for the techniques described above; Paragraph 0136, Currently, scheduling for a construction project is done manually. A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. Once the algorithm is sufficiently trained, it can be used on all projects. An example state in the algorithm can be that a framing task needs to be put on hold for inspection. The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed. The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself. The object of the algorithm is to reduce down time on the project; Paragraph 0137, The environment for the algorithm is defined by the tasks available for a project and the workers available.
The experimentation agent can be rewarded for keeping resources on a task when there are tasks to complete and inversely penalized when letting tasks sit idle or losing resources outright. Using the reward/penalty procedure the agent can train itself through experimentation to come up with a policy. Additionally, the policy can be updated from additional data which can form a database of already implemented examples. The resultant policy can be used (exploited) on real construction projects. In some examples, multiple paths can be generated and then screened using other machine learning techniques. For example, the algorithm can learn from a “good” supervisor what to do during specific scenarios. Reinforced learning can be used to teach the algorithm because we have data from previous projects of how schedules were rearranged to deal with delays. Once the algorithm, which can include a convoluted neural network (CNN), is trained, the algorithm can then provide task adjustments in real time to help with projects; Examiner notes that Cami discloses “a plurality of different scheduling objectives” because the reinforcement learning can be trained to: reduce downtime of the project; reduce delays of the project; and/or optimize speed of the construction. Also, Examiner notes that Cami is learning over time what is the best action for a given state. In this case, the reinforcement learning predicts/infers the best action based on real time data. Then, the reinforcement learning receives feedback that includes a successful or failed action (e.g., rewards, penalties, learning from a good supervisor). 
Therefore, based on broadest reasonable interpretation in light of the specification, Cami discloses to “train an artificial intelligence reinforcement training” because the reinforcement learning is updated over time based on feedback provided from actual completed work projects), wherein each training data set includes (Paragraph 0137, Reinforced learning can be used to teach the algorithm because we have data from previous projects of how schedules were rearranged to deal with delays): items representing a plurality of training work packages, wherein the plurality of training work packages are associated with a hierarchical work breakdown structure that comprises a first level representing the total work to be planned; a second level representing a logical breakdown of the first level total work into a plurality of subsystems; and a third level representing the plurality of work packages, each of the plurality of second level subsystems associated with one or more of the plurality of third level work packages, each of the plurality of third level work packages identifying specific tasks, resources, and a duration that each resource will be required (Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning; Paragraph 0038, In some examples, work blocs can be further broken down into assignable actions or sub tasks, which can be assigned resources and scheduled with start and end dates; Paragraph 0077, Each task can be scheduled by start date and end date. Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined. 
The duration of the task can be determined by several factors such as task duration is automatically adjusted from historical data; Paragraph 0084, When the project is first created, scheduled and workers can be assigned tasks that include the UOM to be done and hours allowed for the work; Paragraph 0095, Figure 2E similar to Figure 2D illustrates an example schedule 275 which can reflect updates or completions to the schedule based on user inputs or a calculation of a percentage of an event being completed; Paragraph 0106, Database 405 can contain information related to one or more construction projects, tasks to be completed, equipment available, materials, workers, historical information about projects, worker efficiencies, or any other data which can be used by a simulation module or other trained machine learning or other algorithmic model to predict information about related to a construction project. For example, database 405 may contain a relational database containing information described within this disclosure. Any other database structure can also be used. 
In some examples, information obtained from any of the modules described with respect to Figure 4 can be stored within database 405; Examiner notes that the Gantt chart in Figure 2E represents a hierarchical work breakdown structure, wherein “HealthLogicX” is the first level, “Drywall & Taping” is the second level, and “Partition Type, Patch & Prep Existing Walls, Corner Beads, and Finished Ends” is the third level representing the plurality of work packages); items representing all of the resources from all of the third level work packages, wherein each of the resources is characterized by a resource category (Paragraph 0035, In some examples, each task can line up with a building product, a worker assigned to the task, or a division; Paragraph 0040, automatic scheduling based on historical data related to a tasks, project, construction site, specific details related to the site (such as weather, location, local holidays), skill level of workers, equipment availability and level; Paragraph 0077, Each task can be scheduled by start date and end date. Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined; Examiner interprets the “resource type such as worker or equipment” as the “resource category”); and items representing constraints on the resources including an available quantity and times of availability for each resource category (Paragraph 0021, This drawback limits the performance of state-of-the-art machine learning models, that are typically trained using stationary batches of data without accounting for situations in which the number of available machines may change; Paragraph 0032, In some examples, constraints can be added on the construction project which can affect the schedule of tasks. 
For instance, if a project must be completed more promptly or in a smaller time frame, the sequenced schedule of tasks can be updated. The completion metric can be updated for the new sequenced schedule of tasks. In other examples, if a delay or impossibility to complete a particular task, due to material or labor shortage, or other delays (e.g. weather, zoning changes, permitting issues), the algorithm can update the completion metric; Paragraph 0033, In some examples, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available)), wherein the first artificial intelligence reinforcement engine model is trained for scheduling third level work packages so that required resources will be available when needed for each of the scheduling objectives (Paragraph 0044, In yet other examples, the algorithm can optimize for one aspect of a construction project, such as speed of the construction, total cost of the construction, or the use of particular types of materials within the project, such as, for example, eco-friendly materials, fire proof materials, or thermally resistive materials; Paragraph 0136, Currently, scheduling for a construction project is done manually. A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. Once the algorithm is sufficiently trained, it can be used on all projects. An example state in the algorithm can be that a framing task needs to be put on hold for inspection. The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed. The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself. 
The object of the algorithm is to reduce down time on the project; Paragraph 0137, The environment for the algorithm is defined by the tasks available for a project and the workers available. The experimentation agent can be rewarded for keeping resources on a task when there are tasks to complete and inversely penalized when letting tasks sit idle or losing resources outright. Using the reward/penalty procedure the agent can train itself through experimentation to come up with a policy. Additionally, the policy can be updated from additional data which can form a database of already implemented examples. The resultant policy can be used (exploited) on real construction projects. In some examples, multiple paths can be generated and then screened using other machine learning techniques. For example, the algorithm can learn from a “good” supervisor what to do during specific scenarios. Reinforced learning can be used to teach the algorithm because we have data from previous projects of how schedules were rearranged to deal with delays. Once the algorithm, which can include a convolutional neural network (CNN), is trained, the algorithm can then provide task adjustments in real time to help with projects); receive a plurality of work packages to be scheduled, comprising (Paragraph 0028, A scheduled sequence of tasks can be a sequence of tasks wherein each task can be related to another task or be independent of another tasks, and each task is assigned to a particular time in which it is to be completed or expected to be completed. The tasks can collectively form the steps required to finish the project. 
Any arbitrary granularity of tasks is possible and any task can be divided into sub-tasks); a total work database containing items representing a plurality of work packages, wherein the plurality of work packages are associated with a hierarchical work breakdown structure that comprises a first level representing the total work to be planned; a second level representing a logical breakdown of the first level total work into a plurality of subsystems; and a third level representing the plurality of work packages, each of the plurality of second level subsystems associated with one or more of the plurality of third level work packages, each of the plurality of third level work packages identifying specific tasks, resources, and a duration that each resource will be required (Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning; Paragraph 0077, Each task can be scheduled by start date and end date. Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined. The duration of the task can be determined by several factors; Paragraph 0084, When the project is first created, scheduled and workers can be assigned tasks that include the UOM to be done and hours allowed for the work; Paragraph 0095, Figure 2E similar to Figure 2D illustrates an example schedule 275 which can reflect updates or completions to the schedule based on user inputs or a calculation of a percentage of an event being completed; Paragraph 0106, Database 405 can contain information related to one or more construction projects, tasks to be completed, equipment available, materials, workers, historical information about projects, worker efficiencies, or any other data which can be used by a simulation module or other trained machine learning or other algorithmic model to predict information about related to a construction project. 
For example, database 405 may contain a relational database containing information described within this disclosure. Any other database structure can also be used. In some examples, information obtained from any of the modules described with respect to Figure 4 can be stored within database 405; Examiner notes that the Gantt chart in Figure 2E represents a hierarchical work breakdown structure, wherein “HealthLogicX” is the first level, “Drywall & Taping” is the second level, and “Partition Type, Patch & Prep Existing Walls, Corner Beads, and Finished Ends” is the third level representing the plurality of work packages); a resources database containing items representing all of the resources from all of the third level work packages (Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning; Paragraph 0077, Each task can be scheduled by start date and end date. Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined. The duration of the task can be determined by several factors; Paragraph 0095, Figure 2E similar to Figure 2D illustrates an example schedule 275 which can reflect updates or completions to the schedule based on user inputs or a calculation of a percentage of an event being completed; Paragraph 0106, Database 405 can contain information related to one or more construction projects, tasks to be completed, equipment available, materials, workers, historical information about projects, worker efficiencies, or any other data which can be used by a simulation module or other trained machine learning or other algorithmic model to predict information about related to a construction project. For example, database 405 may contain a relational database containing information described within this disclosure. Any other database structure can also be used. 
In some examples, information obtained from any of the modules described with respect to Figure 4 can be stored within database 405), wherein each of the resources is characterized by a resource category (Paragraph 0034, In the examples described herein, information can be sent to a computer via input from a user device, and multiple inputs from multiple users (e.g. multiple workers on a job site) can be aggregated or stored on a database for analysis; Paragraph 0035, In some examples, each task can line up with a building product, a worker assigned to the task, or a division; Paragraph 0040, automatic scheduling based on historical data related to a tasks, project, construction site, specific details related to the site (such as weather, location, local holidays), skill level of workers, equipment availability and level; Paragraph 0077, Each task can be scheduled by start date and end date. Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined); a constraints database containing items representing constraints on the resources listed in the resources database (Paragraph 0032, In some examples, constraints can be added on the construction project which can affect the schedule of tasks. For instance, if a project must be completed more promptly or in a smaller time frame, the sequenced schedule of tasks can be updated. In some examples, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available); Paragraph 0034, In the examples described herein, information can be sent to a computer via input from a user device, and multiple inputs from multiple users (e.g. 
multiple workers on a job site) can be aggregated or stored on a database for analysis; Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning), wherein the constraints database specifies an available quantity and times of availability for each resource category (Paragraph 0021, This drawback limits the performance of state-of-the-art machine learning models, that are typically trained using stationary batches of data without accounting for situations in which the number of available machines may change; Paragraph 0032, In some examples, constraints can be added on the construction project which can affect the schedule of tasks. For instance, if a project must be completed more promptly or in a smaller time frame, the sequenced schedule of tasks can be updated. The completion metric can be updated for the new sequenced schedule of tasks. In other examples, if a delay or impossibility to complete a particular task, due to material or labor shortage, or other delays (e.g. weather, zoning changes, permitting issues), the algorithm can update the completion metric; Paragraph 0033, In some examples, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available); Paragraph 0040, automatic scheduling based on historical data related to a tasks, project, construction site, specific details related to the site (such as weather, location, local holidays), skill level of workers, equipment availability and level; Paragraph 0077, Each task can be scheduled by start date and end date. 
Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined); and a scheduling objective database designating a prime objective that is to be achieved by the optimum task schedule (Paragraph 0032, In some examples, constraints can be added on the construction project which can affect the schedule of tasks. For instance, if a project must be completed more promptly or in a smaller time frame, the sequenced schedule of tasks can be updated. In some examples, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available); Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning; Paragraph 0034, In the examples described herein, information can be sent to a computer via input from a user device, and multiple inputs from multiple users (e.g. multiple workers on a job site) can be aggregated or stored on a database for analysis; Paragraph 0044, In yet other examples, the algorithm can optimize for one aspect of a construction project, such as speed of the construction, total cost of the construction, or the use of particular types of materials within the project, such as, for example, eco-friendly materials, fire proof materials, or thermally resistive materials); and generate an optimum work package schedule to sequence the third level work packages using the trained artificial intelligence reinforcement learning engine based on the first artificial intelligence reinforcement engine model applied to inputs from the total work database, the resource database, the constraints database, and the scheduling objectives database, wherein the optimum work package schedule maximizes the prime objective that is to be achieved within the hierarchical work breakdown structure (see Figures 2E, 2D, and 6; Paragraph 0033, The schedule of tasks can be generated using a reinforcement 
learning; Paragraph 0044, In yet other examples, the algorithm can optimize for one aspect of a construction project, such as speed of the construction, total cost of the construction, or the use of particular types of materials within the project, such as, for example, eco-friendly materials, fire proof materials, or thermally resistive materials; Paragraph 0136, Currently, scheduling for a construction project is done manually. A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. Once the algorithm is sufficiently trained, it can be used on all projects. An example state in the algorithm can be that a framing task needs to be put on hold for inspection. The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed. The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself. 
The object of the algorithm is to reduce down time on the project; Paragraph 0137, Additionally, the policy can be updated from additional data which can form a database of already implemented examples; Examiner notes that the task schedule is always generated at the work package level, then the tasks are rolled up to the second and first level based on a defined hierarchical work breakdown structure), and wherein the AI auto-scheduler automatically determines and assigns resource needs, timeframes, and dependencies between resources for a given third level work package requiring a resource of a given resource category, and between the plurality of third level work packages requiring resources of the given resource category, including sequencing of tasks and resources between the third level work packages associated with each second level subsystem, and automatically coordinates availability of the resources across multiple work packages based on the duration that each resource will be required, the available quantity of the given resource category, and the times of availability of the given resource category so that resources of the given resource category required for the plurality of third level work packages will be available when needed for the sequenced third level work packages (Paragraph 0021, This drawback limits the performance of state-of-the-art machine learning models, that are typically trained using stationary batches of data without accounting for situations in which the number of available machines may change; Paragraph 0026, Aspects of the current disclosure include intelligent workflow generation and metrics related to the workflow and construction projects based on updatable parameters (e.g. 
tasks, sub-tasks, weather, workers, resources, tools); Paragraph 0028, A scheduled sequence of tasks can be a sequence of tasks wherein each task can be related to another task or be independent of another tasks, and each task is assigned to a particular time in which it is to be completed or expected to be completed. The tasks can collectively form the steps required to finish the project. Any arbitrary granularity of tasks is possible and any task can be divided into sub-tasks; Paragraph 0032, In some examples, constraints can be added on the construction project which can affect the schedule of tasks. For instance, if a project must be completed more promptly or in a smaller time frame, the sequenced schedule of tasks can be updated. The completion metric can be updated for the new sequenced schedule of tasks; Paragraph 0033, In some examples, the schedule of tasks can be updated based on information from the environment or indications that a particular task cannot be completed. In other examples, if a particular task becomes too costly, it can be attempted to be replaced with another task. In some examples, the schedule of tasks can be generated using a machine learning algorithm, such as, for example, through one or more experimentation agents. Other algorithms include genetic algorithms, reinforcement learning, hybrid deep neural network methods, neural networks, generative adversarial networks, or heuristic optimization methods. The generation of tasks can be done in non-brute force computation, non-polynomial time, or in a time that is computationally feasible to provide real-time or near-real time updates to the schedule of tasks and completion metrics. 
In some examples, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available); Paragraph 0038, In some examples, work blocs can be further broken down into assignable actions or sub tasks, which can be assigned resources and scheduled with start and end dates; Paragraph 0040, Automatic scheduling based on historical data related to a tasks, project, construction site, specific details related to the site (such as weather, location, local holidays), skill level of workers, equipment availability and level; Paragraph 0077, Each task can be scheduled by start date and end date. Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined. The duration of the task can be determined by several factors; Paragraph 0079, Additionally, the technology disclosed herein can learn to schedule tasks and have suggestions ready if tasks are blocked or delayed to keep available resources being utilized towards completing the project; Examiner notes that a workflow includes dependencies between the plurality of work packages and resources). Cami discloses to: predict (e.g., infer), using a trained reinforcement learning, the best action for a given state (see at least Paragraphs 0125-0134); and wherein the reinforcement engine is trained by combining reinforcement learning (RL) and deep learning techniques (Paragraph 0125, Examiner notes that combining reinforcement learning with deep learning is known as deep reinforcement learning). Although Cami discloses all the limitations above and inherently discloses an inference process, Cami does not specifically disclose how the reinforcement learning is inferring the best action for a given state. 
However, Kintsakis discloses to: train an artificial intelligence reinforcement learning engine through a deep reinforcement learning training process for a … scheduling objectives based on a plurality of data sets representing a plurality of different work projects including at least one of simulated work projects or actual completed work projects with each scheduling objective having a different reinforcement learning reward function to produce a first artificial intelligence reinforcement engine model for an inference process (Page 96, 3.2 Proposed Solution, In this direction, we have expanded the capabilities of our previous work, the Hermes WMS (Kintsakis et al., 2017), to a system that can continuously learn to improve its workflow execution performance with respect to minimizing workflow makespan. Our approach entails a built-in capability of the WMS to accurately collect historical task execution data with the purpose of training models off-line that can estimate task runtime and failure probability for task executions of varying input sizes and across different execution sites. These models are then used on-line in inference mode to inform scheduling decisions. The outputs of these models along with other dynamically generated features are passed on to a policy network, which is in fact a neural model capable of performing scheduling decisions. More specifically, the policy network is capable of identifying a near optimal scheduling decision when presented with all possible scheduling choices that the system can immediately act upon, at any given point in time. The inability to generate high quality labeled data for an NP-Hard problem such as scheduling DAG workflows has urged us to adopt a reinforcement learning approach towards training the policy network. Due to the sheer number of episodes required for reinforcement learning, the policy network is trained off-line in a simulated environment that closely resembles the real one. 
The simulated environment consists of workflow DAGs, task characteristics and input sizes as well as execution sites, similar to those encountered in the real environment. As will become apparent, this allows for the training of a policy network that can deliver sterling performance when tasked with performing scheduling decisions in the real world environment; In this case, the scheduling objective is to minimize the workflow makespan. Applicant defines, on page 40, that the inference process is used to give the recommendation from the saved AI model. Based on broadest reasonable interpretation in light of the specification, Kintsakis discloses an inference process because the inference mode identifies/recommends a scheduling decision when presented with all possible scheduling choices). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the reinforcement learning used to generate an optimum work package schedule (e.g., rearrange tasks to reduce downtime and/or optimize speed of construction), wherein the optimum work package includes predicting the best action for a given state (see at least Paragraphs 0125-0136) of the invention of Cami to further specify how the reinforcement learning is inferring the best action of the invention of Kintsakis because doing so would allow the reinforcement learning to identify a near optimal scheduling decision when presented with all possible scheduling choices (see Kintsakis, Page 96, 3.2 Proposed Solution). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. 
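For illustration only, the inference step Kintsakis describes — presenting every currently actionable scheduling choice to a trained policy model and selecting the highest-probability one — can be sketched as follows. The linear scorer, the feature set (estimated runtime, failure probability), and the weights are illustrative assumptions standing in for the trained policy network, not anything taken from the reference.

```python
import math

def softmax(scores):
    """Normalize raw scores into a probability over choices."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def score(weights, features):
    # Stand-in for the trained policy network: a linear scorer over
    # per-choice features (here: estimated runtime, failure probability).
    return sum(w * f for w, f in zip(weights, features))

def choose_action(weights, candidates):
    """candidates: {choice name: feature vector}. Return the choice the
    policy assigns the highest probability."""
    names = list(candidates)
    probs = softmax([score(weights, candidates[n]) for n in names])
    return max(zip(names, probs), key=lambda item: item[1])[0]

weights = [-1.0, -5.0]            # prefer short runtime, low failure risk
candidates = {
    "run task A on site 1": [4.0, 0.10],
    "run task A on site 2": [2.0, 0.05],
    "defer task A":         [6.0, 0.00],
}
best = choose_action(weights, candidates)   # -> "run task A on site 2"
```

The point of the sketch is only the shape of the decision: all immediately actionable choices are scored at once, and the model's output directly selects a near-optimal one.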
Cami discloses to automatically coordinate availability of the resources across multiple work packages based on the duration that each resource will be required and the available quantity of the given resource category (Paragraph 0021, number of available machines available; Paragraph 0033, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available); Paragraph 0038, assigned resources and scheduled with start and end dates; Paragraph 0040, automatic scheduling based on historical data related to a tasks, project, construction site, specific details related to the site, skill level of workers, equipment availability and level). Although Cami discloses all the limitations above and availability of the given resource category (e.g., materials available, workers available, and equipment available), Cami does not specifically disclose coordinating availability of multiple resource categories at the same time (e.g. assigning workers and equipment when both resource categories are available). 
However, Blackmon discloses … and items representing constraints on the resources including an available quantity and times of availability for each resource category, … wherein the constraints database specifies an available quantity and times of availability for each resource category; … coordinates availability of the resources across multiple work packages based on the duration that each resource will be required, the available quantity of the given resource category, and the times of availability of the given resource category so that resources of the given resource category required for the plurality of third level work packages will be available when needed for the sequenced third level work packages (Paragraph 0054, The constraints analysis module 510 determines whether a work package is valid by evaluating project constraints for the work package (e.g., availability of project materials, site space, work crews and site equipment at the proposed time of release to a work crew). Additionally, the constraints analysis module 510 works with the creation module 500 to allow a user to modify work packages and with the sequencing module 505 to allow a user to modify the sequence of work packages. Thus, the constraints analysis module 510 evaluates constraints on a given work package to allow a user to determine whether to release the work package to a work crew; Paragraph 0022, The computerized simulation model automatically generates a time and cost estimate for the work package based on project controls data (e.g., a library of unit time rates and unit cost rates) accessed in the various project databases). 
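The constraints analysis Blackmon's Paragraph 0054 describes — releasing a work package only when its required resources are available at the proposed time — can be sketched as a simple validity check. The data layout, field names, and example values below are assumptions made for illustration, not Blackmon's implementation.

```python
from dataclasses import dataclass

# "Availability" holds the constraints-database entries the claim recites:
# an available quantity and times of availability per resource category.
@dataclass
class Availability:
    quantity: int    # units of the category on hand
    windows: list    # [(start, end)] times the category is available

def is_valid(work_package, constraints):
    """Return True only if every resource category the package needs is
    available in sufficient quantity over the whole proposed window."""
    start, end = work_package["window"]
    for category, qty in work_package["needs"].items():
        avail = constraints.get(category)
        if avail is None or avail.quantity < qty:
            return False    # not enough units of this category
        if not any(ws <= start and end <= we for ws, we in avail.windows):
            return False    # no window covers the proposed release time
    return True

constraints = {
    "work crew": Availability(quantity=3, windows=[(0, 40)]),
    "site crane": Availability(quantity=1, windows=[(10, 20)]),
}
ok = is_valid({"window": (10, 15), "needs": {"work crew": 2, "site crane": 1}},
              constraints)     # valid: both categories covered
bad = is_valid({"window": (5, 15), "needs": {"site crane": 1}},
               constraints)    # invalid: crane unavailable before t=10
```

A user-facing constraints module would layer modification and sequencing on top, but the release decision reduces to a check of this shape.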
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the reinforcement learning used to generate an optimum work package schedule (e.g., rearrange tasks to reduce downtime and/or optimize speed of construction), wherein the optimum work package is generated based on availability of the resources across multiple work packages of the invention of Cami to further specify availability of the resources across multiple work packages based on the duration that each resource will be required of the invention of Blackmon because doing so would allow the reinforcement learning to evaluate project constraints for the work package (e.g., availability of project materials, site space, work crews and site equipment) at the proposed time of release to a work crew (see Blackmon, Paragraph 0054). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claim 2 (Original), which is dependent on claim 1, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 1. 
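The coordination at issue in the combination above — sequencing work packages so that a shared resource category, with a fixed available quantity, is free when each package needs it for its required duration — can be sketched as a greedy earliest-start scheduler. All names, capacities, and the duration-first ordering heuristic are illustrative assumptions, not the claimed method or either reference's algorithm.

```python
from dataclasses import dataclass

@dataclass
class WorkPackage:
    name: str
    category: str    # resource category, e.g. "crew"
    quantity: int    # units of the category required
    duration: int    # time units the resources are held

def schedule(packages, capacity):
    """Greedy earliest-start plan: {package name: (start, end)} such that
    committed units of a category never exceed its available quantity."""
    in_use = []      # (end_time, category, quantity) already scheduled
    plan = {}

    def used(cat, t):
        # Conservative load estimate: count every commitment that has not
        # yet ended, so a feasible start stays feasible for the whole run.
        return sum(q for end, c, q in in_use if c == cat and end > t)

    for wp in sorted(packages, key=lambda p: p.duration, reverse=True):
        t = 0
        # slide the start forward until enough units of the category free up
        while used(wp.category, t) + wp.quantity > capacity[wp.category]:
            t = min(end for end, c, q in in_use if c == wp.category and end > t)
        plan[wp.name] = (t, t + wp.duration)
        in_use.append((t + wp.duration, wp.category, wp.quantity))
    return plan

capacity = {"crew": 2}
packages = [
    WorkPackage("drywall", "crew", 2, 3),
    WorkPackage("taping", "crew", 1, 2),
    WorkPackage("corner beads", "crew", 1, 2),
]
plan = schedule(packages, capacity)   # taping/corner beads wait for drywall
```

The sketch assumes every package's demand fits within the category's total quantity; an optimizing scheduler (e.g. the claimed RL engine) would search over orderings rather than fix one greedily.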
Cami further discloses wherein: the total work database comprises at least a first list of a plurality of work packages to be performed (Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning; Paragraph 0095, Figure 2E similar to Figure 2D illustrates an example schedule 275 which can reflect updates or completions to the schedule based on user inputs or a calculation of a percentage of an event being completed; Paragraph 0106, Database 405 can contain information related to one or more construction projects, tasks to be completed, equipment available, materials, workers, historical information about projects, worker efficiencies, or any other data which can be used by a simulation module or other trained machine learning or other algorithmic model to predict information about related to a construction project. For example, database 405 may contain a relational database containing information described within this disclosure. Any other database structure can also be used. 
In some examples, information obtained from any of the modules described with respect to Figure 4 can be stored within database 405; Examiner notes that the Gantt chart in Figure 2E represents a hierarchical work breakdown structure, wherein “HealthLogicX” is the first level, “Drywall & Taping” is the second level, and “Partition Type, Patch & Prep Existing Walls, Corner Beads, and Finished Ends” is the third level representing the plurality of work packages); the resources database comprises: a second list of resource requirements for each work package of the plurality of work package to be performed (Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning; Paragraph 0095, Figure 2E similar to Figure 2D illustrates an example schedule 275 which can reflect updates or completions to the schedule based on user inputs or a calculation of a percentage of an event being completed; Paragraph 0106, Database 405 can contain information related to one or more construction projects, tasks to be completed, equipment available, materials, workers, historical information about projects, worker efficiencies, or any other data which can be used by a simulation module or other trained machine learning or other algorithmic model to predict information about related to a construction project. For example, database 405 may contain a relational database containing information described within this disclosure. Any other database structure can also be used. 
In some examples, information obtained from any of the modules described with respect to Figure 4 can be stored within database 405); and a third list of time requirements for each resource requirement of the plurality of work packages to be performed (Paragraph 0137, The environment for the algorithm is defined by the tasks available for a project and the workers available); and the constraints database comprises: a fourth list of resource types for each resource requirement of the plurality of work packages to be performed (Paragraph 0039, automatic scheduling based on historical data related to a tasks, project, construction site, specific details related to the site (such as weather, location, local holidays), skill level of workers, equipment availability and level); a fifth list of a quantity of each resource type for each resource requirement of the plurality of work packages to be performed (Paragraph 0021, Once configured, current machine learning algorithms are set in stone and unable to handle real-time changes to a production schedule. This drawback limits the performance of state-of-the-art machine learning models, that are typically trained using stationary batches of data without accounting for situations in which the number of available machines may change (machine breakdown) and the information becomes incrementally available over time (e.g. utility price)); and a sixth list of the time availability for each resource type for each resource requirement of the plurality of work packages to be performed (Paragraph 0137, The environment for the algorithm is defined by the tasks available for a project and the workers available).

Regarding claim 3 (Original), which depends from claim 1, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 1.
Cami further discloses wherein the scheduling objective database is selected from a group of scheduling objectives comprising at least: minimize average slow down; minimize average completion time; maximize efficiency of resource utilization; prioritize meeting desired customer dates; and prioritize meeting customer cost estimates (Paragraph 0044, In yet other examples, the algorithm can optimize for one aspect of a construction project, such as speed of the construction, total cost of the construction, or the use of particular types of materials within the project, such as, for example, eco-friendly materials, fire proof materials, or thermally resistive materials; It can be noted that the claim language is written in alternative form. The limitation taught by Cami is based on “minimize average completion time”).

Regarding claim 5 (Previously Presented), which depends from claim 1, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 1. Cami further discloses wherein the AI auto-scheduler generates the optimum work package schedule using neural networks that have been trained by reinforcement learning to maximize the prime objective (Paragraph 0136, A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. Once the algorithm is sufficiently trained, it can be used on all projects. An example state in the algorithm can be that a framing task needs to be put on hold for inspection. The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed. The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself.
The object of the algorithm is to reduce down time on the project; Paragraph 0137, The environment for the algorithm is defined by the tasks available for a project and the workers available. The experimentation agent can be rewarded for keeping resources on a task when there are tasks to complete and inversely penalized when letting tasks sit idle or losing resources outright. Using the reward/penalty procedure the agent can train itself through experimentation to come up with a policy. Additionally, the policy can be updated from additional data which can form a database of already implemented examples. The resultant policy can be used (exploited) on real construction projects. In some examples, multiple paths can be generated and then screened using other machine learning techniques. For example, the algorithm can learn from a “good” supervisor what to do during specific scenarios. Reinforced learning can be used to teach the algorithm because we have data from previous projects of how schedules were rearranged to deal with delays. Once the algorithm, which can include a convoluted neural network (CNN), is trained, the algorithm can then provide task adjustments in real time to help with projects).

Regarding claim 6 (Original), which depends from claim 5, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 5. Cami further discloses wherein the neural networks comprise one input layer, one or more hidden layers with a plurality of neurons, and one output layer (Paragraph 0013, Aspects of the disclosed technology can comprise a method of generating a completion metric related to a construction project by using a trained neural network.
The method can comprise receiving, in an input layer of the neural network, one or more inputs related to the construction project, the one or more inputs including at least information obtained from (i) a smarttool or (ii) a machine vision algorithm; evaluating, through a middle layer, the received one or more inputs; and outputting, in an output layer, one or more outputs from which a completion metric can be generated, the one or more outputs comprising at least one of (i) an end date or (ii) estimated remaining work hours. The trained neural network can be trained on historic data related to the output layer to generate weights for connections between the input layer and the middle layer and between the middle layer and the output layer).

Regarding claim 7 (Original), which depends from claim 1, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 1. Cami further discloses wherein the optimum task schedule comprises a sequence of the first list of a plurality of work packages to be performed, a start date and an end date for each work package of the plurality of work packages to be performed, a schedule chart for each work package of the plurality of work packages to be performed, a start date and an end date for the first list of the plurality of work packages to be performed, and a schedule chart for the first list of a plurality of work packages to be performed (Paragraph 0011, Aspects of the technology disclosed include a method, the method of determining a completion metric for a construction activity.
The method can comprise receiving, by one or more processors, a current condition of a construction project; determining, by the one or more processors, from possible tasks, a scheduled sequence of tasks to complete the construction activity; evaluating, by the one or more processors, a current status of at least one task from the schedule of tasks; and computing, by one or more processors, the completion metric for the construction project based on the current condition of the construction project and the current status of at least one task from the schedule sequence of tasks; Figure 2A, Gantt chart; Figure 6 and related text in Paragraph 0141, Figure 6 illustrates a visual representation of an example adaption through the experimentation agent. Figure 6 illustrates graph 605, which reflects various tasks at various timelines, and the generation of graph 610, by adapting to a change in the tasks when task 2 cannot be completed and the order of tasks is changed. Graph 610 or information contained therein can be generated using machine learning techniques based on inputs or variables described in this disclosure; Examiner notes that the Gantt chart provided in Figure 2A includes a start date and an end date for a first list of a plurality of work packages to be performed).

Regarding claim 8 (Original), which depends from claim 1, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 1. Cami further discloses wherein the AI auto-scheduler is further configured to provide: an artificial intelligence total job duration service (Paragraph 0111, In some examples, simulation module 425 can contain trained machine learning modules which can take multiple inputs to simulate or generate information related to the construction project. Historical data can be used to train an algorithm to predict the duration of tasks in hours and days.
The training can be done in a simulation mode and the trained algorithm can be used through an application when being used on a construction project. Inputs can be simulated from various tools and other data sources during the training process); an artificial intelligence resource requirement service (Paragraph 0026, Aspects of the current disclosure include intelligent workflow generation and metrics related to the workflow and construction projects based on updatable parameters (e.g. tasks, sub-tasks, weather, workers, resources, tools)); an artificial intelligence work package dependency service (Paragraph 0011, Aspects of the technology disclosed include a method, the method of determining a completion metric for a construction activity. The method can comprise receiving, by one or more processors, a current condition of a construction project; determining, by the one or more processors, from possible tasks, a scheduled sequence of tasks to complete the construction activity; evaluating, by the one or more processors, a current status of at least one task from the schedule of tasks; and computing, by one or more processors, the completion metric for the construction project based on the current condition of the construction project and the current status of at least one task from the schedule sequence of tasks; Paragraph 0026, Aspects of the current disclosure include intelligent workflow generation and metrics related to the workflow and construction projects based on updatable parameters (e.g. tasks, sub-tasks, weather, workers, resources, tools)); and an artificial intelligence resource dependency service (Paragraph 0036, For instance, an example of the disclosed technology may comprise a project that involves multiple tasks (as discussed above) that must be completed by a given time and for a given budget. A starting point for the project may involve a starting point (e.g., pour the foundation) and a start date.
Once the foundation is poured, coordination of multiple tasks involving multiple resources must be accounted for and sequenced so that the projection completion time and/or costs are met. The multiple or possible tasks may comprise all the tasks involved in a construction project, e.g., pouring the foundation, getting a crane on site, building the frame, putting sheetrock, plumbing, floors, etc. However, interrelationship between different tasks may impact completion time and costs. As the number of tasks increase, tracking them and how they may impact each other gets beyond human capability and state of the art known computing tools. An aspect of the disclosed technology takes the information related to tasks and other parameters as input and dynamically tracks the progress of the project and updates the completion costs, time or other metrics that may be used to monitor progress or goals. For instance, once the foundation is poured, welding may need to be done to establish the frame of the building. The site may be monitored and the progress of the frame be monitored based on the rate at which welding materials are being used (e.g., 50% of the welding materials used based on productivity detected through use of welding equipment). The progress may then be used to update the tasks to be performed, their sequence, whether additional resources need to be brought on board, etc.).

Regarding claim 9 (Original), which depends from claim 8, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 8. Although Cami discloses all the limitations above and a server-type computer (see Figure 1), Cami does not specifically disclose wherein the services are provided as cloud-based micro-services. However, Kintsakis further discloses wherein the services are provided as cloud-based micro-services (The computer 900 is, for example, a stationary computer or a portable computer, and is an arbitrary form of electronic equipment.
The computer 900 may be a client-type computer, a server-type computer, or a cloud-type computer. Computer 900 may be applied to devices other than machine learning device 4, production planning device 5, production management database device 6, and production simulation device 7). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the reinforcement learning, executed on a server-type computer to generate an optimum work package schedule in the invention of Cami, to further specify that the services are provided as cloud-based micro-services as taught by Kintsakis, because the claimed invention is merely a combination of old elements, in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 12 (Previously Presented), which depends from claim 1, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 1. Cami further discloses wherein the AI auto-scheduler automatically sequences third level items relating to a given second level subsystem (see Figures 2E, 2D, and 6; Paragraph 0026, Aspects of the current disclosure include intelligent workflow generation and metrics related to the workflow and construction projects based on updatable parameters (e.g. tasks, sub-tasks, weather, workers, resources, tools); Paragraph 0136, Currently, scheduling for a construction project is done manually. A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. Once the algorithm is sufficiently trained, it can be used on all projects. An example state in the algorithm can be that a framing task needs to be put on hold for inspection.
The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed. The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself. The object of the algorithm is to reduce down time on the project).

Regarding claim 14 (Previously Presented), which depends from claim 1, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 1. Cami further discloses wherein the AI auto-scheduler automatically coordinates availability of resources across multiple work packages so that the multiple work packages are optimized with respect to a given resource (Paragraph 0032, Other algorithms include genetic algorithms, reinforcement learning, hybrid deep neural network methods, neural networks, generative adversarial networks, or heuristic optimization methods. The generation of tasks can be done in non-brute force computation, non-polynomial time, or in a time that is computationally feasible to provide real-time or near-real time updates to the schedule of tasks and completion metrics. In some examples, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available); Paragraph 0079, Additionally, the technology disclosed herein can learn to schedule tasks and have suggestions ready if tasks are blocked or delayed to keep available resources being utilized towards completing the project; Paragraph 0137, The environment for the algorithm is defined by the tasks available for a project and the workers available. The experimentation agent can be rewarded for keeping resources on a task when there are tasks to complete and inversely penalized when letting tasks sit idle or losing resources outright.
Using the reward/penalty procedure the agent can train itself through experimentation to come up with a policy. Additionally, the policy can be updated from additional data which can form a database of already implemented examples).

Regarding claim 15 (Previously Presented), which depends from claim 1, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 1. Cami further discloses wherein the AI auto-scheduler further receives feedback relating to execution of the sequenced third level items in the work package and automatically updates the sequence of third level items based on a list of uncompleted third level items and resources needed for completion of such items (Paragraph 0079, At block 230, building can occur. Once the project is scheduled, a build phase can start. The “Cost to Complete” or other metrics for the project can be continuously updated as tasks are completed and verified. The client can follow along and see how tasks are progressing on a regular basis. In addition, a contractor will be able to see delays and issues as they are happening to address them as quickly as possible to minimize delays to the overall project timeline. Additionally, the technology disclosed herein can learn to schedule tasks and have suggestions ready if tasks are blocked or delayed to keep available resources being utilized towards completing the project. The overarching goal of the build phase can be to adhere to the original completion date and cost. In some examples, the “cost to complete” algorithm can continually or periodically adjust the schedule or completion cost based on information obtained. In some examples, Fig. 2C can be displayed on a user device to indicate current tasks which can be determined according to aspects of the technology described herein.
During this phase, information can be obtained from a smarttool as the smarttools are being used; Paragraph 0095, Figure 2E similar to Figure 2D illustrates an example schedule 275 which can reflect updates or completions to the schedule based on user inputs or a calculation of a percentage of an event being completed; Paragraph 0136, Currently, scheduling for a construction project is done manually. A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. Once the algorithm is sufficiently trained, it can be used on all projects. An example state in the algorithm can be that a framing task needs to be put on hold for inspection. The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed. The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself. The object of the algorithm is to reduce down time on the project).

Regarding claim 16 (Previously Presented), which depends from claim 1, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 1. Cami further discloses wherein the AI auto-scheduler further receives industry expert feedback regarding scheduling best practices and updates the trained reinforcement learning engine (Paragraph 0137, The environment for the algorithm is defined by the tasks available for a project and the workers available. The experimentation agent can be rewarded for keeping resources on a task when there are tasks to complete and inversely penalized when letting tasks sit idle or losing resources outright. Using the reward/penalty procedure the agent can train itself through experimentation to come up with a policy.
Additionally, the policy can be updated from additional data which can form a database of already implemented examples. The resultant policy can be used (exploited) on real construction projects. In some examples, multiple paths can be generated and then screened using other machine learning techniques. For example, the algorithm can learn from a “good” supervisor what to do during specific scenarios. Reinforced learning can be used to teach the algorithm because we have data from previous projects of how schedules were rearranged to deal with delays. Once the algorithm, which can include a convoluted neural network (CNN), is trained, the algorithm can then provide task adjustments in real time to help with projects; Examiner interprets receiving feedback from a good supervisor as the industry expert feedback).

Regarding claim 17 (Previously Presented), which depends from claim 16, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 16. Cami further discloses wherein the AI auto-scheduler automatically updates the sequence of third level items based on the list of uncompleted third level items and the resources needed for completion of such items utilizing the updated trained reinforcement learning engine (Paragraph 0136, Currently, scheduling for a construction project is done manually. A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. Once the algorithm is sufficiently trained, it can be used on all projects. An example state in the algorithm can be that a framing task needs to be put on hold for inspection. The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed.
The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself. The object of the algorithm is to reduce down time on the project).

Regarding claim 18 (Previously Presented), which depends from claim 1, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 1. Cami further discloses wherein the AI auto-scheduler produces a plurality of optimized third level work package candidates and presents alternative options to the schedule of third level work packages (Paragraph 0042, In other examples, the technology may assist in the creation of one or more suggestions which can be generated or provided to construction project managers or other users. In some examples, a cost-to-complete algorithm may have one or more aspects which are generated in response to a current condition of a construction project. In some examples, the alerts can be chosen or acted upon by project managers or end users. In some cases, the alerts can provide a plurality of choices to a user. The choices, and the alerts, can be actionable. In some examples, the actionable alert can be integrated or cause an autonomous or semi-autonomous construction equipment to take action. In other examples, aspects related to the construction project, such as scheduling of labor, hiring of more workers, or ordering of materials or equipment can be performed automatically based on the generated alerts; Paragraph 0129, State (S) - a state is the current situation the algorithm is assessing. From this state it will figure out the best action to take and move to another state; Paragraph 0136, Currently, scheduling for a construction project is done manually. A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. Once the algorithm is sufficiently trained, it can be used on all projects.
An example state in the algorithm can be that a framing task needs to be put on hold for inspection. The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed. The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself. The object of the algorithm is to reduce down time on the project).

Regarding claim 19 (Previously Presented), which depends from claim 1, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 1. Cami further discloses wherein the AI auto-scheduler provides insights on scheduling of the third level work packages so that industry experts can be trained by the trained reinforcement learning engine (Paragraph 0136, Currently, scheduling for a construction project is done manually. A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. Once the algorithm is sufficiently trained, it can be used on all projects. An example state in the algorithm can be that a framing task needs to be put on hold for inspection. The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed. The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself. The object of the algorithm is to reduce down time on the project; Examiner interprets training from experience and through experimentation as the insights).
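For illustration only, the reward/penalty procedure that Cami's Paragraphs 0136-0137 describe (the agent is rewarded for keeping a resource on a task when tasks remain, and penalized for letting tasks sit idle) follows the general shape of a standard tabular Q-learning loop. The sketch below is a hypothetical toy example: the task names, reward values, and hyperparameters are invented for illustration and are not taken from the Cami reference or the claims.

```python
# Toy sketch of a reward/penalty scheduling agent (hypothetical; not from Cami).
# State = set of remaining tasks; actions = assign the single resource to a
# remaining task, or let it idle. Assigning earns +1, idling costs -1, which
# mirrors the cited "reward for keeping resources busy / penalty for idle" scheme.
import random
from collections import defaultdict

TASKS = ("framing", "plumbing", "drywall")

def train_policy(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)  # Q[(state, action)] -> estimated value
    for _ in range(episodes):
        remaining = frozenset(TASKS)
        while remaining:
            actions = tuple(sorted(remaining)) + ("idle",)
            # epsilon-greedy exploration over the available actions
            if rng.random() < epsilon:
                action = rng.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(remaining, a)])
            if action == "idle":
                reward, nxt = -1.0, remaining            # penalty: tasks sat idle
            else:
                reward, nxt = 1.0, remaining - {action}  # reward: resource kept busy
            nxt_actions = tuple(sorted(nxt)) + ("idle",)
            best_next = max(q[(nxt, a)] for a in nxt_actions) if nxt else 0.0
            # standard Q-learning update toward reward + discounted best next value
            q[(remaining, action)] += alpha * (
                reward + gamma * best_next - q[(remaining, action)]
            )
            remaining = nxt
    return q

def greedy_schedule(q):
    """Exploit the learned policy: pick the highest-valued action each step."""
    remaining, order = frozenset(TASKS), []
    while remaining:
        actions = tuple(sorted(remaining)) + ("idle",)
        action = max(actions, key=lambda a: q[(remaining, a)])
        if action == "idle":
            break  # a well-trained policy should never prefer idling here
        order.append(action)
        remaining = remaining - {action}
    return order

q = train_policy()
print(greedy_schedule(q))  # a permutation of TASKS; the trained policy avoids "idle"
```

The design choice the toy makes explicit is the one the Office Action relies on: because idling is strictly dominated (its update target is always -1 plus a discounted value, versus +1 for assigning work), the learned policy keeps the resource on a task whenever tasks remain, which is the behavior the examiner maps to the claimed resource coordination.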
Regarding claim 20 (Previously Presented), Cami discloses a method for generating an optimum task schedule for fulfilling a large-scale capital project using reinforcement learning (Paragraph 0044, In yet other examples, the algorithm can optimize for one aspect of a construction project, such as speed of the construction, total cost of the construction, or the use of particular types of materials within the project, such as, for example, eco-friendly materials, fire proof materials, or thermally resistive materials; Paragraph 0136, A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled), the method comprising: training an artificial intelligence reinforcement learning engine through a deep reinforcement learning training process for a plurality of different scheduling objectives based on a plurality of data sets representing a plurality of different work projects including at least one of simulated work projects or actual completed work projects with each scheduling objective having a different reinforcement learning reward function to produce a first artificial intelligence reinforcement engine model for an … process (see Figures 2E, 2D, and 6; Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning; Paragraph 0044, In yet other examples, the algorithm can optimize for one aspect of a construction project, such as speed of the construction, total cost of the construction, or the use of particular types of materials within the project, such as, for example, eco-friendly materials, fire proof materials, or thermally resistive materials; Paragraph 0125, Ensemble methods can be used, which primarily use the idea of combining several predictive models, which can be supervised ML or unsupervised ML to get higher quality predictions than each of the models could provide on their own. 
As one example, random forest algorithms, neural networks, and deep learning techniques can also be used for the techniques described above; Paragraph 0136, Currently, scheduling for a construction project is done manually. A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. Once the algorithm is sufficiently trained, it can be used on all projects. An example state in the algorithm can be that a framing task needs to be put on hold for inspection. The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed. The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself. The object of the algorithm is to reduce down time on the project; Paragraph 0137, The environment for the algorithm is defined by the tasks available for a project and the workers available. The experimentation agent can be rewarded for keeping resources on a task when there are tasks to complete and inversely penalized when letting tasks sit idle or losing resources outright. Using the reward/penalty procedure the agent can train itself through experimentation to come up with a policy. Additionally, the policy can be updated from additional data which can form a database of already implemented examples. The resultant policy can be used (exploited) on real construction projects. In some examples, multiple paths can be generated and then screened using other machine learning techniques. For example, the algorithm can learn from a “good” supervisor what to do during specific scenarios. Reinforced learning can be used to teach the algorithm because we have data from previous projects of how schedules were rearranged to deal with delays.
Once the algorithm, which can include a convoluted neural network (CNN), is trained, the algorithm can then provide task adjustments in real time to help with projects; Examiner notes that Cami discloses “a plurality of different scheduling objectives” because the reinforcement learning can be trained to: reduce downtime of the project; reduce delays of the project; and/or optimize speed of the construction. Also, Examiner notes that Cami is learning over time what is the best action for a given state. In this case, the reinforcement learning predicts/infers the best action based on real time data. Then, the reinforcement learning receives feedback that includes a successful or failed action (e.g., rewards, penalties, learning from a good supervisor). Therefore, based on broadest reasonable interpretation in light of the specification, Cami discloses to “train an artificial intelligence reinforcement training” because the reinforcement learning is updated over time based on feedback provided from actual completed work projects), wherein each training data set includes (Paragraph 0137, Reinforced learning can be used to teach the algorithm because we have data from previous projects of how schedules were rearranged to deal with delays): items representing a plurality of training work packages, wherein the plurality of training work packages are associated with a hierarchical work breakdown structure that comprises a first level representing the total work to be planned; a second level representing a logical breakdown of the first level total work into a plurality of subsystems; and a third level representing the plurality of work packages, each of the plurality of second level subsystems associated with one or more of the plurality of third level work packages, each of the plurality of third level work packages identifying specific tasks, resources, and a duration that each resource will be required (Paragraph 0033, The schedule of tasks can be generated using a 
reinforcement learning; Paragraph 0038, In some examples, work blocs can be further broken down into assignable actions or sub tasks, which can be assigned resources and scheduled with start and end dates; Paragraph 0077, Each task can be scheduled by start date and end date. Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined. The duration of the task can be determined by several factors such as task duration is automatically adjusted from historical data; Paragraph 0084, When the project is first created, scheduled and workers can be assigned tasks that include the UOM to be done and hours allowed for the work; Paragraph 0095, Figure 2E similar to Figure 2D illustrates an example schedule 275 which can reflect updates or completions to the schedule based on user inputs or a calculation of a percentage of an event being completed; Paragraph 0106, Database 405 can contain information related to one or more construction projects, tasks to be completed, equipment available, materials, workers, historical information about projects, worker efficiencies, or any other data which can be used by a simulation module or other trained machine learning or other algorithmic model to predict information about related to a construction project. For example, database 405 may contain a relational database containing information described within this disclosure. Any other database structure can also be used. 
In some examples, information obtained from any of the modules described with respect to Figure 4 can be stored within database 405; Examiner notes that the Gantt chart in Figure 2E represents a hierarchical work breakdown structure, wherein “HealthLogicX” is the first level, “Drywall & Taping” is the second level, and “Partition Type, Patch & Prep Existing Walls, Corner Beads, and Finished Ends” is the third level representing the plurality of work packages); items representing all of the resources from all of the third level work packages, wherein each of the resources is characterized by a resource category (Paragraph 0035, In some examples, each task can line up with a building product, a worker assigned to the task, or a division; Paragraph 0040, automatic scheduling based on historical data related to a tasks, project, construction site, specific details related to the site (such as weather, location, local holidays), skill level of workers, equipment availability and level; Paragraph 0077, Each task can be scheduled by start date and end date. Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined; Examiner interprets the “resource type such as worker or equipment” as the “resource category”); and items representing constraints on the resources including an available quantity and times of availability for each resource category (Paragraph 0021, This drawback limits the performance of state-of-the-art machine learning models, that are typically trained using stationary batches of data without accounting for situations in which the number of available machines may change; Paragraph 0032, In some examples, constraints can be added on the construction project which can affect the schedule of tasks. 
For instance, if a project must be completed more promptly or in a smaller time frame, the sequenced schedule of tasks can be updated. The completion metric can be updated for the new sequenced schedule of tasks. In other examples, if a delay or impossibility to complete a particular task, due to material or labor shortage, or other delays (e.g. weather, zoning changes, permitting issues), the algorithm can update the completion metric; Paragraph 0033, In some examples, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available), wherein the first artificial intelligence reinforcement engine model is trained for scheduling third level work packages so that required resources will be available when needed for each of the scheduling objectives (Paragraph 0044, In yet other examples, the algorithm can optimize for one aspect of a construction project, such as speed of the construction, total cost of the construction, or the use of particular types of materials within the project, such as, for example, eco-friendly materials, fire proof materials, or thermally resistive materials; Paragraph 0136, Currently, scheduling for a construction project is done manually. A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. Once the algorithm is sufficiently trained, it can be used on all projects. An example state in the algorithm can be that a framing task needs to be put on hold for inspection. The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed. The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself. 
The object of the algorithm is to reduce down time on the project; Paragraph 0137, The environment for the algorithm is defined by the tasks available for a project and the workers available. The experimentation agent can be rewarded for keeping resources on a task when there are tasks to complete and inversely penalized when letting tasks sit idle or losing resources outright. Using the reward/penalty procedure the agent can train itself through experimentation to come up with a policy. Additionally, the policy can be updated from additional data which can form a database of already implemented examples. The resultant policy can be used (exploited) on real construction projects. In some examples, multiple paths can be generated and then screened using other machine learning techniques. For example, the algorithm can learn from a “good” supervisor what to do during specific scenarios. Reinforced learning can be used to teach the algorithm because we have data from previous projects of how schedules were rearranged to deal with delays. Once the algorithm, which can include a convoluted neural network (CNN), is trained, the algorithm can then provide task adjustments in real time to help with projects); receiving, by an artificial intelligence (AI) auto-scheduler, a plurality of work packages to be scheduled, comprising (Paragraph 0028, A scheduled sequence of tasks can be a sequence of tasks wherein each task can be related to another task or be independent of another tasks, and each task is assigned to a particular time in which it is to be completed or expected to be completed. The tasks can collectively form the steps required to finish the project. Any arbitrary granularity of tasks is possible and any task can be divided into sub-tasks; Currently, scheduling for a construction project is done manually. 
A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled); a total work database containing items representing a plurality of work packages, wherein the plurality of work packages are associated with a hierarchical work breakdown structure that comprises a first level representing the total work to be planned; a second level representing a logical breakdown of the first level total work into a plurality of subsystems; and a third level representing the plurality of work packages, each of the plurality of second level subsystems associated with one or more of the plurality of third level work packages, each of the plurality of third level work packages identifying specific tasks, resources, and a duration that each resource will be required (Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning; Paragraph 0077, Each task can be scheduled by start date and end date. Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined. 
The duration of the task can be determined by several factors; Paragraph 0084, When the project is first created, scheduled and workers can be assigned tasks that include the UOM to be done and hours allowed for the work; Paragraph 0095, Figure 2E similar to Figure 2D illustrates an example schedule 275 which can reflect updates or completions to the schedule based on user inputs or a calculation of a percentage of an event being completed; Paragraph 0106, Database 405 can contain information related to one or more construction projects, tasks to be completed, equipment available, materials, workers, historical information about projects, worker efficiencies, or any other data which can be used by a simulation module or other trained machine learning or other algorithmic model to predict information about related to a construction project. For example, database 405 may contain a relational database containing information described within this disclosure. Any other database structure can also be used. In some examples, information obtained from any of the modules described with respect to Figure 4 can be stored within database 405; Examiner notes that the Gantt chart in Figure 2E represents a hierarchical work breakdown structure, wherein “HealthLogicX” is the first level, “Drywall & Taping” is the second level, and “Partition Type, Patch & Prep Existing Walls, Corner Beads, and Finished Ends” is the third level representing the plurality of work packages); receiving a resources database containing items representing all of the resources from all of the third level work packages (Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning; Paragraph 0077, Each task can be scheduled by start date and end date. Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined. 
The duration of the task can be determined by several factors; Paragraph 0095, Figure 2E similar to Figure 2D illustrates an example schedule 275 which can reflect updates or completions to the schedule based on user inputs or a calculation of a percentage of an event being completed; Paragraph 0106, Database 405 can contain information related to one or more construction projects, tasks to be completed, equipment available, materials, workers, historical information about projects, worker efficiencies, or any other data which can be used by a simulation module or other trained machine learning or other algorithmic model to predict information about related to a construction project. For example, database 405 may contain a relational database containing information described within this disclosure. Any other database structure can also be used. In some examples, information obtained from any of the modules described with respect to Figure 4 can be stored within database 405), wherein each of the resources is characterized by a resource category (Paragraph 0035, In some examples, each task can line up with a building product, a worker assigned to the task, or a division; Paragraph 0040, automatic scheduling based on historical data related to a tasks, project, construction site, specific details related to the site (such as weather, location, local holidays), skill level of workers, equipment availability and level; Paragraph 0077, Each task can be scheduled by start date and end date. Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined); receiving a constraints database containing items representing constraints on the resources listed in the resources database (Paragraph 0032, In some examples, constraints can be added on the construction project which can affect the schedule of tasks. 
For instance, if a project must be completed more promptly or in a smaller time frame, the sequenced schedule of tasks can be updated. In some examples, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available); Paragraph 0034, In the examples described herein, information can be sent to a computer via input from a user device, and multiple inputs from multiple users (e.g. multiple workers on a job site) can be aggregated or stored on a database for analysis; Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning), wherein the constraints database specifies an available quantity and times of availability for each resource category (Paragraph 0021, This drawback limits the performance of state-of-the-art machine learning models, that are typically trained using stationary batches of data without accounting for situations in which the number of available machines may change; Paragraph 0032, In some examples, constraints can be added on the construction project which can affect the schedule of tasks. For instance, if a project must be completed more promptly or in a smaller time frame, the sequenced schedule of tasks can be updated. The completion metric can be updated for the new sequenced schedule of tasks. In other examples, if a delay or impossibility to complete a particular task, due to material or labor shortage, or other delays (e.g. 
weather, zoning changes, permitting issues), the algorithm can update the completion metric; Paragraph 0033, In some examples, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available; Paragraph 0040, automatic scheduling based on historical data related to a tasks, project, construction site, specific details related to the site (such as weather, location, local holidays), skill level of workers, equipment availability and level; Paragraph 0077, Each task can be scheduled by start date and end date. Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined); and receiving a scheduling objectives database designating a prime objective that is to be achieved by the optimum task schedule (Paragraph 0032, In some examples, constraints can be added on the construction project which can affect the schedule of tasks. For instance, if a project must be completed more promptly or in a smaller time frame, the sequenced schedule of tasks can be updated. In some examples, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available); Paragraph 0034, In the examples described herein, information can be sent to a computer via input from a user device, and multiple inputs from multiple users (e.g. 
multiple workers on a job site) can be aggregated or stored on a database for analysis; Paragraph 0044, In yet other examples, the algorithm can optimize for one aspect of a construction project, such as speed of the construction, total cost of the construction, or the use of particular types of materials within the project, such as, for example, eco-friendly materials, fire proof materials, or thermally resistive materials; Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning); and generating, by the AI auto-scheduler, an optimum work package schedule to sequence the third level work packages using the trained artificial intelligence reinforcement learning engine based on the first artificial intelligence reinforcement engine model applied to inputs from the total work database, the resource database, the constraints database, and the scheduling objectives database; wherein the optimum work package schedule maximizes the prime objective that is to be achieved within the hierarchical work breakdown structure (see Figures 2E, 2D, and 6; Paragraph 0033, The schedule of tasks can be generated using a reinforcement learning; Paragraph 0044, In yet other examples, the algorithm can optimize for one aspect of a construction project, such as speed of the construction, total cost of the construction, or the use of particular types of materials within the project, such as, for example, eco-friendly materials, fire proof materials, or thermally resistive materials; Paragraph 0136, Currently, scheduling for a construction project is done manually. A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. Once the algorithm is sufficiently trained, it can be used on all projects. An example state in the algorithm can be that a framing task needs to be put on hold for inspection. 
The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed. The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself. The object of the algorithm is to reduce down time on the project; Examiner notes that the task schedule is always generated at the work package level, then the tasks are rolled up to the second and first level based on a defined hierarchical work breakdown structure), and wherein the AI auto-scheduler automatically determines and assigns resource needs, timeframes, dependencies between resources for a given third level work package, and between the plurality of third level work packages including sequencing of tasks and resources between the third level work packages associated with each second level subsystem, and automatically coordinates availability of the resources across multiple work packages based on the duration that each resource will be required, the available quantity of the given resource category, and times of availability of the given resource category so that resources of the given resource category required for the plurality of third level work packages will be available when needed for the sequenced third level work packages (Paragraph 0021, This drawback limits the performance of state-of-the-art machine learning models, that are typically trained using stationary batches of data without accounting for situations in which the number of available machines may change; Paragraph 0026, Aspects of the current disclosure include intelligent workflow generation and metrics related to the workflow and construction projects based on updatable parameters (e.g. 
tasks, sub-tasks, weather, workers, resources, tools; Paragraph 0028, A scheduled sequence of tasks can be a sequence of tasks wherein each task can be related to another task or be independent of another tasks, and each task is assigned to a particular time in which it is to be completed or expected to be completed. The tasks can collectively form the steps required to finish the project. Any arbitrary granularity of tasks is possible and any task can be divided into sub-tasks; Paragraph 0032, In some examples, constraints can be added on the construction project which can affect the schedule of tasks. For instance, if a project must be completed more promptly or in a smaller time frame, the sequenced schedule of tasks can be updated. The completion metric can be updated for the new sequenced schedule of tasks; Paragraph 0033, In some examples, the schedule of tasks can be updated based on information from the environment or indications that a particular task cannot be completed. In other examples, if a particular task becomes too costly, it can be attempted to be replaced with another task. In some examples, the schedule of tasks can be generated using a machine learning algorithm, such as, for example, through one or more experimentation agents. Other algorithms include genetic algorithms, reinforcement learning, hybrid deep neural network methods, neural networks, generative adversarial networks, or heuristic optimization methods. The generation of tasks can be done in non-brute force computation, non-polynomial time, or in a time that is computationally feasible to provide real-time or near-real time updates to the schedule of tasks and completion metrics. 
In some examples, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available); Paragraph 0038, In some examples, work blocs can be further broken down into assignable actions or sub tasks, which can be assigned resources and scheduled with start and end dates; Paragraph 0040, Automatic scheduling based on historical data related to a tasks, project, construction site, specific details related to the site (such as weather, location, local holidays), skill level of workers, equipment availability and level; Paragraph 0077, Each task can be scheduled by start date and end date. Additionally, the worker or crew that will perform the task can be determined so that at the end of scheduling resources needed and end date are defined. The duration of the task can be determined by several factors; Paragraph 0079, Additionally, the technology disclosed herein can learn to schedule tasks and have suggestions ready if tasks are blocked or delayed to keep available resources being utilized towards completing the project; Examiner notes that a workflow includes dependencies between the plurality of work packages and resources). Cami discloses to: predict (e.g., infer), using a trained reinforcement learning, the best action for a given state (see at least Paragraphs 0125-0134); and wherein the reinforcement engine is trained by combining reinforcement learning (RL) and deep learning techniques (Paragraph 0125, Examiner notes that combining reinforcement learning with deep learning is known as deep reinforcement learning). Although Cami discloses all the limitations above and inherently discloses an inference process, Cami does not specifically disclose how the reinforcement learning is inferring the best action for a given state. 
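The reward/penalty training loop that the rejection reads out of Cami's Paragraphs 0136-0137 (reward the agent for keeping a crew on a ready task, penalize it for letting work sit idle while, e.g., framing waits on inspection) can be illustrated with a minimal tabular Q-learning sketch. This is not Cami's actual implementation; the task names, hold times, and hyperparameters are all illustrative assumptions.

```python
import random

# Toy scheduling environment mirroring the described reward/penalty scheme:
# "framing" is on hold for inspection until time step 2, so the agent should
# learn to pull "plumbing" or "drywall" forward instead of idling.
TASKS = ("framing", "plumbing", "drywall")
HOLD = {"framing": 2}
IDLE = "idle"

def ready(remaining, t):
    return [task for task in remaining if t >= HOLD.get(task, 0)]

def step(remaining, t, action):
    """Apply an action: +1 for working a ready task, -1 for idling while work waits."""
    doable = ready(remaining, t)
    if action in doable:
        return remaining - {action}, 1.0
    return remaining, (-1.0 if doable else 0.0)

def train(episodes=2000, alpha=0.5, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}  # (remaining tasks, time, action) -> estimated return
    for _ in range(episodes):
        remaining, t = frozenset(TASKS), 0
        while remaining and t < 8:
            actions = sorted(remaining) + [IDLE]
            a = (rng.choice(actions) if rng.random() < eps
                 else max(actions, key=lambda x: q.get((remaining, t, x), 0.0)))
            nxt, r = step(remaining, t, a)
            future = max(q.get((nxt, t + 1, x), 0.0) for x in sorted(nxt) + [IDLE])
            old = q.get((remaining, t, a), 0.0)
            q[(remaining, t, a)] = old + alpha * (r + future - old)
            remaining, t = nxt, t + 1
    return q

q = train()
start = frozenset(TASKS)
policy_action = max(sorted(start) + [IDLE], key=lambda a: q.get((start, 0, a), 0.0))
```

After training, the greedy policy at the start state selects a task that is not on hold rather than idling or attempting the blocked framing task, which is the behavior the Office Action attributes to Cami's experimentation agent.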
However, Kintsakis discloses training an artificial intelligence reinforcement learning engine through a deep reinforcement learning training process for a … scheduling objectives based on a plurality of data sets representing a plurality of different work projects including at least one of simulated work projects or actual completed work projects with each scheduling objective having a different reinforcement learning reward function to produce a first artificial intelligence reinforcement engine model for an inference process (Page 96, 3.2 Proposed Solution, In this direction, we have expanded the capabilities of our previous work, the Hermes WMS (Kintsakis et al., 2017), to a system that can continuously learn to improve its workflow execution performance with respect to minimizing workflow makespan. Our approach entails a built-in capability of the WMS to accurately collect historical task execution data with the purpose of training models off-line that can estimate task runtime and failure probability for task executions of varying input sizes and across different execution sites. These models are then used on-line in inference mode to inform scheduling decisions. The outputs of these models along with other dynamically generated features are passed on to a policy network, which is in fact a neural model capable of performing scheduling decisions. More specifically, the policy network is capable of identifying a near optimal scheduling decision when presented with all possible scheduling choices that the system can immediately act upon, at any given point in time. The inability to generate high quality labeled data for an NP-Hard problem such as scheduling DAG workflows has urged us to adopt a reinforcement learning approach towards training the policy network. Due to the sheer number of episodes required for reinforcement learning, the policy network is trained off-line in a simulated environment that closely resembles the real one. 
The simulated environment consists of workflow DAGs, task characteristics and input sizes as well as execution sites, similar to those encountered in the real environment. As will become apparent, this allows for the training of a policy network that can deliver sterling performance when tasked with performing scheduling decisions in the real world environment; In this case, the scheduling objective is to minimize the workflow makespan. Applicant defines, on page 40, that the inference process is used to give the recommendation from the saved AI model. Based on broadest reasonable interpretation in light of the specification, Kintsakis discloses an inference process because the inference mode identifies/recommends a scheduling decision when presented with all possible scheduling choices). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the reinforcement learning used to generate an optimum work package schedule (e.g., rearrange tasks to reduce downtime and/or optimize speed of construction), wherein the optimum work package includes predicting the best action for a given state (see at least Paragraphs 0125-0136) of the invention of Cami to further specify how the reinforcement learning is inferring the best action of the invention of Kintsakis because doing so would allow the reinforcement learning to identify a near optimal scheduling decision when presented with all possible scheduling choices (see Kintsakis, Page 96, 3.2 Proposed Solution). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. 
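The inference step cited from Kintsakis — featurize every scheduling choice the system can immediately act upon (model-estimated runtime and failure probability per execution site), score each, and act on the top-scoring choice — reduces to an argmax over candidates. In the sketch below, a simple linear scorer stands in for the trained neural policy network; the candidate data, task names, and weights are illustrative assumptions, not values from the paper.

```python
# Each candidate is one scheduling choice actionable right now:
# a (task, execution site) pair with model-estimated features.
def score(choice, weights=(-1.0, -5.0)):
    """Higher is better: penalize long estimated runtime and likely failure."""
    return weights[0] * choice["est_runtime"] + weights[1] * choice["p_fail"]

candidates = [
    {"task": "align", "site": "cluster-a", "est_runtime": 40.0, "p_fail": 0.05},
    {"task": "align", "site": "cluster-b", "est_runtime": 25.0, "p_fail": 0.30},
    {"task": "sort",  "site": "cluster-a", "est_runtime": 10.0, "p_fail": 0.02},
]

# Inference: score every immediately actionable choice, act on the best one.
best = max(candidates, key=score)
```

The design point the rejection relies on is exactly this shape: the expensive learning happens off-line (in simulation), while on-line the saved model only ranks the currently possible scheduling decisions.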
Cami discloses to automatically coordinate availability of the resources across multiple work packages based on the duration that each resource will be required and the available quantity of the given resource category (Paragraph 0021, number of available machines available; Paragraph 0033, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available; Paragraph 0038, assigned resources and scheduled with start and end dates; Paragraph 0040, automatic scheduling based on historical data related to a tasks, project, construction site, specific details related to the site, skill level of workers, equipment availability and level). Although Cami discloses all the limitations above and availability of the given resource category (e.g., materials available, workers available, and equipment available), Cami does not specifically disclose coordinating availability of multiple resource categories at the same time (e.g. assigning workers and equipment when both resource categories are available). 
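The gap the rejection turns to Blackmon to fill — coordinating multiple resource categories at the same time — amounts to releasing a work package only when every category it needs (e.g., workers AND equipment) has sufficient quantity over the package's entire duration. A minimal sketch of that availability check, with illustrative calendars and quantities:

```python
# Illustrative availability calendars, one per resource category
# (quantity available at each discrete time step).
AVAILABILITY = {
    "crew":  {0: 2, 1: 2, 2: 0, 3: 2},
    "crane": {0: 1, 1: 1, 2: 1, 3: 1},
}

def releasable(package, start):
    """True only if every required category has enough quantity for the
    package's full duration starting at `start` (workers AND equipment)."""
    return all(
        AVAILABILITY[cat].get(start + dt, 0) >= qty
        for cat, qty in package["needs"].items()
        for dt in range(package["duration"])
    )

# A two-step package needing both categories: releasable at t=0,
# but not at t=1, because the crew calendar drops to 0 at t=2.
pkg = {"needs": {"crew": 2, "crane": 1}, "duration": 2}
```

This is the sense in which the constraints analysis evaluates "availability of project materials, site space, work crews and site equipment at the proposed time of release": all categories must clear simultaneously, not each in isolation.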
However, Blackmon discloses … and items representing constraints on the resources including an available quantity and times of availability for each resource category, … wherein the constraints database specifies an available quantity and times of availability for each resource category; … coordinates availability of the resources across multiple work packages based on the duration that each resource will be required, the available quantity of the given resource category, and the times of availability of the given resource category so that resources of the given resource category required for the plurality of third level work packages will be available when needed for the sequenced third level work packages (Paragraph 0054, The constraints analysis module 510 determines whether a work package is valid by evaluating project constraints for the work package (e.g., availability of project materials, site space, work crews and site equipment at the proposed time of release to a work crew). Additionally, the constraints analysis module 510 works with the creation module 500 to allow a user to modify work packages and with the sequencing module 505 to allow a user to modify the sequence of work packages. Thus, the constraints analysis module 510 evaluates constraints on a given work package to allow a user to determine whether to release the work package to a work crew; Paragraph 0022, The computerized simulation model automatically generates a time and cost estimate for the work package based on project controls data (e.g., a library of unit time rates and unit cost rates) accessed in the various project databases). 
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the reinforcement learning used to generate an optimum work package schedule (e.g., rearrange tasks to reduce downtime and/or optimize speed of construction), wherein the optimum work package is generated based on availability of the resources across multiple work packages of the invention of Cami to further specify availability of the resources across multiple work packages based on the duration that each resource will be required of the invention of Blackmon because doing so would allow the reinforcement learning to evaluate project constraints for the work package (e.g., availability of project materials, site space, work crews and site equipment) at the proposed time of release to a work crew (see Blackmon, Paragraph 0054). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claim 22 (Previously Presented), which is dependent on claim 20, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 20. Cami further discloses wherein the AI auto-scheduler automatically sequences third level items relating to a given second level subsystem (see Figures 2E, 2D, and 6; Paragraph 0026, Aspects of the current disclosure include intelligent workflow generation and metrics related to the workflow and construction projects based on updatable parameters (e.g. tasks, sub-tasks, weather, workers, resources, tools; Paragraph 0136, Currently, scheduling for a construction project is done manually. A reinforced learning algorithm can be trained from experience and through experimentation can determine a path forward to completing the project in cases where work is stalled. 
Once the algorithm is sufficiently trained, it can be used on all projects. An example state in the algorithm can be that a framing task needs to be put on hold for inspection. The algorithm will look at the remaining tasks on the project and suggest to the supervisor that a plumbing task can be moved up in the schedule while the framing inspection is completed. The suggestions are based on a policy created from reinforced learning on prior experiences and experimentation by the algorithm itself. The object of the algorithm is to reduce down time on the project). Regarding claim 23 (Previously Presented), which is dependent on claim 20, the combination of Cami, Kintsakis, and Blackmon discloses all the limitations in claim 20. Cami further discloses wherein the AI auto-scheduler automatically coordinates availability of resources across multiple work packages so that the multiple work packages are optimized with respect to a given resource (Paragraph 0032, Other algorithms include genetic algorithms, reinforcement learning, hybrid deep neural network methods, neural networks, generative adversarial networks, or heuristic optimization methods. The generation of tasks can be done in non-brute force computation, non-polynomial time, or in a time that is computationally feasible to provide real-time or near-real time updates to the schedule of tasks and completion metrics. In some examples, additional inputs can be taken based on the available resources (materials available, workers available, weather, and equipment available; Paragraph 0079, Additionally, the technology disclosed herein can learn to schedule tasks and have suggestions ready if tasks are blocked or delayed to keep available resources being utilized towards completing the project; Paragraph 0137, The environment for the algorithm is defined by the tasks available for a project and the workers available. 
The experimentation agent can be rewarded for keeping resources on a task when there are tasks to complete and inversely penalized when letting tasks sit idle or losing resources outright. Using the reward/penalty procedure, the agent can train itself through experimentation to come up with a policy. Additionally, the policy can be updated from additional data, which can form a database of already implemented examples).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Wen et al. (US 2021/0278825 A1) - Systems and methods provide real-time production scheduling by integrating deep reinforcement learning and Monte Carlo tree search. A manufacturing process simulator is used to train a deep reinforcement learning agent to identify the sub-optimal policies for a production schedule. A Monte Carlo tree search agent is implemented to speed up the search for near-optimal policies of higher quality from the sub-optimal policies (see Abstract).

Cunha (Cunha, B., Madureira, A., Fonseca, B. and Matos, J., 2021. Intelligent scheduling with reinforcement learning. Applied Sciences, 11(8), p.3710) – discloses a novel architecture that incorporates reinforcement learning into scheduling systems in order to improve their overall performance and overcome the limitations that current approaches present. It is also intended to investigate the development of a learning environment for reinforcement learning agents to be able to solve the Job Shop scheduling problem. The reported experimental results and the conducted statistical analysis conclude about the benefits of using an intelligent agent created with reinforcement learning techniques. The main contribution of this work is proving that reinforcement learning has the potential to become the standard method whenever a solution is necessary quickly, since it solves any problem in very few seconds with high quality, approximate to the optimal methods (see Abstract).
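The reward/penalty procedure described in Cami's Paragraphs 0136-0137 (reward the agent for keeping resources on a task, penalize it for letting tasks sit idle) can be illustrated with a minimal tabular Q-learning sketch. Everything below (the task names, reward values, and hyperparameters) is invented purely for illustration and is not taken from Cami or from the application record:

```python
import random

# Toy illustration of the cited reward/penalty scheme: a tabular
# Q-learning agent decides which task to release next.  The "state" is
# the task currently blocked (e.g., framing awaiting inspection); the
# agent earns +1 for keeping a crew productive and -1 for idling or
# acting on the blocked task.
TASKS = ["framing", "plumbing", "electrical", "idle"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def reward(blocked_task, action):
    """Reward/penalty procedure: +1 for productive work, -1 for idle time."""
    return -1.0 if action in (blocked_task, "idle") else 1.0

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in TASKS for a in TASKS}
    for _ in range(episodes):
        state = rng.choice(TASKS)  # which task is currently on hold
        # Epsilon-greedy: mostly exploit the learned policy, sometimes explore
        if rng.random() < EPSILON:
            action = rng.choice(TASKS)
        else:
            action = max(TASKS, key=lambda a: q[(state, a)])
        r = reward(state, action)
        best_next = max(q[(state, a)] for a in TASKS)  # blocked task persists
        q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])
    return q

q = train()
# Greedy suggestion when framing is blocked: move another trade up.
suggestion = max(TASKS, key=lambda a: q[("framing", a)])
```

In this toy environment the learned greedy policy for the "framing blocked" state prefers moving a different trade up over idling, mirroring the supervisor-suggestion behavior Cami describes in Paragraph 0136.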
Cunha (Cunha, B., Madureira, A.M., Fonseca, B. and Coelho, D., 2020. Deep reinforcement learning as a job shop scheduling solver: A literature review. In Hybrid Intelligent Systems: 18th International Conference on Hybrid Intelligent Systems (HIS 2018), Porto, Portugal, December 13-15, 2018 (pp. 350-359). Springer International Publishing) – discloses that Deep Q-Network (DQN) is the deep learning evolution of Q-learning. The DQN approach replaces the action-state matrix with neural networks (hence the deep learning). This neural network is able to provide an estimate of the Q-value; it receives the current state and outputs the corresponding value of taking each action. The first application of this technique was famously achieved by the DeepMind team, using it to play Atari games [27]. The training cycle of this neural network is based on the squared error between target and output Q-values, and also contains some key techniques developed in [27]: experience replay and a separate target network.

MacElheron et al. (US 2014/0229212 A1) - discloses that components are allocated to IWPs, and resources related to the components can be assigned to the IWPs at stage V. Furthermore, the preparation status and field labor allocation can be monitored for IWPs at stage VI and stage VII. These operations formulate prerequisites for releasing IWPs for construction. As such, in some embodiments, a list of constraints including labour, materials, equipment, tools, safety issues, etc. related to the components can be required to be satisfied before implementing the schedule activities or releasing the IWPs (see Paragraph 0054).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARJORIE PUJOLS-CRUZ, whose telephone number is (571) 272-4668. The examiner can normally be reached Mon-Thu, 7:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patricia H. Munson, can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARJORIE PUJOLS-CRUZ/
Examiner, Art Unit 3624
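The DQN training cycle summarized in the second Cunha reference (a squared-error loss between target and output Q-values, an experience-replay buffer, and a separate target network) can be sketched with a linear stand-in for the neural network. The one-state toy environment, learning rate, and sync interval below are all assumptions invented for this sketch, not details from the cited review:

```python
import random
from collections import deque

GAMMA, LR, SYNC_EVERY = 0.9, 0.1, 25
N_ACTIONS = 2

def q_values(weights, state):
    # Linear stand-in for the DQN: Q(s, a) = w[a] * s
    return [w * state for w in weights]

def train(steps=500, seed=0):
    rng = random.Random(seed)
    online = [0.0] * N_ACTIONS   # online network parameters
    target = list(online)        # separate target network (periodically synced)
    replay = deque(maxlen=100)   # experience-replay buffer
    for t in range(steps):
        s = 1.0                           # single-state toy environment
        a = rng.randrange(N_ACTIONS)      # random behavior policy
        r = 1.0 if a == 1 else 0.0        # action 1 is the rewarding one
        replay.append((s, a, r, s))
        # Minibatch update: one gradient step on the squared TD error
        for s0, a0, r0, s1 in rng.sample(list(replay), min(8, len(replay))):
            y = r0 + GAMMA * max(q_values(target, s1))  # TD target (frozen net)
            pred = q_values(online, s0)[a0]
            online[a0] += LR * (y - pred) * s0          # descend (y - pred)^2
        if t % SYNC_EVERY == 0:
            target = list(online)         # sync the separate target network
    return online

w = train()
```

Computing the TD target from a frozen copy of the weights, rather than the constantly-moving online estimate, is the stabilization trick the review attributes to the original DeepMind work; the replay buffer likewise decorrelates the minibatch samples.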

Prosecution Timeline

Sep 24, 2021: Application Filed
Nov 10, 2022: Non-Final Rejection — §101, §103
Jan 12, 2023: Response Filed
Jan 23, 2023: Final Rejection — §101, §103
Feb 01, 2023: Interview Requested
Feb 22, 2023: Applicant Interview (Telephonic)
Feb 23, 2023: Examiner Interview Summary
Mar 27, 2023: Response after Non-Final Action
Mar 31, 2023: Response after Non-Final Action
Apr 27, 2023: Request for Continued Examination
May 08, 2023: Response after Non-Final Action
Jul 05, 2023: Non-Final Rejection — §101, §103
Oct 12, 2023: Response Filed
Oct 30, 2023: Final Rejection — §101, §103
Jan 08, 2024: Response after Non-Final Action
Feb 06, 2024: Response after Non-Final Action
Mar 06, 2024: Request for Continued Examination
Mar 07, 2024: Response after Non-Final Action
Jun 17, 2024: Non-Final Rejection — §101, §103
Sep 23, 2024: Response Filed
Oct 01, 2024: Final Rejection — §101, §103
Oct 28, 2024: Interview Requested
Nov 19, 2024: Applicant Interview (Telephonic)
Nov 19, 2024: Examiner Interview Summary
Feb 04, 2025: Request for Continued Examination
Feb 05, 2025: Response after Non-Final Action
Apr 07, 2025: Non-Final Rejection — §101, §103
Jul 11, 2025: Response Filed
Jul 21, 2025: Final Rejection — §101, §103
Oct 28, 2025: Response after Non-Final Action
Dec 24, 2025: Request for Continued Examination
Feb 02, 2026: Response after Non-Final Action
Mar 16, 2026: Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12106240: SYSTEMS AND METHODS FOR ANALYZING USER PROJECTS (granted Oct 01, 2024; 2y 5m to grant)
Patent 12014298: AUTOMATICALLY SCHEDULING AND ROUTE PLANNING FOR SERVICE PROVIDERS (granted Jun 18, 2024; 2y 5m to grant)
Patent 11966927: Multi-Task Deep Learning of Client Demand (granted Apr 23, 2024; 2y 5m to grant)
Patent 11941651: LCP Pricing Tool (granted Mar 26, 2024; 2y 5m to grant)
Patent 11847602: SYSTEM AND METHOD FOR DETERMINING AND UTILIZING REPEATED CONVERSATIONS IN CONTACT CENTER QUALITY PROCESSES (granted Dec 19, 2023; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 9-10
Grant Probability: 18%
With Interview: 46% (+27.9%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
