Prosecution Insights
Last updated: April 19, 2026
Application No. 18/894,884

METHOD AND SYSTEM FOR MANAGING RESOURCES USING PREDICTIVE ANALYTICS

Non-Final Office Action: §101, §102, §103
Filed
Sep 24, 2024
Examiner
KNIGHT, LETORIA G
Art Unit
3623
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
JPMorgan Chase Bank, N.A.
OA Round
1 (Non-Final)
Grant Probability: 27% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 73%

Examiner Intelligence

Career Allow Rate: 27% (46 granted / 173 resolved; -25.4% vs TC avg)
Interview Lift: +46.5% among resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 39 applications currently pending
Career History: 212 total applications across all art units
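The headline figures above reduce to simple arithmetic. The sketch below re-derives them from the counts shown on this page; the implied Tech Center average (~52%) is back-computed from the -25.4% delta, and the dashboard's actual methodology is not published, so treat this as orientation only.

```python
# Re-deriving the examiner metrics from the counts shown above.
granted, resolved = 46, 173      # career totals for this examiner
interview_lift = 0.465           # +46.5% lift among resolved cases with interview
tc_avg_allow = 0.52              # implied TC average: 27% - (-25.4%) (assumption)

allow_rate = granted / resolved              # 46/173 = 0.266 -> the "27%" tile
vs_tc = allow_rate - tc_avg_allow            # -> the "-25.4% vs TC avg" tile
with_interview = allow_rate + interview_lift # -> the "73% With Interview" tile

print(f"{allow_rate:.1%}, {vs_tc:+.1%}, {with_interview:.0%}")
```

Note the "73% With Interview" tile is just the career allow rate plus the interview lift; it assumes the lift is additive, which is how the page appears to present it.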

Statute-Specific Performance

§101: 43.9% (+3.9% vs TC avg)
§103: 38.6% (-1.4% vs TC avg)
§102: 3.7% (-36.3% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 173 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This is a first action on the merits in response to the application filed 24 September 2024. Claims 1-20 are pending and have been examined.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in India on 25 September 2023, and based on the priority documents received 04 November 2024. Examiner notes the communication filed 25 February 2025 by the Office for Applicant review and response.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claim 1 recites a process, independent claim 10 recites an apparatus, and independent claim 19 recites a product for facilitating resource management. Independent claims 1, 10, and 19 recite substantially similar limitations.
Taking independent claim 1 as representative, claim 1 recites the following limitations: aggregating, by the at least one processor via an application programming interface, data from at least one source, the data including at least one from among end user data, resource data, and influential factor data; generating, by the at least one processor, at least one data product based on the aggregated data, the at least one data product including at least one from among a structured data set, an application, and a tool; training, by the at least one processor, at least one first model by using the generated at least one data product; determining, by the at least one processor, at least one predictive output by using the trained at least one first model and the generated at least one data product, each of the at least one predictive output corresponding to a recommended action for management of at least one resource; and publishing, by the at least one processor, the at least one predictive output to a downstream application. Under Step 1, the claim recites at least one step or act, including aggregating data from at least one source. Thus the claims fall within one of the statutory categories of invention. 
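The recited steps trace a conventional aggregate/train/predict/publish pipeline. As a purely illustrative sketch of that flow, with every name and the toy thresholding model hypothetical (the claim language quoted above discloses no implementation at this level):

```python
# Hypothetical sketch of the claim 1 pipeline: aggregate -> data product ->
# train -> determine predictive output -> publish. Not from the application.
from dataclasses import dataclass

@dataclass
class DataProduct:
    rows: list  # a structured data set built from the aggregated sources

def aggregate(sources):
    # "aggregating ... data from at least one source" (API access elided)
    return [record for src in sources for record in src]

def generate_data_product(data):
    # "generating ... at least one data product based on the aggregated data"
    return DataProduct(rows=sorted(data, key=lambda r: r["resource"]))

def train(product):
    # "training ... at least one first model": a trivial per-resource
    # mean-usage model stands in for whatever model the claims cover
    usage = {}
    for r in product.rows:
        usage.setdefault(r["resource"], []).append(r["usage"])
    return {res: sum(v) / len(v) for res, v in usage.items()}

def predict(model, product, capacity=0.8):
    # "determining ... at least one predictive output ... corresponding to a
    # recommended action for management of at least one resource"
    latest = {r["resource"]: r["usage"] for r in product.rows}
    return {res: ("rebalance" if (mean + latest[res]) / 2 > capacity else "hold")
            for res, mean in model.items()}

def publish(outputs):
    # "publishing ... to a downstream application" (stubbed as print)
    for res, action in outputs.items():
        print(res, action)

sources = [[{"resource": "desk", "usage": 0.9}],
           [{"resource": "room", "usage": 0.4}]]
product = generate_data_product(aggregate(sources))
publish(predict(train(product), product))
```

Under the examiner's framing, each of these functions maps onto a step that could be performed mentally or on paper; the sketch is only meant to make the recited flow concrete.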
Under Step 2A Prong One, the limitations recited in claim 1 for aggregating data from at least one source, generating at least one data product based on the aggregated data, training at least one first model by using the generated at least one data product, determining at least one predictive output by using the trained at least one first model and the generated at least one data product, and publishing the at least one predictive output to a downstream application, as drafted, illustrate a process that, under its broadest reasonable interpretation, falls within the certain methods of organizing human activity grouping of abstract ideas because the claim recites limitations for collecting, analyzing, and manipulating data using a model, and publishing an output related to end user behavior and resource management for determining availability and usage of equipment, workspaces, and other resources in a building (see at least Figure 6 of the Drawings). Further, the claims fall within the mental processes grouping of abstract ideas because a resource manager could gather data from multiple sources, analyze and manipulate the data using pen and paper, and determine a recommended resource management plan mentally. Mental processes remain unpatentable even when automated to reduce the burden on the user of what once could have been done with pen and paper.

Under Step 2A Prong Two, the judicial exception of claim 1 is not integrated into a practical application. In particular, the claims recite a processor, application programming interface (API), memory, and communication interface for performing the recited steps. These elements are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) and amount to no more than mere instructions to apply the exception using generic computer components. See MPEP 2106.05(f).
For example, Applicant’s specification at paragraph [0041] states: “The processor 104 is an article of manufacture and/or a machine component. The processor 104 is configured to execute software instructions in order to perform functions as described in the various embodiments herein. The processor 104 may be a general-purpose processor or may be part of an application specific integrated circuit (ASIC).” The Specification does not provide additional details about the computer system that would distinguish it from any generic processing devices that communicate with one another in a network environment. Adding generic computer components to perform generic functions, such as data gathering, performing calculations, and outputting a result, would not transform the claim into eligible subject matter. See MPEP 2106.05(h). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The step for publishing merely outputs and transmits data, and is insignificant post-solution activity: merely presenting the results of abstract processes of collecting and analyzing information, without more, is abstract as an ancillary part of such collection and analysis. Using an API for implementing the functionality of the aggregating step does not amount to implementing the judicial exception with a particular machine or manufacture, effecting a particular transformation or reduction of an article, or applying the judicial exception in some other meaningful way. There is nothing about the combination of a processor and use of an API beyond the individual benefits from each of these technological requirements.
Under Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of a processor, API, and storage device amount to no more than mere instructions to apply the exception using a generic computer component, which cannot provide an inventive concept. See MPEP 2106.05. Dependent claims 2-9, 11-18, and 20 include the abstract ideas of the independent claims. The limitations of the dependent claims merely narrow the mental process/method of organizing human activity abstract idea by describing the type of data used in the data analysis steps and the type of desired output, the type of mathematical model used to analyze the data, and a list of data sources. The limitations of the dependent claims are not integrated into a practical application because none of the additional elements set forth any limitations that meaningfully limit the abstract idea implementation. There are no additional elements that transform the claim into a patent eligible idea by amounting to significantly more. The analysis above applies to all statutory categories of invention. Accordingly, independent claims 10 and 19 and the claims that depend therefrom are rejected as ineligible for patenting under 35 U.S.C. 101 based upon the same analysis applied to claim 1 above. Therefore, claims 1-20 are ineligible under 35 U.S.C. 101.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 5-10, and 14-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Phillips et al. (US 10,810,528).

Regarding Claim 1, Phillips et al. teaches a method for facilitating resource management by using predictive analytics, the method being implemented by at least one processor, the method comprising: (… the method includes evaluating the set of current resource utilizations for each candidate facility in the set of candidate facilities with one or more machine learning models to determine a set of available facilities … and determining one or more recommended facilities to provide as output. Phillips et al. [col. 2, lines 31-62]. … computing architecture 600 includes various common computing elements, such as one or more processors. Phillips et al. [col. 18, lines 49-67; col. 19, lines 1-5; Fig. 2-4B, 6]); aggregating, by the at least one processor via an application programming interface, data from at least one source, the data including at least one from among end user data, resource data, and influential factor data; (… the request manager 104 may have access to data associated with enterprise facilities via at least one application programming interface (API). Phillips et al. [col. 7, lines 15-30].
… recommendation of enterprise resource utilization strategies based on historical, current, and/or future resource availability and/or utilization may be implemented in a practical application to increase capabilities and improve adaptability of enterprise systems. Phillips et al. [col. 4, lines 45-67; col. 5, lines 1-25]. … request analyzer 210 may generate a candidate facility set according to the requisite resources 214 identified as relating to the product request 102, location 212, user data 334, facility data 336, or any combination thereof. Phillips et al. [col. 10, lines 43-67; col. 11, lines 35-67]). generating, by the at least one processor, at least one data product based on the aggregated data, the at least one data product including at least one from among a structured data set, an application, and a tool; (… request manager 104 may include a machine learning model 226 used to further assess the availability of facilities within the candidate facility set 220 to fulfill the product request 102. The machine learning model 226 may be used to probabilistically determine the availability of facilities in the candidate facility set to fulfill the product request 102 by estimated completion times. Phillips et al. [col. 9, lines 17-67]. … resource utilization analyzer 222 may be coupled with a storage system which may contain data structures, such as one or more databases, to store information and data including the status data 348. … aspects of status data 348 may be updated in coordination with data received via at least one API. Phillips et al. [col. 11, lines 55-67; col. 12, lines 1-50].); training, by the at least one processor, at least one first model by using the generated at least one data product; (… current resource utilizations 224 may be analyzed using a machine learning model 226. The machine learning model 226 may be trained on or built with historical resource utilizations 346 and associated completion times of product requests. Phillips et al.
[col. 12, lines 53-67]. … historical resource utilizations 346 in the resource utilization datastore 344 may be received by a model trainer 456. The model trainer 456 may use a machine learning algorithm 458-A to analyze the historical resource utilizations 346. The model trainer 456 may also consider less persistent data,… Using the results, the model trainer 456 may generate a machine learning model 426-A. A machine learning model may include the predicted availability of a facility to fulfill a product request 102 by considering not only current resource utilizations 224, but also historic trends in availability. Phillips et al. [col. 13, lines 5-35; col. 16, lines 5-55]); determining, by the at least one processor, at least one predictive output by using the trained at least one first model and the generated at least one data product, each of the at least one predictive output corresponding to a recommended action for management of at least one resource; (… system may generate output comprising the one or more recommended facilities. For example, the output may be a recommended facility set 106. Output may comprise a list or array of the recommended facilities. Phillips et al. [col. 16, lines 24-60]); and publishing, by the at least one processor, the at least one predictive output to a downstream application. (… the output may be provided to the user via a user interface, such as a GUI. Phillips et al. [col. 17, lines 60-65]). Regarding Claim 5, Phillips et al. teaches the method of claim 1, wherein the at least one predictive output includes at least one from among synthetic sensor data, resource design data, resource organization data, and resource load balancing data, the resource load balancing data relating to an optimization of at least one resource based on usage demand and cost. 
(The model trainer 456 may also consider less persistent data, for example, satellite data or other sensor- or internet-collected data recognizing current or impending resource utilizations. Using the results, the model trainer 456 may generate a machine learning model 426-A. A machine learning model may include the predicted availability of a facility to fulfill a product request 102 by considering not only current resource utilizations 224, but also historic trends in availability. Phillips et al. [col. 13, lines 5-30]. … the output may be a recommended facility set 106. Output may comprise a list or array of the recommended facilities. In various embodiments, the output may include estimated completion times for the product request with the facilities. In some examples, the output may order the facilities according to the estimated completion times. Phillips et al. [col. 16, lines 38-50]). Regarding Claim 6, Phillips et al. teaches the method of claim 1, further comprising: identifying, by the at least one processor using the at least one first model, at least one data theme for each of the at least one resource, wherein each of the at least one data theme includes an impact determination for the at least one resource and a corresponding listing of at least one contributing metric. (… a facility that has historically had resource utilization indicating an appropriate workload may be recognized as being understaffed if a roster of facility employees is updated to show an employee no longer works at there. In some embodiments, the machine learning algorithm 458-B may consider data reflecting the use of other facilities. For example, a facility may be predicted to receive more traffic and therefore have higher resource utilization if a nearby facility is closed. Estimated overuse or underuse of resources may be probabilistically scored in a resource utilization score. Phillips et al. [col. 14, lines 1-65]). Regarding Claim 7, Phillips et al. 
teaches the method of claim 1, wherein the end user data includes at least one from among a workplace endpoint that relates to an end user, badge swipe data that relates to the end user, meeting metadata that relates to the end user, email metadata that relates to the end user, instant messaging data that relates to the end user, telephonic call metadata that relates to the end user, video conferencing metadata that relates to the end user, travel pattern data that relates to the end user, meeting room usage data that relates to the end user, and application usage data that relates to the end user. (… the user may input data using a user interface, such as a GUI, including their name, phone number, email address, location, product desired, … If the user made the request while logged into an online account associated with their used services at the enterprise, information about their account may be included in the request. The product request 102 may also include information from other sources, including the mobile device or computer used to make the request and other APIs and applications used by the user. Phillips et al. [col. 6, lines 36-65]). Regarding Claim 8, Phillips et al. teaches the method of claim 1, wherein the resource data includes at least one from among building capacity data that relates to an end user, existing booking data that relates to the end user, desk availability data that relates to the end user, planned meeting data that relates to the end user, manager in-office data that relates to the end user, co-worker in-office data that relates to the end user, and expected in-office time data that relates to the end user.
(…data may include the facility address, services provided at a facility, the presence of equipment and supply quantities needed to provide products, employees present at the facility, employees logged into a system at a facility, such as an electronic time card system, employees currently not engaged in or soon to be engaged in another task, and skills in which present employees have been trained. … Further examples of data associated with an enterprise facility include the number of customers at a facility in a waiting queue, regular traffic patterns, prescheduled appointments, events scheduled at the facility, including events scheduled for customers and for employees, and appointments requested by walk-in customers. Further examples of data associated with an enterprise facility include utility statuses of the facility. Phillips et al. [col. 6, lines 30-67]). Regarding Claim 9, Phillips et al. teaches the method of claim 1, wherein the influential factor data includes at least one from among distance-to-office data that relates to an end user, weather data that relates to the end user, traffic condition data that relates to the end user, internal/first-party event data that relates to the end user, and external/third-party event data that relates to the end user. (…data may include the facility address, services provided at a facility, the presence of equipment and supply quantities needed to provide products, employees present at the facility, employees logged into a system at a facility, such as an electronic time card system, employees currently not engaged in or soon to be engaged in another task, and skills in which present employees have been trained. 
… Further examples of data associated with an enterprise facility include the number of customers at a facility in a waiting queue, regular traffic patterns, prescheduled appointments, events scheduled at the facility, including events scheduled for customers and for employees, and appointments requested by walk-in customers. Further examples of data associated with an enterprise facility include utility statuses of the facility. Phillips et al. [col. 6, lines 30-67]). Regarding Claims 10 and 14-18, Claims 10 and 14-18 recite substantially similar limitations to those of claims 1 and 5-9 respectively and are therefore rejected based upon the same prior art reference, reasoning, and rationale. Claims 10 and 14-18 are directed to a computing apparatus for facilitating resource management by using predictive analytics, the computing apparatus comprising: a processor; a memory; and a communication interface coupled to each of the processor and the memory, which is taught by Phillips et al. [col. 19, lines 1-23; Fig. 6]: The terms “system” and “component” and “module” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by the exemplary computing architecture 600. For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. Regarding Claim 19, Claim 19 recites substantially similar limitations to those of claim 1 and is therefore rejected based upon the same prior art reference, reasoning, and rationale.
Claim 19 is directed to a non-transitory computer readable storage medium storing instructions for facilitating resource management by using predictive analytics, the storage medium comprising executable code which, when executed by a processor, which is taught by Phillips et al. [col. 19, lines 1-23; Fig. 6]: The terms “system” and “component” and “module” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by the exemplary computing architecture 600. For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 2, 11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Phillips et al. (US 10,810,528) in view of Kenyon et al. (US 2023/0186189).

Regarding Claim 2, Phillips et al. fails to explicitly disclose the method, further comprising: identifying, by the at least one processor using at least one second model, at least one behavioral segment based on the end user data, each of the at least one behavioral segment relating to a grouping of a plurality of end users based on at least one shared attribute; and identifying, by the at least one processor using the at least one second model, at least one preference characteristic for each of the at least one behavioral segment based on the end user data. Kenyon et al. discloses this limitation. (… facility data may be collected from reservation systems. For example, organizations that enable their employees to reserve a space (e.g., workspace or conference room) may collect the reservation information, along with data about the person who made the reservation and/or a list of persons that will be at the reserved space. This information can be used to determine a user's location both in the past and in the future and/or identify the user's preferred features. Kenyon et al. [para. 0005, 0032]. ….
When, however, location information is not directly available, the user location identifier engine 150 may intelligently determine the user's current or future location using the user data 140 (e.g., current or past location data), contextual data 142, facility data 144 and/or map data 146. This may involve use of one or more ML models, such as the user location identifier model 152, for predicting the user's location based on past behavior and/or patterns in user behavior. … user location identifier model 152 may receive the user data 140, contextual data 142, facility data 144 and/or map data 146 as inputs and analyze the data to determine or predict one or more users' current or future locations based on patterns in user behavior and/or other parameters. Kenyon et al. [para. 0034-0041]). It would have been obvious to one of ordinary skill in the art of facilities and resource management before the effective filing date of the claimed invention to modify the steps for automatically monitoring facility resource utilization, analyzing trends in resource utilization, and/or dynamically distributing product requests taught by Phillips et al. to include identifying, by the at least one processor using at least one second model, at least one behavioral segment based on the end user data, each of the at least one behavioral segment relating to a grouping of a plurality of end users based on at least one shared attribute; and identifying, by the at least one processor using the at least one second model, at least one preference characteristic for each of the at least one behavioral segment based on the end user data as disclosed by Kenyon et al. for determining optimal uses for the one or more physical spaces in a future time period, receiving as an output from the trained ML model suggested plans for use or management of the one or more physical spaces in the future time period, and providing the suggested plans for display in a UI screen (Kenyon et al. [para.
0006]), in a manner that would have yielded predictable results at the relevant time. Regarding Claim 11, Claim 11 recites substantially similar limitations to those of claim 2 and is therefore rejected based upon the same prior art reference, reasoning, and rationale. Claim 11 is directed to a computing apparatus for facilitating resource management by using predictive analytics, the computing apparatus comprising: a processor; a memory; and a communication interface coupled to each of the processor and the memory, which is taught by Phillips et al. [col. 19, lines 1-23; Fig. 6]: The terms “system” and “component” and “module” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by the exemplary computing architecture 600. For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. Regarding Claim 20, Claim 20 recites substantially similar limitations to those of claim 2 and is therefore rejected based upon the same prior art reference, reasoning, and rationale. Claim 20 is directed to a non-transitory computer readable storage medium storing instructions for facilitating resource management by using predictive analytics, the storage medium comprising executable code which, when executed by a processor, which is taught by Phillips et al. [col. 19, lines 1-23; Fig. 6]: The terms “system” and “component” and “module” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by the exemplary computing architecture 600.
For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. Claims 3-4 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Phillips et al. (US 10,810,528) in view of Kenyon et al. (US 2023/0186189), and in further view of Davis et al. (US 2023/0148149). Regarding Claim 3, Phillips et al. and Kenyon et al. combined disclose the method, further comprising: determining, by the at least one processor using at least one third model, at least one usage forecast based on the at least one behavioral segment, the corresponding at least one preference characteristic, the resource data, the influential factor data, and at least one predetermined criterion; and (… a system can be trained using data generated by a ML model in order to identify patterns in user activity, determine associations between various users and/or user actions, predict user locations, identify user preferences and the like. Such training may be made following the accumulation, review, and/or analysis of data (e.g., user data and facility data) over time. Such data is configured to provide the ML algorithm (MLA) with an initial or ongoing training set. … the ML model trainer is configured to automatically generate multiple different ML models from the same or similar training data for comparison. Kenyon et al. [para. 0034-0040]). It would have been obvious to one of ordinary skill in the art of facilities and resource management before the effective filing date of the claimed invention to modify the steps for automatically monitoring facility resource utilization, analyzing trends in resource utilization, and/or dynamically distributing product requests taught by Phillips et al. 
to include determining, by the at least one processor using at least one third model, at least one usage forecast based on the at least one behavioral segment, the corresponding at least one preference characteristic, the resource data, the influential factor data, and at least one predetermined criterion as disclosed by Kenyon et al. for determining optimal uses for the one or more physical spaces in a future time period, receiving as an output from the trained ML model suggested plans for use or management of the one or more physical spaces in the future time period, and providing the suggested plans for display in a UI screen (Kenyon et al. [para. 0006]), in a manner that would have yielded predictable results at the relevant time. Phillips et al. and Kenyon et al. combined fail to explicitly disclose determining, by the at least one processor, at least one cost allocation for each of the at least one usage forecast, wherein the at least one predetermined criterion includes at least one from among an organizational criterion and a user criterion. Davis et al. discloses this limitation. (A BMS may include one or more computer systems (e.g., servers, BMS controllers, etc.) that serve as enterprise level controllers, application or data servers, head nodes, master controllers, or field controllers for the BMS. Such computer systems may communicate with multiple downstream building systems or subsystems. Davis et al. [para. 0033-0034]. … the building management platform 102 may be implemented as an “agent”, or artificial intelligent/machine learning component configured to facilitate communication and collection of data between the variety of different data sources. Davis et al. [para. 0037]. … Demand response layer 514 can be configured to determine (e.g., optimize) resource usage (e.g., electricity use, natural gas use, water use, etc.) and/or the monetary cost of such resource usage to satisfy the demand of building 10.
The resource usage determination can be based on time-of-use prices, curtailment signals, energy availability, or other data. Davis et al. [para. 0080-0086]. … baseline calculator 610 may determine a dynamic baseline by calculating the average of historical resource (electricity) consumption values for the historical time period. Davis et al. [para. 0103-0105]).

It would have been obvious to one of ordinary skill in the art of resource and cost management before the effective filing date of the claimed invention to modify the data analysis steps of Phillips et al. and Kenyon et al. combined to include determining, by the at least one processor, at least one cost allocation for each of the at least one usage forecast, wherein the at least one predetermined criterion includes at least one from among an organizational criterion and a user criterion as disclosed by Davis et al. for reducing operational costs (Davis et al. [para. 0006]), in a manner that would have yielded predictable results at the relevant time.

Regarding Claim 4, Phillips, Kenyon et al., and Davis et al. combined disclose the method, wherein each of the at least one first model, the at least one second model, and the at least one third model includes at least one from among a large language model, a deep learning model, a neural network model, a natural language processing model, a machine learning model, a mathematical model, and a process model. (… different underlying MLAs, such as, but not limited to, decision trees, random decision forests, neural networks, deep learning (for example, convolutional neural networks), support vector machines, regression (for example, support vector regression, Bayesian linear regression, or Gaussian process regression) may be trained. Kenyon et al. [para. 0036]).
It would have been obvious to one of ordinary skill in the art of facilities and resource management before the effective filing date of the claimed invention to modify the steps for automatically monitoring facility resource utilization, analyzing trends in resource utilization, and/or dynamically distributing product requests taught by Phillips et al. to include each of the at least one first model, the at least one second model, and the at least one third model includes at least one from among a large language model, a deep learning model, a neural network model, a natural language processing model, a machine learning model, a mathematical model, and a process model as disclosed by Kenyon et al. for determining optimal uses for the one or more physical spaces in a future time period, receiving as an output from the trained ML model suggested plans for use or management of the one or more physical spaces in the future time period, and providing the suggested plans for display in a UI screen (Kenyon et al. [para. 0006]), in a manner that would have yielded predictable results at the relevant time.

Regarding Claims 12-13, these claims recite substantially similar limitations to those of claims 3-4 respectively and are therefore rejected based upon the same prior art references, reasoning, and rationale. Claims 12-13 are directed to a computing apparatus for facilitating resource management by using predictive analytics, the computing apparatus comprising: a processor; a memory; and a communication interface coupled to each of the processor and the memory, which is taught by Phillips et al. [col. 19, lines 1-23; Fig. 6]: The terms “system” and “component” and “module” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by the exemplary computing architecture 600.
For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure:

Gladwin et al. (US 2019/0325355) - an Application Program Interface (API) that accesses a directory and live record of assets (from floor to floor) to create a live map to identify capacity. Capacity may include what desks are free and who is sitting at what desk. An embodiment of the present invention may record this information on a day-by-day basis and generate capacity maps on a periodic basis, e.g., month to month, etc. Demand response layer 414 may be configured to optimize resource usage (e.g., electricity use, natural gas use, water use, etc.) and/or the monetary cost of such resource usage to satisfy the demand of building 10. The optimization may be based on time-of-use prices, curtailment signals, energy availability, or other data received from utility providers, distributed energy generation systems 424, from energy storage 427 (e.g., hot TES 242, cold TES 244, etc.), or from other sources.

Zhang et al. (US 2024/0110717) - a method for controlling building equipment includes providing an occupancy prediction for a building using an occupancy prediction model that uses both historical values and forecast values of an environmental condition as inputs. The method also includes controlling the building equipment based on the occupancy prediction.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LETORIA G KNIGHT whose telephone number is (571)270-0485. The examiner can normally be reached M-F 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao WU, can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/L.G.K/Examiner, Art Unit 3623
/RUTAO WU/Supervisory Patent Examiner, Art Unit 3623

Prosecution Timeline

Sep 24, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579488
METHODS AND SYSTEMS FOR OPTIMIZING VALUE IN CERTAIN DOMAINS
2y 5m to grant Granted Mar 17, 2026
Patent 12536552
HUMANOID SYSTEM FOR AUTOMATED CUSTOMER SUPPORT
2y 5m to grant Granted Jan 27, 2026
Patent 12499400
Sensor Input and Response Normalization System for Enterprise Protection
2y 5m to grant Granted Dec 16, 2025
Patent 12380409
METHODS AND SYSTEMS FOR EXPLOITING VALUE IN CERTAIN DOMAINS
2y 5m to grant Granted Aug 05, 2025
Patent 12373748
SYSTEMS AND METHODS OF ASSIGNING MICROTASKS OF WORKFLOWS TO TELEOPERATORS
2y 5m to grant Granted Jul 29, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
27%
Grant Probability
73%
With Interview (+46.5%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 173 resolved cases by this examiner. Grant probability derived from career allow rate.
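The "With Interview" figure above is consistent with simple percentage-point arithmetic: the 27% base grant probability plus the +46.5% interview lift. A minimal sketch of that calculation, assuming the lift is additive in percentage points and capped at 100% (the tool's actual model is not disclosed; the function name and the cap are my own):

```python
def grant_probability_with_interview(base_rate_pct: float, lift_pct: float) -> float:
    """Combine a base grant probability with an interview lift,
    treating the lift as additive percentage points, capped at 100."""
    return min(base_rate_pct + lift_pct, 100.0)

# Figures from this report: 27% career allow rate, +46.5% interview lift.
combined = grant_probability_with_interview(27.0, 46.5)
print(combined)  # 73.5 — consistent with the rounded "73% With Interview" shown above
```

The displayed 73% likely reflects rounding of unrounded underlying rates; the sketch only shows that the three published numbers agree under an additive model.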
