Prosecution Insights
Last updated: April 19, 2026
Application No. 18/016,558

COLLABORATIVE, MULTI-USER PLATFORM FOR DATA INTEGRATION AND DIGITAL CONTENT SHARING

Non-Final OA (§101, §103)
Filed
Jan 17, 2023
Examiner
ALSTON, FRANK MAURICE
Art Unit
3625
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Locomex Inc.
OA Round
3 (Non-Final)
Grant Probability: 0% — At Risk
Projected OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability (with interview): 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases — 0 granted / 16 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 3y 0m (typical timeline)
Currently Pending: 32
Total Applications: 48 (career history, across all art units)

Statute-Specific Performance

§101: 40.6% (+0.6% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 2.6% (-37.4% vs TC avg)
Based on career data from 16 resolved cases; Tech Center average is an estimate.

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/23/2025 has been entered.

Status of Claims

This is a Non-Final Action on the merits in response to the application filed on 12/23/2025. Claims 1, 7 – 8, 14 – 15, and 20 are amended. Claims 1 – 2, 4 – 5, 7 – 9, 11 – 12, 14 – 16, and 18 – 20 are pending in this application.

Response to Remarks

Examiner's Response to Remarks: II. Rejection of Claims 1 – 20 under 35 U.S.C. § 101; III. Rejection of Claims 1 – 20 under 35 U.S.C. § 103.

Examiner's Response to II. Rejection of the Claims under 35 U.S.C. § 101. Applicant argues that the amended independent claims recite additional elements that integrate the judicial exception into a practical application and therefore satisfy patent eligibility under 35 U.S.C. § 101 according to the considerations of Step 2A, Prong Two (MPEP § 2106, subsection III, Step 2A). Examiner respectfully disagrees. Applicant's independent claims recite the abstract idea of certain methods of organizing human activity; specifically, the claims recite activity that manages interactions between a human and a computer. Claims 1, 8, and 15 do not recite additional elements that integrate the judicial exception into a practical application.
Applicant's independent claims recite the additional elements of a collaborative, multi-user system, comprising: one or more computer-readable media storing a platform and data associated with one or more users and one or more projects; a processing device configured to execute the platform to: generate one or more graphical user interfaces with the platform; execute an ensemble of machine learning models having a tiered hierarchical configuration to: extract data from unstructured documents using natural language processing (NLP); generate recommendations using a first tier of the tiered hierarchical configuration, the first tier comprising at least two machine learning models configured to be executed in parallel; by automatically or semi-automatically generating data; and the tracking comprising updating the one or more graphical user interfaces to automatically display supplier diversity and local content associated with the one or more projects based on the tracked status; a method for a collaborative, multi-user system; and a non-transitory computer-readable medium storing instructions, wherein execution of the instructions by a processing device. However, these are merely generic computer components per Applicant's Spec. ¶ 0116. Even though Applicant recites using natural language processing (NLP), an ensemble of machine learning models having a tiered hierarchical configuration, at least two machine learning models executed in parallel, and one or more graphical user interfaces that automatically display, the independent claims do not recite additional elements that amount to significantly more than the judicial exception and merely recite the words "apply it".
Regarding improvement, Applicant recites from Remarks, Step 2B, ¶ 6: “the claimed hierarchy of tiers broadens the scope of available models that can be used in the first prediction tier (thereby improving the accuracy and broadening the applicability of the predictive tool), and improves the resulting efficiency and accuracy of project classification based on how the second classification tier operates on the predictions of the first tier.” However, there is no improvement to the computer or to a technological field; Applicant's claims resolve a business problem, namely connecting businesses and government agencies on a collaborative platform and using modeling to determine predictions and recommendations for projects. Although Applicant recites a tiered hierarchical configuration of the machine learning models, these additional elements are recited at a high level of generality. Accordingly, the independent claims do not recite additional elements that integrate the judicial exception into a practical application. Applicant's claims as a whole are not significantly more than the judicial exception, and the limitations of the dependent claims encompass the same abstract idea and are not integrated into a practical application because none of the additional elements sets forth any limitation that meaningfully limits the abstract idea's implementation. For the reasons above, claims 1 – 2, 4 – 5, 7 – 9, 11 – 12, 14 – 16, and 18 – 20 are rejected under 35 U.S.C. § 101.

III. Rejection of the Claims under 35 U.S.C. § 103. Applicant argues the amended independent claims should not be rejected under 35 U.S.C. § 103 as allegedly being unpatentable over U.S. Patent No. 7,925,568 ("Cullen"), in view of U.S. Publication No. 2018/0060759 ("Chu"), in view of U.S. Publication No. 2020/0065772 ("Whitehead"). Examiner respectfully disagrees. Applicant has amended claims 1, 7 – 8, 14 – 15, and 20.
A new search was necessitated by the amendments to the independent claims, and new art has been applied to those independent claims. Accordingly, claims 1 – 2, 4 – 5, 7 – 9, 11 – 12, 14 – 16, and 18 – 20 are rejected under 35 U.S.C. § 103.

Claim Rejections – 35 U.S.C. § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 – 2, 4 – 5, 7 – 9, 11 – 12, 14 – 16, and 18 – 20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 8, and 15: the users interact to facilitate supplier diversity planning and supply chain localization delivery for one or more phases of the one or more projects; by predicting risk scores of candidate suppliers and of a subset of the candidate suppliers, and by predicting environmental, social and governance (ESG) scores for the one or more projects, using the extracted data; provide the recommendations as inputs to a second tier of the tiered hierarchical configuration, the second tier configured to solve a classification problem; and solve the classification problem, using the second tier, based on the recommendations to provide classifications for the one or more projects; create proposal or bid documents for pre-bid intelligent reports, using the classifications for the one or more projects; and track a status of the one or more projects based on the classifications. The limitations of claim 1, under their broadest reasonable interpretation, recite certain methods of organizing human activity. In particular, the claim recites the activity of managing interactions between a human and a computer.
For example, the claims recite: the users interact to facilitate supplier diversity planning and supply chain localization delivery for one or more phases of the one or more projects; predicting risk scores of candidate suppliers and of a subset of the candidate suppliers, and predicting environmental, social and governance (ESG) scores for the one or more projects, using the extracted data; provide the recommendations as inputs to a second tier of the tiered hierarchical configuration, the second tier configured to solve a classification problem; solve the classification problem, using the second tier, based on the recommendations to provide classifications for the one or more projects; create proposal or bid documents for pre-bid intelligent reports, using the classifications for the one or more projects; and track a status of the one or more projects based on the classifications. These limitations all involve activity between a person and a computer. Accordingly, claim 1 recites certain methods of organizing human activity. Claims 8 and 15 recite substantially the same subject matter as claim 1 and recite the same abstract idea. The dependent claims encompass the same abstract idea as well.
For instance, claims 2, 9, and 16 are directed towards evaluating an ensemble of machine learning models based on training data that includes at least supplier data and project data; claims 4, 11, and 18 are directed towards observing that the at least two machine learning models of the first tier include a random forest model and an extreme gradient boost model; claims 5, 12, and 19 are directed towards observing that at least one machine learning model of the second tier is a neural network model; and claims 7, 14, and 20 are directed towards observing that at least one machine learning model in the second tier receives outputs from the at least two machine learning models in the first tier and generates a final output based on the outputs from the at least two machine learning models. Accordingly, the dependent claims encompass the same abstract idea.

These judicial exceptions are not integrated into a practical application. Claim 1 recites the additional elements of a collaborative, multi-user system, comprising: one or more computer-readable media storing a platform and data associated with one or more users and one or more projects; a processing device configured to execute the platform to: generate one or more graphical user interfaces with the platform; execute an ensemble of machine learning models having a tiered hierarchical configuration to: extract data from unstructured documents using natural language processing (NLP); generate recommendations using a first tier of the tiered hierarchical configuration, the first tier comprising at least two machine learning models configured to be executed in parallel; by automatically or semi-automatically generating data; and the tracking comprising updating the one or more graphical user interfaces to automatically display supplier diversity and local content associated with the one or more projects based on the tracked status.
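For context, the tiered configuration recited in these dependent claims — a first tier of at least two models run in parallel, feeding a second-tier neural-network classifier — corresponds to a standard stacked-ensemble pattern. A minimal sketch using scikit-learn, with `GradientBoostingClassifier` standing in for the extreme gradient boost model and synthetic data standing in for the supplier/project data (all names and parameters here are illustrative, not drawn from the application):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    GradientBoostingClassifier,
    RandomForestClassifier,
    StackingClassifier,
)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for supplier/project training data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# First tier: two models, fit in parallel via n_jobs=-1.
first_tier = [
    ("random_forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("gradient_boost", GradientBoostingClassifier(random_state=0)),  # stand-in for XGBoost
]

# Second tier: a small neural network that receives the first-tier
# outputs (cross-validated predictions) and solves the final
# classification problem.
ensemble = StackingClassifier(
    estimators=first_tier,
    final_estimator=MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
    n_jobs=-1,
)
ensemble.fit(X_train, y_train)
print(ensemble.score(X_test, y_test))
```

`StackingClassifier` routes the first-tier predictions into the final estimator automatically, which matches the "outputs of the first tier become inputs of the second tier" structure recited in claims 7, 14, and 20.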
In addition to reciting the additional elements of claim 1, claim 8 recites the additional elements of a method for a collaborative, multi-user system; and in addition to reciting the additional elements of claim 1, claim 15 recites the additional elements of a non-transitory computer-readable medium storing instructions, wherein execution of the instructions by a processing device. However, the additional elements of a collaborative, multi-user system, comprising: one or more computer-readable media storing a platform and data associated with one or more users and one or more projects; a processing device configured to execute the platform to: generate one or more graphical user interfaces with the platform; execute an ensemble of machine learning models having a tiered hierarchical configuration to: extract data from unstructured documents using natural language processing (NLP); generate recommendations using a first tier of the tiered hierarchical configuration, the first tier comprising at least two machine learning models configured to be executed in parallel; by automatically or semi-automatically generating data; and the tracking comprising updating the one or more graphical user interfaces to automatically display supplier diversity and local content associated with the one or more projects based on the tracked status; a method for a collaborative, multi-user system; and a non-transitory computer-readable medium storing instructions, wherein execution of the instructions by a processing device, are considered generic computer components as per Applicant's Specification, shown below:

[0116] The computing device 300 can include a network interface 312 configured to interface via one or more network devices 320 with one or more networks, for example, Local Area Network (LAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (for example, 802.11, T1, T3, 56kb, X.25), broadband connections (for example, ISDN, Frame Relay, ATM), wireless connections, controller area network (CAN), or some combination of any or all of the above. The network interface 312 may include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing the computing device 300 to any type of network capable of communication and performing the operations described herein. Moreover, the computing device 300 may be any computer system, such as a workstation, desktop computer, server, laptop, handheld computer, tablet computer (e.g., the iPad™ tablet computer), mobile computing or communication device (e.g., the iPhone™ communication device), point-of-sale terminal, internal corporate devices, or other form of computing or telecommunications device that is capable of communication and that has sufficient processor power and memory capacity to perform the processes and/or operations described herein.

Thus, these elements are not practically integrated and are not significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As stated above, a collaborative, multi-user system, comprising: one or more computer-readable media storing a platform and data associated with one or more users and one or more projects; a processing device configured to execute the platform to: generate one or more graphical user interfaces with the platform; execute an ensemble of machine learning models having a tiered hierarchical configuration to: extract data from unstructured documents using natural language processing (NLP); generate recommendations using a first tier of the tiered hierarchical configuration, the first tier comprising at least two machine learning models configured to be executed in parallel; by automatically or semi-automatically generating data; and the tracking comprising updating the one or more graphical user interfaces to automatically display supplier diversity and local content associated with the one or more projects based on the tracked status; a method for a collaborative, multi-user system; and a non-transitory computer-readable medium storing instructions, wherein execution of the instructions by a processing device, are considered generic computer components performing generic computer functions and amount to no more than mere instructions using generic computer components to implement the judicial exception. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Dependent claims 2, 4 – 5, 7, 9, 11 – 12, 14, 16, and 18 – 20, when analyzed both individually and in combination, are also held to be ineligible for the same reasons above, and the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The additional limitations of the dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea.
Considered both individually and as an ordered combination, these limitations add nothing that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use generic computer components to "apply" the recited abstract idea. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claims as a whole amount to significantly more than the abstract idea itself. Therefore, claims 1 – 2, 4 – 5, 7 – 9, 11 – 12, 14 – 16, and 18 – 20 are not patent eligible.

Claim Rejections – 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue. Resolving the level of ordinary skill in the pertinent art. Considering objective evidence present in the application indicating obviousness or nonobviousness.

6. Claims 1 – 2, 4 – 5, 7 – 9, 11 – 12, 14 – 16, and 18 – 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Cullen III, Andrew A. et al. (U.S. Patent No. 7,925,568), hereinafter "Cullen", in view of Chu, Chengwen Robert et al. (U.S. Publication No. 2018/0060759), hereinafter "Chu", in view of Cao, Yang et al., "Prediction of unit price bids of resurfacing highway projects through ensemble machine learning," Journal of Computing in Civil Engineering 32.5 (2018), hereinafter "Cao", in view of Whitehead, Christina R. et al. (U.S. Publication No. 2020/0065772), hereinafter "Whitehead".

Claims 1, 8, and 15: Cullen teaches the following: A collaborative, multi-user system, comprising: one or more computer-readable media storing a platform and data associated with one or more users and one or more projects; Cullen teaches in claim 20 a computer-usable medium having computer-readable program code embodied therein, the computer-readable program code adapted to be executed to implement a method. Cullen teaches in col. 1, lines 65 – 67, and col. 2, lines 1 – 4, that the computer system and method is capable of producing analytical data related to the project bid management system. Cullen further teaches in col. 2, lines 4 – 10, that one or more vendors and one or more buyers may use the system, where buyers and vendors may be likened to users and the data is associated with multiple buyers, multiple vendors, and multiple projects.
and a processing device configured to execute the platform to: generate one or more graphical user interfaces through which the users interact with the platform to facilitate supplier diversity planning and supply chain localization delivery for one or more phases of the one or more projects; Cullen teaches in col. 4, lines 11 – 15, steps in a project administration setup process, in which the project is awarded to a vendor and the terms and conditions of the project are finalized and entered into the computer system to track milestones and deliverables; Cullen teaches a processor in col. 8, lines 4 – 7, where a processor (e.g., a microprocessor or microcontroller) within the computer loads and runs the web browser to access the data network; Cullen teaches in col. 6, lines 38 – 41, that a bid web server provides a user interface for the vendors, buyers, contractors, and administrators; Cullen teaches in col. 6, lines 50 – 51, that a user interface to the vendor users 5 is provided by the bid web server through a vendor module; Cullen teaches in col. 10, lines 9 – 13, vendors designating various personnel to manage different parts of the project for efficiency in the project, where the various personnel designated to manage may be likened to supplier diversity planning; Cullen teaches in col. 11, lines 11 – 23, pre-bid supplier activity, such as the vendor qualification data 162, which can identify the specific goods and/or services that the vendor 10 provides and the specific geographical areas that the vendor 10 is capable of supplying these goods and/or services, along with other vendor information, such as the size of the vendor, whether the vendor has insurance, whether the vendor is certified in certain industries, etc.
The buyer-defined vendor criteria data 164 can identify the specific goods and/or services that the buyer 50 desires, the specific geographical areas in which the buyer 50 wants the goods and/or services, and other buyer constraints, such as the preferred size of the vendor, requisite vendor insurance needs, requisite vendor certifications, etc. Cullen teaches in col. 12, lines 24 – 25, that vendors may use the platform for tracking deliverables during the project; Cullen teaches in col. 81, lines 34 – 42, project planning and a buyer requesting vendor information that may include supply chain information. create proposal or bid documents by automatically or semi-automatically generating data for pre-bid intelligent reports, using the classifications for the one or more projects; Cullen teaches in col. 8, line 54, create a bid request; Cullen teaches in col. 10, lines 58 – 59, pre-bid activity; and Cullen teaches in col. 99, lines 47 – 50, a project risk/failure performance exception report containing analytical data related to the performance of at-risk or non-compliant projects. Cullen teaches in col. 17, lines 42 – 53, that in addition to assigning users to specific user role positions for a bid/project process, the database table structure 300 further provides the ability to designate transactions that require approving and specific approvers for a variety of reasons. Therefore, within a table "tblApprovalLevel" 310, certain user role positions can be classified as approval positions, and for each approval position, the routing order for approval can be specified. For example, a user role position approver (Approver A) can be designated to approve all transactions generated by another user role position (User B), so that the system automatically routes all transactions from User B to Approver A.
and track a status of the one or more projects based on the classifications, supplier diversity and local content associated with the one or more projects based on the tracked status; Cullen teaches in col. 8, line 54, create a bid request; Cullen teaches in col. 10, lines 58 – 59, pre-bid activity; and Cullen teaches in col. 99, lines 47 – 50, a project risk/failure performance exception report containing analytical data related to the performance of at-risk or non-compliant projects; Cullen teaches in col. 68, lines 21 – 35, Once the project has begun, the project administrator (or buyer) can monitor the progress of the project using a time keeping system, in which contractors enter time into time cards for project work performed. The time cards can be stored to assess project performance for requisition payment information and/or to generate payment vouchers based on time worked, depending on the requisition payment information. For example, if the requisition payment amount was based, at least in part, on an anticipated number of billable hours of a particular contractor at a particular pay rate, and the contractor completed the project under the anticipated number of billable hours, the project administrator and vendor may be able to re-negotiate the requisition payment amount that was initially set for payment based on deliverables, time frames or units. Cullen teaches in col. 82, lines 42 – 58, the comparison tool 123 can be configured to monitor the database 155 for new voucher information 1160 entries or otherwise be triggered upon the entry of new voucher information 1160 to compare the entered voucher information 1160 with the previously stored project tracking parameters 870 for the project. The voucher information 1160 can contain cost, timing or other information with which to compare to the project tracking parameters 870. The results of the comparison can be stored as project performance data 1190 in the database 155. 
For example, the voucher information 1160 could indicate an invoice amount paid by the buyer 50 on a project, and the comparison tool 123 can compare the invoice amount with the requisition amount to determine if a discrepancy exists. In this case, the project performance data 1190 could include an indication of the cost status, such as under-budget, over-budget or in-budget, and the amount over or under budget, if any. Cullen teaches in col. 93, lines 43 – 54, generation of analytical data based on transactional data, where the transactional data includes at least bid data, project tracking parameters and project performance data. The transactional data is stored by the system (step 4900), as described above in connection with FIG. 52. In this process, a request for the analytical data is received from an authorized user of the system (step 4910). The request may be submitted as a search and/or sort request to select particular or general types of transactional data as collected by the system. In addition, the request may include one or more filters to narrow the amount of transactional data within the selected types of transactional data that is used in the generation of the analytical data. Cullen teaches in col. 93, lines 55 – 67, and col. 94, lines 1 – 3, once the requisite transactional data is identified and retrieved, the analytical data is generated from one or more components of the transactional data (e.g., bid data, project tracking parameters and/or project performance data) (step 4920). In generating the analytical data, various mathematical and statistical functions may be utilized to produce a wide variety of information requested by the user. The analytical data can be generated from transactional data related to a single project, multiple projects, multiple vendors or multiple buyers, and it can be presented to the user in a variety of reporting views. 
For example, exemplary reporting views include summary views, aggregate views, estimation views, statistical views, project performance views, or any combination thereof. The analytical data may be graphically displayed to assist the user in analyzing projects or industry trends. While Cullen teaches tracking projects, collaboration, pre-bid activity, and an interface for vendors, buyers and contractors, Cullen does not explicitly teach an ensemble of machine learning models having a tiered hierarchical configuration. However, Chu teaches the following: execute an ensemble of machine learning models having a tiered hierarchical configuration to: extract data from unstructured documents using natural language processing (NLP); Chu teaches in ¶ 0051 that unstructured data may be analyzed and structured hierarchically; Chu teaches in ¶ 0069 that the machine learning model may include layers; Chu teaches in ¶ 0137 that machine learning models may be an ensemble combination; Chu teaches in ¶ 0138 that different machine-learning models may be used interchangeably to perform tasks such as generating recommendations and performing natural language processing; the tracking comprising updating the one or more graphical user interfaces to automatically display; Chu teaches in ¶ 0036: In some examples, after a model has been built in response to a model-building request, the system may automatically rebuild or retrain the model in response to certain events occurring internally or externally to the system. For example, as new model-building tools are added to the system, the system can automatically rebuild the model to create a newer version of the model using a newer model-building tool. This can help reduce errors in the models by ensuring that the models are up-to-date, as well as improve the performance and efficiency of such models. As another example, the system can automatically rebuild a model in response to other types of software being added to the system or updated in the system.
The system may rebuild the models to, for example, make them compatible with the new or updated software. In some examples, an event external to the system can trigger an automatic rebuild of the model. Examples of an event that is external to the system can include an economic event, regulatory event, legal event, political event, or any combination of these. Chu teaches in ¶ 0110, the computing environments described herein may collect data (e.g., as received from network devices, such as sensors, such as network devices 204-209 in Fig. 2, and client devices or other sources) to be processed as part of a data analytics project, and data may be received in real time as part of a streaming analytics environment (e.g., ESP). Data may be collected using a variety of sources as communicated via different kinds of networks or locally, such as on a real-time streaming basis. For example, network devices may receive data periodically from network device sensors as the sensors continuously sense, monitor and track changes in their environments. Chu teaches in ¶ 0159, The system can create the new version of the project in response to a request from a user or client device. For example, the system can present a user with a graphical user interface (GUI) through which the user can request that the new version of the project be created. The system can receive the request and responsively create the new version of the project. As another example, the system can receive a request in the form of a representational state transfer (REST) command via another type of command. The system can then create the new version of the project in response to the command. 
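The claim element mapped above — "extract data from unstructured documents using natural language processing (NLP)" — covers the kind of field extraction sketched below. This toy version uses regular expressions rather than a trained NLP model, and every field name and the sample document string are hypothetical illustrations, not content from the application or the cited references:

```python
import re

# Hypothetical unstructured bid document.
doc = """Acme Paving LLC proposes resurfacing 12.5 km of Route 9.
Estimated cost: $1,250,000. Certified minority-owned supplier."""

# Illustrative field patterns; a production system would use a
# trained NLP pipeline instead of hand-written regexes.
patterns = {
    "supplier": re.compile(r"^([A-Z][\w&.,' -]+?(?:LLC|Inc\.?|Corp\.?))"),
    "length_km": re.compile(r"([\d.]+)\s*km"),
    "cost_usd": re.compile(r"\$([\d,]+)"),
}

extracted = {}
for field, pat in patterns.items():
    m = pat.search(doc)
    if m:
        extracted[field] = m.group(1)

print(extracted)
```

The output is a structured record (supplier name, project length, cost) pulled from free text — the same unstructured-to-structured step the claims attribute to the NLP component.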
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine a web-enabled computer system and method for producing analytical data for a project bid management system, where multiple projects may be performed by one or more vendors for one or more buyers, of Cullen with models developed, deployed, and managed in an automated manner, where a model building tool can be selected based on the model building tool being compatible with one or more parameters to assist businesses with model building that includes a first model and a second model with ensemble configuration (Chu Spec. ¶ 0004). While Cullen teaches tracking projects, collaboration, pre-bid activity, and an interface for vendors, buyers and contractors, and Chu teaches an ensemble of machine learning models having a tiered hierarchical configuration, neither Cullen nor Chu explicitly teaches a gradient boosting algorithm, an extreme gradient boosting algorithm, and a random forest algorithm. However, Cao teaches the following: generate recommendations using a first tier of the tiered hierarchical configuration, the first tier comprising at least two machine learning models configured to be executed in parallel; Cao teaches in Ensemble Learning Model, ¶ 4, predicting using an ensemble model composed of two layers of prediction models; Cao further teaches in ¶ 5 multiple machine learning algorithms providing predictions in parallel at the first level of the ensemble model, where the at least two machine learning models are the gradient boosting algorithm, extreme gradient boosting algorithm, and random forest algorithm. Cao teaches in Conclusions, ¶ 6: The extreme boosting algorithm package provided in RStudio has built-in parallel computing that makes it significantly faster than other machine learning algorithms in dealing with data sets of similar sizes.
provide the recommendations as inputs to a second tier of the tiered hierarchical configuration, the second tier comprising at least one machine learning model configured to solve a classification problem: Cao teaches in Ensemble Learning Model, ¶ 5, that level-1 model predictions come from three machine learning algorithms: gradient boosting, extreme gradient boosting, and random forest, which are capable of dealing with numerical and text attributes simultaneously with good computing speed. The results from this layer make up the input of the second layer, where the second layer is executed as a neural network algorithm to provide the prediction.

Cao teaches in Machine Learning Feature Selection, ¶ 3, that the authors used Boruta importance analysis to implement feature selection for the following two reasons: theoretically, it exhibits high computational speed when dealing with a large number of variables, and it is useful when dealing with nonlinear variables. The idea of Boruta is to find the variables that carry the most information for making the prediction and rank them. In essence, the Boruta algorithm is an ensemble method in which classification is performed by multiple decision trees voting.

And solve the classification problem, using the second tier, based on the recommendations to provide classifications for the one or more projects: Cao teaches in Machine Learning Feature Selection, ¶ 4, that the result of the Boruta analysis is shown in Table 1 and Fig. 4. Cao further teaches in ¶ 4 that, for the purpose of clearly displaying the figure, each feature has been replaced by "x" plus a number, with the required explanation given in Table 1. The vertical coordinate is the numerical calculation of feature importance.
Because the analysis marked x2 (project year) and x15 (prior month county asphalt volume) as unimportant, they are not used to train the model; all other features were marked as important, including x5 (terrain), x8 (project asphalt quantity), x6 (region number), x13 (number of asphalt plants within 80.5 km of the project), and x10 (project length). Cao further teaches in Machine Learning Feature Selection, ¶ 7, the first 20 most important features selected to train the model. All features not classified as unimportant can be used for model training, but researchers can select a subgroup of important features based on either their knowledge or the model's performance. In this research, subset selection is based on the recommended critical threshold in the Boruta feature selection algorithm, which is 8.25 (Lin et al. 2015). Features with importance measures of 8.25 or lower are not selected by the algorithm.

Cao teaches in Second-Level Model, ¶ 1, that in the second level of ensemble modeling, a neural network is selected to produce the final predictions. Compared with linear regression, the neural network is better at modeling complex nonlinear relations and is more widely applied. The three components of a neural network are the input layer, hidden layers, and output layer. In this research, the input layer is composed of three nodes, which correspond to the results calculated from gradient boosting, extreme gradient boosting, and random forest. The output layer has only one node: the unit price bid prediction. The difficulty is in determining the structure of the hidden layer(s): how many layers there are and how many hidden units there are in each layer. The number of hidden layers is set to one because there are only three input nodes, making simplicity reasonable. The best number of hidden nodes (units) is determined through cross-validation by attempting different numbers of units.
In this research, six attempts are made to train the hidden layer with 1, 3, 5, 7, 9, and 11 units. Through these iterations, 9 turned out to be the optimal number because it gives the neural network the smallest root mean square error (RMSE).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the web-enabled computer system and method of Cullen, which produces analytical data for a project bid management system in which multiple projects may be performed by one or more vendors for one or more buyers, and Chu's models that are developed, deployed, and managed in an automated manner, where a model building tool can be selected based on its compatibility with one or more parameters, with Cao's ensemble machine learning algorithms used to predict bidding prices, to assist businesses in implementing ensemble machine learning models for training and testing project bidding data (Cao, Machine Learning Feature Selection, Page 5).

While Cullen teaches a project risk/failure performance exception report containing analytical data 270 related to the performance of at-risk projects, tracking projects, collaboration, pre-bid activity, and an interface for vendors, buyers, and contractors; Chu teaches ensemble model configuration and layers in machine learning model building; and Cao teaches a gradient boosting algorithm, an extreme gradient boosting algorithm, and a random forest algorithm; neither Cullen, Chu, nor Cao explicitly teaches predicting with a government data set.
However, Whitehead teaches the following: by predicting risk scores of candidate suppliers and of a subset of the candidate suppliers, and by predicting environmental, social, and governance (ESG) scores for the one or more projects, using the extracted data.

Whitehead teaches in ¶ 0006 that, in particular embodiments, by tracking leading indicators and events that tend to trigger the initial thought process around switching jobs, aspects of the present disclosure allow recruiters and internal human resources personnel to access key talent before they are actively job searching and being contacted by other recruiters based on lagging indicators like social media activity (e.g., external or internal key talent). In certain embodiments, the systems and methods discussed herein enable users to set alerts to track candidates so that they may be the first to know when a candidate becomes more likely to engage with a recruiter. As will be understood from discussions herein, the present systems and processes may be used to identify external candidates for a potential job opening or internal employees for retention purposes. As such, systems and processes discussed herein for a candidate (or a like term) are also for an employee within an organization.

Whitehead teaches in ¶ 0016, determining, for each parameter and based on each impact score, at least one most impactful parameter; Whitehead teaches in ¶ 0019, aggregated machine-learned predictions and a talent risk retention score; Whitehead teaches in claim 1, processing the set of candidate criteria to identify a subset of individuals from the plurality of individuals. Whitehead teaches in ¶ 0059 that the one or more machine-learned predictions may be engagement scores, or the like, that quantify a candidate's risk or inclination to respond to a recruitment technique (for example, a recruitment email) or to leave their current role. For example, a machine-learned prediction may be a score between about 1-100.
In various embodiments, the system defines each fk(x) by optimizing a machine learning model through training against a historical dataset where the outcomes are known. Whitehead teaches in ¶ 0062 that category scores may be used to determine how candidates are being impacted by their current environment.

Whitehead teaches in ¶ 0072 that the collected data can include, but is not limited to, company data, role data, and candidate data. The company data can include, but is not limited to: 1) industry; 2) company type, including, but not limited to, public, private, government, and academic; 3) company size, including, but not limited to, revenue and number of employees; 4) company age; and 5) one or more employee brand metrics. The role data can include, but is not limited to: 1) role title; 2) role level, including, but not limited to, experience level and education level; 3) role functions; 4) similar open positions; and 5) open growth opportunities. The candidate data can include, but is not limited to: 1) current tenure; 2) average tenure in previous roles; 3) number of previous roles with the current company; 4) number of previous roles with previous companies; 5) skills; 6) education level; 7) relative pay; 8) previous industries; 9) previous company size, including, but not limited to, revenue and number of employees; 10) previous company age; 11) geography (e.g., candidate location, current company location, previous company location, etc.); and 12) commute time.
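As a purely illustrative aside (not part of the record), the pattern Whitehead describes, training each fk(x) against historical data with known outcomes and emitting a score between about 1 and 100 (¶ 0059), matches a common scoring idiom: fit a classifier and rescale its predicted probability. The dataset and feature meanings below are hypothetical stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic "candidate" features (tenure, role level, etc. are hypothetical
# stand-ins); y is the known historical outcome (e.g., left role or not).
X, y = make_classification(n_samples=500, n_features=6, random_state=2)

# fk(x) optimized by training against the historical dataset.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Rescale the positive-class probability to a 1-100 engagement/risk score.
scores = 1 + 99 * model.predict_proba(X)[:, 1]
print(round(float(scores.mean()), 1))
```

The rescaling step is the only non-standard part: probabilities in [0, 1] map linearly onto the 1-100 range the specification mentions.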
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the web-enabled computer system and method of Cullen, which produces analytical data for a project bid management system in which multiple projects may be performed by one or more vendors for one or more buyers, Chu's models that are developed, deployed, and managed in an automated manner, where a model building tool can be selected based on its compatibility with one or more parameters, and Cao's ensemble machine learning algorithms used to predict bidding prices, with Whitehead's systems that identify candidates who are highly likely to change jobs in order to expand the applicant pool for a given position and/or to increase the number of qualified applicants for the position, to assist businesses with systems and processes used to identify external candidates for a potential job opening (Whitehead Spec. ¶ 0006).

Claims 2, 9, and 16: Cullen, Chu, Cao, and Whitehead teach claims 1, 8, and 15. Chu further teaches the following: wherein the processing device is configured to execute the platform to train the ensemble of machine learning models based on training data that includes at least supplier data and project data. Chu teaches in the Abstract a first and a second machine learning model generated and trained using a training dataset; Chu teaches in ¶ 0137, ensemble model configuration; Chu teaches in ¶ 0113, project data.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the web-enabled computer system and method of Cullen, which produces analytical data for a project bid management system in which multiple projects may be performed by one or more vendors for one or more buyers, Cao's ensemble machine learning algorithms used to predict bidding prices, and Whitehead's systems that identify candidates who are highly likely to change jobs, with Chu's models that are developed, deployed, and managed in an automated manner, where a model building tool can be selected based on its compatibility with one or more parameters, to assist businesses with model building that includes a first model and a second model in an ensemble configuration (Chu Spec. ¶ 0004).

Claims 4, 11, and 18: Cullen, Chu, Cao, and Whitehead teach claims 1, 8, and 15. Cao further teaches the following: wherein the at least two machine learning models of the first tier include a random forest model and an extreme gradient boost model. Cao teaches in Fig. 3, Page 4, a first tier that includes at least two machine learning models.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the web-enabled computer system and method of Cullen, which produces analytical data for a project bid management system in which multiple projects may be performed by one or more vendors for one or more buyers, Chu's models that are developed, deployed, and managed in an automated manner, where a model building tool can be selected based on its compatibility with one or more parameters, and Whitehead's systems that identify candidates who are highly likely to change jobs, with Cao's ensemble machine learning algorithms used to predict bidding prices, to assist businesses in implementing ensemble machine learning models for training and testing project bidding data (Cao, Machine Learning Feature Selection, Page 5).

Claims 5, 12, and 19: Cullen, Chu, Cao, and Whitehead teach claims 1, 8, and 15. Chu further teaches the following: wherein the at least one machine learning model of the second tier is a neural network model. Chu teaches in ¶ 0069 that a model may have layers arranged in a stack; Chu teaches in ¶ 0148 that the neural network can have any number and combination of layers, and each layer can have any number and combination of neurons.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the web-enabled computer system and method of Cullen, which produces analytical data for a project bid management system in which multiple projects may be performed by one or more vendors for one or more buyers, Cao's ensemble machine learning algorithms used to predict bidding prices, and Whitehead's systems that identify candidates who are highly likely to change jobs, with Chu's models that are developed, deployed, and managed in an automated manner, where a model building tool can be selected based on its compatibility with one or more parameters, to assist businesses with model building that includes a first model and a second model in an ensemble configuration (Chu Spec. ¶ 0004).

Claims 7, 14, and 20: Cullen, Chu, Cao, and Whitehead teach claims 1, 8, and 15. Cao further teaches the following: wherein the at least one machine learning model in the second tier receives, as the recommendations, outputs from the at least two machine learning models in the first tier and generates, as the classifications, a final output based on the outputs from the at least two machine learning models. Cao teaches in First-Level Model, Page 6, ¶ 1, three machine learning algorithms developed in the first tier, where each model is capable of running and outputting the regression analysis. Cao teaches in Ensemble Learning Model, ¶ 2, that Dietterich defined ensemble methods as learning algorithms that construct a set of classifiers and then classify new data points by taking a weighted average vote of their predictions (Dietterich 2000).
Two key points are mentioned in this definition: (1) ensemble methods are composed of more than one machine learning algorithm, and (2) the final results are the weighted average of each algorithm's prediction. As discussed for claim 1 above, the prediction results from the first level are input to the second level to produce a final prediction.

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the web-enabled computer system and method of Cullen, which produces analytical data for a project bid management system in which multiple projects may be performed by one or more vendors for one or more buyers, Chu's models that are developed, deployed, and managed in an automated manner, where a model building tool can be selected based on its compatibility with one or more parameters, and Whitehead's systems that identify candidates who are highly likely to change jobs, with Cao's ensemble machine learning algorithms used to predict bidding prices, to assist businesses in implementing ensemble machine learning models for training and testing project bidding data (Cao, Machine Learning Feature Selection, Page 5).

Conclusion

The following prior art, made of record and not relied upon, is considered relevant but not applied:
- Hu, Bob (U.S. Patent No. 10,121,104) discloses generating recommendations from at least two different machine learning models with a multi-prediction-model architecture.
- Faulhaber Jr., Thomas Albert et al. (U.S. Patent No. 10,621,019) discloses a deployment request identifying multiple model data files corresponding to different trained machine learning models.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Frank Alston, whose telephone number is 703-756-4510. The examiner can normally be reached 9:00 AM - 5:00 PM, Monday - Friday. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Beth Boswell, can be reached at (571) 272-6737.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FRANK MAURICE ALSTON/
Examiner, Art Unit 3625
1/24/2026

/BETH V BOSWELL/
Supervisory Patent Examiner, Art Unit 3625

Prosecution Timeline

- Jan 17, 2023: Application Filed
- Dec 17, 2024: Non-Final Rejection — §101, §103
- Mar 21, 2025: Response Filed
- Jun 19, 2025: Final Rejection — §101, §103
- Sep 15, 2025: Interview Requested
- Sep 23, 2025: Examiner Interview Summary
- Sep 23, 2025: Applicant Interview (Telephonic)
- Dec 23, 2025: Request for Continued Examination
- Jan 12, 2026: Response after Non-Final Action
- Jan 24, 2026: Non-Final Rejection — §101, §103 (current)


Prosecution Projections

- Expected OA Rounds: 3-4
- Grant Probability: 0%
- With Interview: 0% (+0.0%)
- Median Time to Grant: 3y 0m
- PTA Risk: High

Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
