Prosecution Insights
Last updated: April 19, 2026
Application No. 18/971,177

COMPUTER SYSTEM AND TASK ASSIGNMENT CONTROL METHOD

Non-Final OA (§101, §102, §103)

Filed: Dec 06, 2024
Examiner: TORRICO-LOPEZ, ALAN
Art Unit: 3625
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Hitachi, Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 28% (At Risk)
OA Rounds: 1-2
To Grant: 3y 10m
With Interview: 66%

Examiner Intelligence

Career Allow Rate: 28% (97 granted / 348 resolved; -24.1% vs TC avg)
Interview Lift: +38.3% (based on resolved cases with interview)
Avg Prosecution: 3y 10m
Currently Pending: 36
Total Applications: 384 (across all art units)

Statute-Specific Performance

§101: 41.2% (+1.2% vs TC avg)
§103: 35.7% (-4.3% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 348 resolved cases.

Office Action

§101 §102 §103
DETAILED ACTION

The following is a first Office action upon examination of application number 18/971,177. Claims 1-15 are pending in the application and have been examined on the merits discussed below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/6/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

(Step 1) Claims 1-8 are directed to a system comprising a processor; thus the system comprises a device or set of devices and is therefore directed to a machine, which is a statutory category of invention. Claims 9-15 are directed to a method; thus these claims are directed to a process, which is one of the statutory categories of invention.
(Step 2A) The claims recite an abstract idea instructing how to generate a task workflow and select a worker to assign a task, which is described by claim limitations reciting: … worker management information for managing the worker, data that is constituted of items indicating identification information of the worker and characteristics of the worker is stored in the worker management information, and … a language processing task where a prompt that is a text describing an instruction content is received and a text which forms a response is outputted is operated as the worker, and to receive a task execution request; to generate, for each of the workers, task execution information relating to a plurality of tasks to be executed until a desired result is obtained; to identify candidate workers based on the worker management information and the task execution information, and to select the worker to which the task is assigned from the candidate workers.

The identified limitations in the claims describing generating a task workflow and selecting a worker to assign a task (i.e., the abstract idea) fall within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas, which covers fundamental economic practices and managing personal behavior, or, alternatively, the "Mental Processes" grouping of abstract ideas, since the identified limitations can be performed by a human, mentally or with pen and paper.

Dependent claims 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, and 15 recite limitations that further narrow the abstract idea (i.e., generating a task workflow and selecting a worker to assign a task); therefore, these claims are also found to recite an abstract idea.
This judicial exception is not integrated into a practical application because additional elements such as the processor; storage device connected to the processor; network interface connected to the processor; computer system is connected to a plurality of worker systems; and the processor is configured in claim 1, and the processor; storage device connected to the processor; network interface connected to the processor; and wherein the computer system is connected to a plurality of worker systems in claim 9, do not add a meaningful limitation to the abstract idea since these elements are only broadly applied to the abstract idea at a high level of generality; thus, none of the recited hardware offers a meaningful limitation beyond generally linking the abstract idea to a particular technological environment, in this case, implementation via a processor/computer.

Additional elements such as the computer system is connected to a plurality of worker systems in each of which a worker that executes a task using a computer resource is operated and in at least one of the worker systems, a large-scale language model that executes a language processing task do not yield an improvement in the functioning of the computer itself, nor do they yield improvements to a technical field or technology; further, these additional elements are recited at a high level of generality and only generally link the abstract idea to a technological environment. Additional elements in the claims such as the computer system holds worker management information do not yield an improvement and only add insignificant extra-solution activity (data storage).

Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
(Step 2B) The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional hardware elements amount to no more than mere instructions to apply the exception using a generic computer component (see Spec. [0018]). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.

Additional elements such as the computer system is connected to a plurality of worker systems in each of which a worker that executes a task using a computer resource is operated and in at least one of the worker systems, a large-scale language model that executes a language processing task do not yield an improvement in the functioning of the computer itself, nor do they yield improvements to a technical field or technology; further, these additional elements only generally link the abstract idea to a technological environment. Additional elements in the claims such as the computer system holds worker management information do not yield an improvement and only add insignificant extra-solution activity (data storage). With respect to data storage limitations, the courts have recognized storing and retrieving information in memory as well-understood, routine, and conventional functions. Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.

In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-6, 8-13 and 15 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 2025/0086024 (Naanaa).

As per claim 1, Naanaa teaches: a computer system comprising: a processor; a storage device connected to the processor; and a network interface connected to the processor, ([0202] In some embodiments, the system 100 includes non-transitory, computer-readable medium comprising computer program instructions tangibly stored on the non-transitory computer-readable medium, wherein the instructions are executable by at least one processor to perform each of the steps described [0218] … computing device 800 may include a storage device 828, an installation device 816, a network interface 818) wherein the computer system is connected to a plurality of worker systems in each of which a worker that executes a task using a computer resource is operated, ([0027] … the system 100 includes a plurality of user agents 102a-n, a core node 104, a first pre-trained large language model (LLM) 110, a second pre-trained LLM 112, and a plurality of worker agents 106a-n.
[0037] The worker agents 106a-n may be AI agents specialized to interface with specific online products or services, serving as dedicated proxies for their respective external platforms. The worker agents 106a-n may be AI agents specialized to solve particular tasks and/or to perform operations on particular types of data. [0039]) the computer system holds worker management information for managing the worker, data that is constituted of items indicating identification information of the worker and characteristics of the worker is stored in the worker management information, and ([0042] … The core node 104 may execute functionality for populating, maintaining, and/or dynamically updating an agent directory that classifies and organizes agents based at least on agent capabilities and specializations. [0058] … The core node 104 may access a dynamically updated directory of worker agents 106 to identify the plurality of candidate worker agents. The directory may classify agents, by way of example, based on agent capabilities, specializations, and other meta-attributes, for instance, based on the online services or products agents cater to and/or interface with. The core node 104 may update the directory to account for new agent enrollments, agent retirements, or capability changes, which may ensure that the discovery process remains current, relevant, and comprehensive). in at least one of the worker systems, a large-scale language model that executes a language processing task where a prompt that is a text describing an instruction content is received and a text which forms a response is outputted is operated as the worker, and ([0125] … receiving a user request (also referred to herein as a query, such as a prompt) from a user (602). For purposes of the following discussion, assume that the user request is received from the user associated with user agent 102a, although the user request may be received from any user. 
The user request may take any of the forms disclosed herein. [0140] … the current user agent receives the following user query: “Place my usual order for coffee”. [0147] … agent delivers this aggregated response back to the current user. This delivery may occur through the user interface generated by the user agent, ensuring that the user receives a cohesive and complete answer to their original request). the processor is configured: to receive a task execution request; ([0125] … receiving a user request (also referred to herein as a query, such as a prompt) from a user (602). For purposes of the following discussion, assume that the user request is received from the user associated with user agent 102a, although the user request may be received from any user. The user request may take any of the forms disclosed herein. [0140] … the current user agent receives the following user query: “Place my usual order for coffee”.) to generate, for each of the workers, task execution information relating to a plurality of tasks to be executed until a desired result is obtained; ([0064] … execution time, resource usage, success rate, and other relevant performance metrics…[0065] …The matching algorithm may incorporate factors such as task suitability, past performance, availability, agent load, and capability specificity to assign AI agents to sub-tasks. Task suitability may be adjusted by a specificity score, which favors agents with narrower, more focused capabilities over those with broader, less specialized ones. The final score for each agent may be calculated using a weighted formula that balances these factors, ensuring that specialized agents are prioritized for tasks closely aligned with their expertise. The algorithm is designed to adapt over time, refining its weighting based on feedback from task outcomes. 
A final score may be the sum of multiplying each of suitability, performance, availability, load, and specificity by a corresponding weight [0106] The core node 104 may compute, for each worker agent W in the plurality of worker agents, a value of a success rate metric for the worker agent W [0113] … computing, by the core node, for each worker agent W in the plurality of worker agents, a value of a drift metric for the worker agent W, the computing comprising computing, for the worker agent W, a relevance score representing a relevance of a plurality of responses provided by the worker agent W to a plurality of user requests; thereby computing a plurality of relevance scores for the plurality of worker agents (506-a); computing the value of the drift metric for the worker agent W based on the relevance score for the worker agent W (506-b); and updating the availability status of the worker agent W to the “unavailable” status if the value of the drift metric for the worker agent W satisfies a predetermined criterion). to identify candidate workers based on the worker management information and the task execution information, and to select the worker to which the task is assigned from the candidate workers ([Abstract] …Based at least on the user request, the plurality of availability statuses, and the plurality of clusters, the core node identifies a subset of the plurality of worker agents that are both available to process the user request and that are suitable for processing the user request. [0065] For each of the plurality of sub-tasks, the user agent identifies a subset of the plurality of candidate worker agents to perform the sub-task, based on the ranking (212-a). A matching algorithm may be employed to assign the best available worker agent to each sub-task. The matching algorithm may incorporate factors such as task suitability, past performance, availability, agent load, and capability specificity to assign AI agents to sub-tasks. 
Task suitability may be adjusted by a specificity score, which favors agents with narrower, more focused capabilities over those with broader, less specialized ones. The final score for each agent may be calculated using a weighted formula that balances these factors, ensuring that specialized agents are prioritized for tasks closely aligned with their expertise. The algorithm is designed to adapt over time, refining its weighting based on feedback from task outcomes. A final score may be the sum of multiplying each of suitability, performance, availability, load, and specificity by a corresponding weight. [0066] For each of the plurality of sub-tasks, the user agent identifies, from the subset of the plurality of candidate worker agents, based on at least one criterion, a best worker agent to perform the sub-task (212-b)).

As per claim 2, Naanaa teaches: the items indicating the characteristics of the worker include at least a cost associated with execution of the task, and ([0103] The core node 104 may compute, for each worker agent W in the plurality of worker agents, a value of an operational cost metric for the worker agent W, based on processing of other user requests by the worker agent W. The core node 104 may record an operational cost of each worker agent 106 on a per-million-token basis. The core node 104 may therefore rank worker agents not solely on performance but also on the cost-effectiveness of operations, providing a balanced rank considering high-performing and cost-efficient agents favorably [0188] Techniques disclosed herein for performing agent-specific ranking (such as those shown in FIG. 4 and described in connection therewith) address the technical problem of ranking worker agents (such as worker agents that employ machine learning models to process user requests) for a specific request by considering qualitative performance metrics.
Ranking techniques disclosed herein may use a multi-factor ranking approach, considering metrics such as throughput, context window, cost per million tokens, declared capabilities, uptime and age in network, and user feedback. the processor is further configured: to calculate the number of times of task execution until a desired result is obtained with respect to the candidate workers; to calculate evaluation indexes of the candidate workers based on the cost and the number of times of task execution; and to select the worker to which the task is assigned from the candidate workers based on the evaluation indexes. (([0064] … execution time, resource usage, success rate, and other relevant performance metrics…[0065] …The matching algorithm may incorporate factors such as task suitability, past performance, availability, agent load, and capability specificity to assign AI agents to sub-tasks. Task suitability may be adjusted by a specificity score, which favors agents with narrower, more focused capabilities over those with broader, less specialized ones. The final score for each agent may be calculated using a weighted formula that balances these factors, ensuring that specialized agents are prioritized for tasks closely aligned with their expertise. The algorithm is designed to adapt over time, refining its weighting based on feedback from task outcomes. 
A final score may be the sum of multiplying each of suitability, performance, availability, load, and specificity by a corresponding weight [0066] For each of the plurality of sub-tasks, the user agent identifies, from the subset of the plurality of candidate worker agents, based on at least one criterion, a best worker agent to perform the sub-task (212-b) [0106] The core node 104 may compute, for each worker agent W in the plurality of worker agents, a value of a success rate metric for the worker agent W … The core node 104 may utilize natural language processing techniques (including, by way of example, sentiment analysis) to analyze user feedback, quantifying satisfaction and success rates. The core node 104 may establish a scoring system that incorporates this [0107] … integrates one or more determined scores for one or more metrics to generate a composite score for each worker agent 106. [0108] The method 400 includes generating, for each of the plurality of worker agents, a corresponding ranking, based on the metrics computed for the plurality of worker agents [0188] … Ranking techniques disclosed herein may use a multi-factor ranking approach, considering metrics such as throughput, context window, cost per million tokens, declared capabilities, uptime and age in network, and user feedback. Ranking techniques disclosed herein … generate a composite score for each agent and a real-time updating mechanism to recalibrate rankings as new data comes in). 
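The weighted selection Naanaa describes in [0065] (a final score computed as the sum of each factor multiplied by a corresponding weight, with the best-scoring candidate assigned the sub-task) can be sketched as follows. The weights, factor names, and candidate values here are hypothetical, chosen only to illustrate the mechanism:

```python
# Sketch of weighted-score worker selection per Naanaa [0065]-[0066].
# All weights and factor values are hypothetical illustrations.

WEIGHTS = {
    "suitability": 0.30,
    "performance": 0.25,
    "availability": 0.15,
    "load": 0.10,         # assumed pre-inverted so that higher is better
    "specificity": 0.20,
}

def final_score(factors: dict[str, float]) -> float:
    """Weighted sum of normalized factor values in [0, 1]."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

def select_worker(candidates: dict[str, dict[str, float]]) -> str:
    """Return the candidate worker agent with the highest final score."""
    return max(candidates, key=lambda w: final_score(candidates[w]))

candidates = {
    "agent_a": {"suitability": 0.9, "performance": 0.7, "availability": 1.0,
                "load": 0.8, "specificity": 0.9},
    "agent_b": {"suitability": 0.6, "performance": 0.9, "availability": 1.0,
                "load": 0.5, "specificity": 0.4},
}
best = select_worker(candidates)  # the more specialized agent_a wins here
```

Naanaa's adaptive refinement of the weights from task outcomes would sit on top of this, adjusting WEIGHTS over time.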
As per claim 3, Naanaa teaches: the task execution request includes a condition relating to a characteristic of the worker to be used, and the processor identifies the candidate worker that satisfies the condition included in the task execution request by referencing the worker management information and the task execution information ([0046] … The core node 104 may optionally include or be in communication with a pre-trained LLM 112 for processing incoming natural language requests, including conversion of incoming requests into one or more executable action items [0061] … receives, from the user agent 102, the user request and/or the embedding of the user request. The core node 104 may, optionally, modify a ranking of one or more candidate worker agents based upon analyzing one or more attributes of the user request [0090] … The worker agent 106 may provide a detailed capability description outlining the types of tasks it can handle [0104] The core node 104 may compute, for each worker agent W in the plurality of worker agents, a value of a declared capabilities metric for the worker agent W, based on declared capabilities of the worker agent W and tasks required to be performed to process the user request. The core node 104 may annotate worker agent profiles with their declared capabilities (e.g., general knowledge, specialized domains such as legal advice, medical information, etc.) and implement a matching algorithm that gives preference to agents whose declared capabilities align closely with the specific needs voiced in a user's query).

As per claim 4, Naanaa teaches: in a case where the processor receives an execution request of a new task after the worker is selected, the processor is configured to assign the task to the selected worker ([0182] … incorporate feedback on worker agent performance and personalize task assignment.
For example, the system 100 may record which worker agents have interacted with any particular user/user agent and the specific tasks or queries those worker agents have handled. This tracking allows for a comprehensive history of user-worker agent interactions. The system 100 may store user feedback on the performance of worker agents, with particular emphasis on highlighting those worker agents that receive positive reviews. This integration enables the system 100 to maintain a record of user satisfaction with specific worker agents. [0183] …For each user, the system 100 may create a profile that encapsulates their preferences, frequently asked queries, and highly rated worker agents. This profiling enables the system 100 to maintain a personalized record of user interactions and preferences, facilitating more tailored responses in future interactions. [0184] The system 100 may utilize the feedback scores to dynamically assign worker agents to future tasks. This allocation may be based on the worker agents' demonstrated proficiency, as indicated by their feedback scores.). 
As per claim 5, Naanaa teaches: the computer system is configured to hold execution log information that stores an execution log of the task executed by the worker; in the execution log information, the execution logs of a plurality of tasks that are executed until a desired result is obtained are managed in an associated manner with each other, and the processor generates the task execution information for the respective workers by referencing the execution log information ([0060] … The ranking mechanism may evaluate one or more factors to rank the candidate worker agents 106, including, but not limited to, historical performance data of a candidate worker agent 106, feedback or ratings (if available) from previous interactions, specificity and relevance of the capabilities of a candidate worker agent 106 that relate to the current user request or sub-task, latency or response time statistics from past engagements, and a load and/or current activity level of the candidate worker agent 106. [0106] … maintain a log of user feedback and success rates for completed tasks. The core node 104 may utilize natural language processing techniques (including, by way of example, sentiment analysis) to analyze user feedback, quantifying satisfaction and success rates. [0183] The system 100 may assign scores to worker agents based on the cumulative feedback received from users. These scores serve as indicators of the worker agents' performance and their suitability for future tasks. [0185] The system 100 may continuously update the user profiles based on new interactions). 
As per claim 6, Naanaa teaches: in the plurality of tasks that are executed until the predetermined result is obtained, the execution log of the task that is executed last contains a user evaluation with respect to the worker to which the task is assigned, the processor is configured to correct the evaluation index based on the user evaluation ([0044] … The core node 104 may execute functionality for ranking the one or more worker agents 106a-n based upon criterion such as performance, user feedback, and latency statistics, which may ensure that the best-suited agent(s) executes on a particular request [0184] The system 100 may utilize the feedback scores to dynamically assign worker agents to future tasks. This allocation may be based on the worker agents' demonstrated proficiency, as indicated by their feedback scores. By doing so, the system 100 aims to match users with the most suitable worker agents for their specific needs. [0185] The system 100 may continuously update the user profiles based on new interactions. This ongoing refinement of the agent assignment process allows the system 100 to adapt to changing user preferences and worker agent performance over time, ensuring that task assignments remain optimized. By implementing these features, the system 100 may create a more efficient and personalized user experience. In particular, it may leverage historical interaction data and user feedback to improve the matching of worker agents to tasks, potentially leading to higher user satisfaction and more effective task completion. [0188] … Ranking techniques disclosed herein may use a multi-factor ranking approach, considering metrics such as throughput, context window, cost per million tokens, declared capabilities, uptime and age in network, and user feedback. Ranking techniques disclosed herein may also … generate a composite score for each agent and a real-time updating mechanism to recalibrate rankings as new data comes in). 
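The recalibration cited against claim 6 (Naanaa [0184]-[0185]: scores updated as new user feedback arrives so that future assignments track demonstrated proficiency) could be implemented many ways; one minimal sketch, assuming an exponential moving average with a hypothetical smoothing factor, is:

```python
# Hypothetical sketch of feedback-based correction of a worker's evaluation
# index, in the spirit of Naanaa [0184]-[0185]. The smoothing factor and
# score scale are assumptions, not taken from the reference.

ALPHA = 0.3  # weight given to each new feedback sample (assumed)

def corrected_index(base_index: float, feedback: list[float]) -> float:
    """Blend a base evaluation index with user feedback scores (each in
    [0, 1]) via an exponential moving average, oldest feedback first."""
    index = base_index
    for score in feedback:
        index = (1 - ALPHA) * index + ALPHA * score
    return index

# A worker starting at 0.5 drifts upward under consistently positive reviews,
# so later task assignments would favor it over an uncorrected peer.
improved = corrected_index(0.5, [1.0, 1.0, 1.0])
```

The claim's "last-executed task contains a user evaluation" would correspond to feeding only the final task's score into this correction.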
As per claim 8, Naanaa teaches: the task execution request includes, as the condition, at least one of a size of data in the task, presence or non-presence of use of external data in the worker, a type of the external data, and processing performance of the worker ([0044] … ranking the one or more worker agents 106a-n based upon criterion such as performance [0052] … ranking of a subset of worker agents 106 based upon attributes of the user request [0060] … The ranking mechanism may evaluate one or more factors to rank the candidate worker agents 106, including, but not limited to, historical performance data of a candidate worker agent 106, feedback or ratings (if available) from previous interactions).

As per claim 9, this claim recites limitations substantially similar to those addressed by the rejection of claim 1, above; therefore, the same rejection applies.
As per claim 10, this claim recites limitations substantially similar to those addressed by the rejection of claim 2, above; therefore, the same rejection applies.
As per claim 11, this claim recites limitations substantially similar to those addressed by the rejection of claim 3, above; therefore, the same rejection applies.
As per claim 12, this claim recites limitations substantially similar to those addressed by the rejection of claim 5, above; therefore, the same rejection applies.
As per claim 13, this claim recites limitations substantially similar to those addressed by the rejection of claim 6, above; therefore, the same rejection applies.
As per claim 15, this claim recites limitations substantially similar to those addressed by the rejection of claim 8, above; therefore, the same rejection applies.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 7 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2025/0086024 (Naanaa) in view of US 2005/0177549 (Hornick).

As per claim 7, Naanaa teaches: to decide an order of processing performance of the respective workers with respect to the task that is requested to be executed by referencing the worker management information, ([0044] The core node 104 may optionally execute a ranking engine for ranking one or more worker agents 106 in the system 100. The core node 104 may execute functionality for ranking one or more worker agents 106a-n. The core node 104 may execute functionality for ranking the one or more worker agents 106a-n based upon criterion such as performance, user feedback, and latency statistics, which may ensure that the best-suited agent(s) executes on a particular request, thereby improving service quality. [0060] The method 200 includes ranking, by the core node, the plurality of candidate worker agents (206). The core node 104 may execute a ranking mechanism to rank the plurality of candidate worker agents 106.
The ranking mechanism may evaluate one or more factors to rank the candidate worker agents 106, including, but not limited to, historical performance data of a candidate worker agent 106, feedback or ratings (if available) from previous interactions, specificity and relevance of the capabilities of a candidate worker agent 106 that relate to the current user request or sub-task, latency or response time statistics from past engagements, and a load and/or current activity level of the candidate worker agent 106). Although not explicitly taught by Naanaa, Hornick teaches: to calculate the number of times of task execution by executing an arithmetic operation using the order ([0066] …To do this, the local data mining agent computes estimates of times to complete the data mining processing task based on the amount of processing that must be performed to complete the data mining processing task, the speed of the other computer systems, and estimates of CPU utilization of the other computer systems. [0067] … The other data mining agents would then compute estimates of times to complete the data mining processing task based on the amount of processing that must be performed to complete the data mining processing task, the speed of the other computer systems, and estimates of CPU utilization of the other computer systems. The responses to the queries would include these completion time estimates [0068] …The local data mining agent then compares the estimated completion time for the local computer system with the estimated completion times for the other computer systems to determine whether another computer system could complete the data mining processing task faster than the local computer system). 
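Hornick's completion-time comparison ([0066]-[0068]) estimates, for each computer system, a time to finish the task from the amount of processing remaining, the system's speed, and its CPU utilization, then asks whether another system would finish faster. A minimal sketch, with the exact formula and field names assumed for illustration:

```python
# Sketch of Hornick-style completion-time estimation ([0066]-[0068]).
# The formula (work / speed scaled by idle CPU fraction) is an assumption;
# Hornick describes the inputs but not a precise equation.

def estimated_completion_time(work_units: float, speed: float,
                              cpu_util: float) -> float:
    """Estimate seconds to finish: remaining work divided by effective
    throughput, where throughput is speed scaled by the idle CPU fraction."""
    idle = max(1.0 - cpu_util, 1e-6)  # guard against a fully loaded system
    return work_units / (speed * idle)

def fastest_system(work_units: float,
                   systems: dict[str, tuple[float, float]]) -> str:
    """systems maps name -> (speed in units/sec, CPU utilization in [0, 1]).
    Returns the system with the lowest estimated completion time."""
    return min(systems,
               key=lambda s: estimated_completion_time(work_units, *systems[s]))

# A heavily loaded local machine loses to a slower but mostly idle remote one.
systems = {"local": (100.0, 0.9), "remote_a": (80.0, 0.2), "remote_b": (120.0, 0.5)}
choice = fastest_system(1000.0, systems)
```

This comparison step (local estimate vs. the other systems' estimates) is the piece the examiner maps to "calculating the number of times of task execution by executing an arithmetic operation using the order."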
It would have been obvious, before the effective filing date of the claimed invention, for one of ordinary skill in the art to have modified the teachings of Naanaa with the aforementioned teachings of Hornick with the motivation of identifying an agent that can complete a task faster (Hornick [0068]). Further, one of ordinary skill in the art would have recognized that applying the teachings of Hornick to the system of Naanaa would have yielded predictable results, and doing so would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow for the estimation of a task completion time.

As per claim 14, this claim recites limitations substantially similar to those addressed by the rejection of claim 7, above; therefore, the same rejection applies.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US 2023/0315522 (Davis) – a system that assigns tasks to nodes/workers and tracks worker task performance.
US 2025/0190449 (Zhang) – a system that selects an agent to complete a task while considering the agent's success rates and costs.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN TORRICO-LOPEZ, whose telephone number is (571) 272-3247. The examiner can normally be reached M-F, 10 AM-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Beth Boswell, can be reached at (571) 272-6737. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALAN TORRICO-LOPEZ/
Primary Examiner, Art Unit 3625

Prosecution Timeline

Dec 06, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586090
ENTERPRISE DATA AGGREGATION AND COLLECTIVE INSIGHTS GENERATION
2y 5m to grant Granted Mar 24, 2026
Patent 12547955
ENTERPRISE RESOURCE PLANNING (ERP) CONFIGURATION MANAGER
2y 5m to grant Granted Feb 10, 2026
Patent 12541681
DISCRETE OPTIMIZATION OF CONFIGURATION ATTRIBUTES
2y 5m to grant Granted Feb 03, 2026
Patent 12518291
PREEMPTIVE PICKING OF ITEMS BY AN ONLINE CONCIERGE SYSTEM BASED ON PREDICTIVE MACHINE LEARNING MODEL
2y 5m to grant Granted Jan 06, 2026
Patent 12511603
SYSTEM AND METHOD FOR PROFILE MATCHING AND GENERATING GAP SCORES AND UPSKILLING RECOMMENDATIONS
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
28%
Grant Probability
66%
With Interview (+38.3%)
3y 10m
Median Time to Grant
Low
PTA Risk
Based on 348 resolved cases by this examiner. Grant probability derived from career allow rate.
