DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/24/2025 has been entered.
Response to Amendment
The amendments filed on 11/24/2025 have been entered. Claims 1, 8, and 15 have been amended. Claims 1-20 are pending in the application.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. (hereinafter Huang), US 2020/0105255 A1, in view of Khedr et al. (hereinafter Khedr), “Enhancing the e-learning system based on a novel tasks’ classification load-balancing algorithm” (2021).
Regarding independent claim 1, Huang teaches a computer-implemented method comprising ([0051] "tangibly embodied as hardware processors"; [0065] FIG. 8 flow diagram 800 for a process for next action prediction in a conversation system; [0083] "process 1900... obtains, by a processor"): enriching received information based on identified attributes ([0064] PSEUDOCODE: "extract the recognized user inputs and external context information... record the user input and external context information into the edge"; [0067] identifying context information: "1. Browsing History 2. Time 3. Location"; thus Huang teaches obtaining a user message as input (received information) and extracting recognized user inputs and external context information such as time, location, and browsing history (identified attributes) to add to and update the edges of a dialog graph (enriching the received information)), wherein the received information comprises a computing task ([0064]-[0065] teach receiving user messages as input to be processed); and dynamically generating at least one recommendation that satisfies a goal based, at least in part, on the enriched information ([0002]-[0003], [0064]-[0067] suggest detecting unrecognized user input and predicting a "next action" by processing the input and external context information based on a pre-trained graph model (dynamically generating at least one recommendation that satisfies a goal) and generating an optimal guiding conversation response for the system to output based on dialog nodes in the dialog graph and historical paths (based, at least in part, on the enriched information)).
Huang does not expressly teach wherein the goal comprises, at least, a reduction of processing time for the computing task; and executing the at least one dynamically generated recommendation that satisfies the goal, wherein the executing comprises altering one or more computing resource settings of a system according to the at least one dynamically generated recommendation to reduce the processing time for execution of the computing task via the system.
However, Khedr teaches wherein a goal comprises, at least, a reduction of processing time for a computing task (pages 2, 5 suggest that load balancing algorithms target system optimization wherein the primary goal comprises a reduction of processing time (wherein a goal comprises at least a reduction of processing time), wherein load balancing is defined as targeting the enhancement of tasks’ “response time” and maintaining the “lowest response time” possible (for a computing task)); and executing at least one dynamically generated recommendation that satisfies the goal (pages 5-6, 10, and 22-23 suggest a “Task Allocator” phase which assigns users’ requests to the most suitable cloud server node (executing at least one dynamically generated recommendation) to achieve the lowest response time (that satisfies the goal)), wherein the executing comprises altering one or more computing resource settings of a system according to the at least one dynamically generated recommendation to reduce the processing time for execution of the computing task via the system (pages 5-6, 10, and 22-23 suggest a “Task Allocator” phase which assigns users’ requests (according to the at least one dynamically generated recommendation) to the most suitable cloud server node (wherein the executing comprises altering one or more computing resource settings of a system) to achieve the lowest response time based on CPU utilization, memory usage, and bandwidth (to reduce the processing time for execution of the computing task via the system)).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the conversational next-action prediction system of Huang with the task classification and load-balancing resource allocation techniques of Khedr to teach wherein the goal comprises, at least, a reduction of processing time for the computing task; and executing the at least one dynamically generated recommendation that satisfies the goal, wherein the executing comprises altering one or more computing resource settings of a system according to the at least one dynamically generated recommendation to reduce the processing time for execution of the computing task via the system. This modification would have been motivated by the desire to optimize the backend computing resources used by the conversational system of Huang to ensure that the generated actions, tasks, and recommendations are assigned and executed with minimal processing and response times, thereby providing higher system throughput and preventing bottlenecks during high user traffic (Khedr page 2, page 5).
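For illustration only, the load-balancing allocation Khedr is cited for (assigning each request to the server node expected to yield the lowest response time) can be sketched as follows. The node metrics, scoring weights, and field names here are assumptions for illustration, not values taken from Khedr or Huang:

```python
# Illustrative sketch (not from the cited references): assign an incoming
# task to the server node with the most headroom, based on current
# CPU, memory, and bandwidth utilization. Weights are assumed values.

def estimated_load(node):
    """Weighted utilization score; a lower score means more headroom."""
    return 0.5 * node["cpu_util"] + 0.3 * node["mem_util"] + 0.2 * node["bw_util"]

def allocate_task(task, nodes):
    """Assign the task to the least-loaded node (the 'recommendation'),
    updating that node's state to reflect the new assignment."""
    best = min(nodes, key=estimated_load)
    best["queue"].append(task)            # execute the recommendation
    best["cpu_util"] += task["cpu_cost"]  # node settings altered for the new load
    return best["name"]

nodes = [
    {"name": "node-a", "cpu_util": 0.7, "mem_util": 0.5, "bw_util": 0.4, "queue": []},
    {"name": "node-b", "cpu_util": 0.2, "mem_util": 0.3, "bw_util": 0.1, "queue": []},
]
print(allocate_task({"id": 1, "cpu_cost": 0.1}, nodes))  # -> node-b
```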
Regarding dependent claim 2, Huang, in view of Khedr, teach the computer-implemented method of claim 1, wherein enriching received information based on identified attributes (see Huang [0064], wherein the dialog-graph-creation PSEUDOCODE suggests extracting and recognizing user inputs based on conversation logs) comprises: identifying key phrases, conversational attributes (see Huang [0064]-[0065], wherein conversation logs comprise a user message (identifying key phrases) that is obtained from a user (conversational attributes)), and user attributes associated with a case (see Huang [0066]-[0067], a dialog graph for a user associated with user context information such as user attributes of time, location, and browsing history).
Regarding dependent claim 3, Huang, in view of Khedr, teach the computer-implemented method of claim 1, wherein dynamically generating a recommendation that satisfies a goal based, at least in part on the enriched information (see Huang [0002]-[0003], [0064]-[0067] suggest detecting unrecognized user input and generating an optimal guiding conversation response predicting a "next action" based at least in part on dialog graph and historical paths) comprises: selecting a maximal subset of instances of historical data that, in aggregate, achieves a threshold level for satisfaction of the goal (see Huang [0068] and PSEUDOCODE uses an "argmax" probability function on historical path-based chat log context information to maximize the prediction probability threshold, outputting the Optimal_X_{i+1} next action to suggest generating a next action by selecting a maximal subset of instances of historical data that, in aggregate, achieves a threshold level for satisfaction of the goal).
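For illustration only, the cited "argmax" selection over historical path data can be sketched as follows; the action names and log structure here are assumptions for illustration, not Huang's actual model:

```python
# Illustrative sketch (not Huang's actual algorithm): select the next action
# by taking the argmax over empirical transition counts from historical logs.
from collections import Counter

def next_best_action(history, current_node):
    """Return the action most often taken from current_node in the logs,
    i.e. the argmax over the empirical transition probabilities."""
    follow_ups = Counter(nxt for cur, nxt in history if cur == current_node)
    if not follow_ups:
        return None
    return follow_ups.most_common(1)[0][0]

history = [("greet", "ask_issue"), ("greet", "ask_issue"), ("greet", "say_bye")]
print(next_best_action(history, "greet"))  # -> ask_issue
```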
Regarding dependent claim 4, Huang, in view of Khedr, teach the computer-implemented method of claim 3, further comprising: building a machine learning model for a next best action recommendation by extracting conversational and process-aware attributes (see Huang [0002]-[0003], [0061]-[0066], which suggest building a graph-based learning model that evaluates dialog node attributes, transition logic, sequential chat logs, and user context information (process-aware attributes) to predict next actions).
Regarding dependent claim 5, Huang, in view of Khedr, teach the computer-implemented method of claim 4, further comprising: selecting a maximal subset of next best action recommendations using the machine learning model that achieves a threshold level for the goal (see Huang [0065]-[0068] graph model processes historical paths and external context info via an argmax probability formula to isolate the optimal action prediction (maximal subset)).
Regarding dependent claim 6, Huang, in view of Khedr, teach the computer-implemented method of claim 5, further comprising: in response to receiving subsequent information, enriching the received information based on respective identified attributes (see Huang [0063] Fig. 7 showing consecutive user turns "user msg 1", "user msg 2", "user msg 3"; [0064] pseudocode loop "for each user input pair p in conversation_logs" to "record the user input and external context information into the edge"; thus Huang teaches a continuous conversational flow where, in response to receiving subsequent user messages (subsequent information), the system iteratively extracts and adds the newly received inputs and respective external context attributes to the dialog graph edge logs (enriching the received information based on respective identified attributes)); and expressing a dynamic comprehensive satisfaction goal as a function of an aggregate of received elemental goals of the dynamic comprehensive satisfaction goal (see Khedr pages 12-13, Formula 22: TEPi = ((CTHi/TTH x 100) x WCTH + (CRTi/TRT x 100) x WCRT + CPUi x WCPU + CMUi x WCMU + (CBUi/TBU x 100) x WCBU + (CLi/TL x 100) x WCL + CERi x WCER) / 7, which teaches calculating an Estimated Utilization (TEP) or Estimated Performance (EP) (dynamic comprehensive satisfaction goal) as an aggregate mathematical function of individual elemental performance metrics/goals, including CPU utilization, memory usage, bandwidth, latency, throughput, and error rate).
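For illustration only, the cited weighted-aggregate style of composite goal (in the spirit of Khedr's Formula 22) can be sketched as follows; the metric names, weights, and normalization here are assumptions for illustration, not values from the reference:

```python
# Illustrative sketch (not Khedr's exact formula): aggregate weighted
# elemental performance metrics into one composite score. Metric names
# and weights are assumed values for illustration.

def composite_score(metrics, weights):
    """Average of weighted elemental metrics (each assumed normalized 0-100)."""
    assert metrics.keys() == weights.keys()
    return sum(metrics[k] * weights[k] for k in metrics) / len(metrics)

metrics = {"cpu": 40.0, "mem": 60.0, "bandwidth": 20.0, "latency": 30.0}
weights = {"cpu": 1.0, "mem": 0.8, "bandwidth": 0.6, "latency": 1.2}
print(round(composite_score(metrics, weights), 1))  # -> 34.0
```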
Regarding dependent claim 7, Huang, in view of Khedr, teach the computer-implemented method of claim 3, wherein the goal is a function of user specified weighted federated recommendations from multiple goal-optimized recommendation models (see Khedr pages 12-13, Formula 22: "each parameter has its associated weights... which contribute to estimating the server's performance", and Formula 25: EPj = (WPj + EAPj)/2, which teach that the comprehensive allocation goal (Estimated Performance EP) is a function of aggregating metrics using specific weights (e.g., WCTH, WCRT, WCPU, WCMU, WCBU, WCL, WCER) from multiple distinct goal-optimized recommendation models (e.g., federating outputs from the "Task Classifier" phase leveraging k-means clustering and ID3 trees, and the "Task Allocator" phase calculating Working Performance WP and Estimated Allocation Performance EAP). While Khedr discloses assigning these weights to reflect the parameters' importance, it would have been obvious to a person of ordinary skill in the art to configure the system to allow a user or administrator to specify these weights (user specified) to flexibly tailor the load-balancing system according to custom performance priorities).
Regarding claims 8-14, these are computer program product claims that are substantially the same as the computer-implemented method of claims 1-7, respectively. Thus, claims 8-14 are rejected for the same reasons as claims 1-7. In addition, Huang teaches a computer program product comprising: one or more computer readable storage media and program instructions stored on the one or more computer readable storage media, the program instructions comprising: program instructions to ([0088] teaching that "aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon" and [0089] describing examples of a "computer readable storage medium" such as a hard disk, RAM, and ROM).
Regarding claims 15-20, these are computer system claims that are substantially similar to the computer-implemented method of claims 1-6. Thus, claims 15-20 are rejected for similar reasons as claims 1-6. In addition, Huang teaches a computer system comprising: one or more computer processors; one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors, the program instructions comprising: program instructions to ([0051] teaching functions "typically performed by the processing system... which can be tangibly embodied as hardware processors and with modules of program code"; [0058] describing a hardware system having a "central processing unit 410" (one or more computer processors) along with "Random Access Memory (RAM) 414, Read Only Memory (ROM) 416" and "disk storage units 420" (one or more computer readable storage media); and [0093] teaching "computer program instructions may be provided to a processor... such that the instructions, which execute via the processor... create means for implementing the functions/acts").
Response to Arguments
Applicant’s arguments and remarks filed 11/24/2025 traversing the 35 U.S.C. 101 rejections set forth in the Office Action dated 10/1/2025 are persuasive. Consequently, the 35 U.S.C. 101 rejections set forth in the Office Action dated 10/1/2025 are hereby withdrawn. However, upon further search and reconsideration, the claimed invention is not patentable over the prior art, as detailed in the 35 U.S.C. 103 rejections above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUANG FU CHEN whose telephone number is (571) 272-1393. The examiner can normally be reached M-F 9:00 AM - 5:30 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch can be reached on (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KC CHEN/Primary Patent Examiner, Art Unit 2143