DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on October 30, 2025 has been entered.
In response to Applicant’s claims filed on October 30, 2025, claims 1-17 are now pending for examination in the application.
Response to Arguments
This Office action is in response to the amendment filed October 30, 2025. In this action, claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ben-Noon et al. (US Pub. No. 20240039918) in view of Das et al. (US Pub. No. 20220398598). The Das et al. reference has been added to address the amendment reciting receiving, by the generative AI model, on-screen control of the user device, wherein, upon receiving the on-screen control, the generative AI model performs an operation corresponding to the rendered instructions.
Applicant’s arguments:
In regards to claim 1 on page 9, applicant argues “Accordingly, the features recited in amended claim 1 cannot be considered as mental steps. For example, the amended features of claim 1 reciting "processing in real-time, by the generative AI model, the user query to determine a type of the user query based on the activity, the real-time processing including processing screen activities performed at the user device; and in response to the processing of the user query: rendering instructions on a screen of the user device; and receiving, by the generative AI model, on-screen control of the user device, wherein, upon receiving the on-screen control, the generative AI model performs an operation corresponding to the rendered instructions" cannot be practically performed in the human mind,” as alleged.
Examiner’s Reply:
The claims include a step of determining a type of query that can practically be performed in the human mind. Such determining steps recite a mental process capable of being performed in the human mind using data, with the computer serving merely as a generic tool.
Applicant’s arguments:
In regards to claim 1 on page 12, applicant argues “Accordingly, the claims pass the practical application test. The claims as a whole are specifically tailored to improve the efficiency of the computing system. As opposed to merely using a computer as a generic tool to perform an abstract idea, the claims, instead, are directed to a practical application that is expressly limited to providing effective assistance to users using generative AI models including limitations that tie the purported abstract idea to the practical application of improving the system,” as alleged.
Examiner’s Reply:
Providing troubleshooting instructions to a user in response to a query does not improve the functioning of a computer.
When a claim limitation, under its broadest reasonable interpretation, covers a commercial interaction or a mental process (e.g., assisting a user with a help query), it falls within the “Mental Processes” grouping of abstract ideas set forth in the 2019 PEG. Accordingly, the claim recites an abstract idea. The examiner notes that the computer as recited in the claims is being used to provide answers (i.e., used as a generic tool).
Applicant’s arguments:
In regards to claim 1 on page 13, applicant argues “Because the claims recite technological solutions to technological problems, the claims are not directed to an abstract idea under Step 2A of the two-part test for subject matter eligibility, and in any case amounts to significantly more than an abstract idea under Step 2B of the two-part test. Independent claim 1 includes additional elements that are sufficient to amount to significantly more than the alleged judicial exception. For example, amended independent claim 1 recites,” as alleged.
Examiner’s Reply:
Troubleshooting guidance is well-understood, routine, and conventional. Receiving queries and transmitting solutions are examples of insignificant extra-solution activity.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of these findings is provided below, in accordance with the "2019 Revised Patent Subject Matter Eligibility Guidance" (published on 1/7/2019 in Fed. Register, Vol. 84, No. 4 at pgs. 50-57, hereinafter referred to as the "2019 PEG").
Step 1. In accordance with Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is first noted that the claims are directed to a method and a system, each of which falls within one of the eligible categories of subject matter, and therefore satisfy Step 1.
Step 2A. In accordance with Step 2A, prong one of the 2019 PEG, it is noted that the independent claims recite an abstract idea falling within the “Mental Processes” grouping of abstract ideas enumerated in the 2019 PEG. Examiner is of the position that independent claims 1, 9, and 17 are directed towards the Mental Processes grouping of abstract ideas.
Independent claims 1, 9, and 17 recite the following limitations directed towards a mental process:
processing in real-time, by the generative AI model, the user query to determine a type of the user query based on the activity, the real-time processing including processing screen activities performed at the user device (This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “by the generative AI model,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “by the generative AI model” language, the claim encompasses a user thinking about a query and an activity to determine a type of the query. The mere nominal recitation of a generic AI model does not take the claim limitation out of the mental processes grouping. Thus, the claim recites a mental process.); and
Step 2A. In accordance with Step 2A, prong two of the 2019 PEG, the judicial exception is not integrated into a practical application because of the following recitations in claims 1, 9, and 17:
receiving, from a user device associated with the user, by the generative AI model, a user query corresponding to an activity, wherein:
the user query comprises one or more multi-modal inputs (recites insignificant extra-solution activity that amounts to mere data gathering);
the one or more multi-modal inputs comprise a live screen activity (recites insignificant extra-solution activity that amounts to mere data gathering); and
the generative AI model is pretrained based on a set of predefined policies associated with an entity (recites insignificant extra-solution activity that amounts to mere data gathering);
processing in real-time, by the generative AI model, the user query to determine a type of the user query based on the activity (the generative AI model merely automates the claimed steps and is no more than mere instructions to apply the exception using generic computer components); and
in response to the processing of the user query:
rendering instructions on a screen of the user device (recites insignificant extra-solution activity that amounts to rendering data); and
receiving, by the generative AI model, on-screen control of the user device, wherein, upon receiving the on-screen control, the generative AI model performs an operation corresponding to the rendered instructions (recites insignificant extra-solution activity that amounts to mere data gathering).
The claim as a whole merely describes how to generally “apply” the exception in a computer environment. Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.
Step 2B. Similar to the analysis under 2A Prong Two, the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Because the additional elements of the independent claims amount to insignificant extra solution activity and/or mere instructions, the additional elements do not add significantly more to the judicial exception such that the independent claims as a whole would be patent eligible.
Therefore, independent claims 1, 9, and 17 are rejected under 35 U.S.C. 101.
With respect to claim(s) 2 and 10:
Step 2A, prong one of the 2019 PEG:
Examiner is of the position that the dependent claim is directed toward additional elements rather than a further abstract idea.
Step 2A Prong Two Analysis:
wherein the one or more multi-modal inputs further comprise at least one of a live screen activity display, a text, an audio, a video, or an image (recites insignificant extra-solution activity that amounts to mere data gathering).
Step 2B Analysis:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The specification does not provide any indication that the receiving of the multi-modal input, and its contents, was done by anything other than a generic, off-the-shelf computer component. Additionally, the Symantec, TLI, and OIP Techs court decisions cited in MPEP 2106.05(d)(II) indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). Accordingly, a conclusion that this limitation is a well-understood, routine, and conventional activity is supported. The claim is not patent eligible.
With respect to claim(s) 3 and 11:
Step 2A, prong one of the 2019 PEG:
Providing assistance to operate at least one of a cloud storage or an application owned by the entity (The limitation recites a mental process of observation and/or evaluation capable of being performed by the human mind by providing assistance).
Step 2A Prong Two Analysis:
rendering the instruction to the user (recites insignificant extra-solution activity for rendering data).
Step 2B Analysis:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The specification does not provide any indication that the claimed steps were performed by anything other than a generic, off-the-shelf computer component. Additionally, the Symantec, TLI, and OIP Techs court decisions cited in MPEP 2106.05(d)(II) indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). Accordingly, a conclusion that this limitation is a well-understood, routine, and conventional activity is supported. The claim is not patent eligible.
With respect to claim(s) 4 and 12:
Step 2A, prong one of the 2019 PEG:
processing, by the generative AI model, the user selection to determine an accuracy of the user selection, wherein the accuracy of the user selection is determined based on a pre-defined accuracy threshold (The limitation recites a mental process of observation and/or evaluation capable of being performed by the human mind by determining accuracy);
dynamically generating, by the generative AI model, a subsequent instruction based on the accuracy of the user selection (The limitation recites a mental process of observation and/or evaluation capable of being performed by the human mind by generating an instruction).
Step 2A Prong Two Analysis:
receiving, by the generative AI model, a user selection corresponding to the instruction (recites insignificant extra-solution activity that amounts to mere data gathering);
rendering the subsequent instruction to the user (recites insignificant extra-solution activity for rendering data).
Step 2B Analysis:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The specification does not provide any indication that the claimed steps were performed by anything other than a generic, off-the-shelf computer component. Additionally, the Symantec, TLI, and OIP Techs court decisions cited in MPEP 2106.05(d)(II) indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). Accordingly, a conclusion that this limitation is a well-understood, routine, and conventional activity is supported. The claim is not patent eligible.
With respect to claim(s) 5 and 13:
Step 2A, prong one of the 2019 PEG:
wherein the subsequent instruction is one of a subsequent action or a corrective action (The limitation recites a mental process of observation and/or evaluation capable of being performed by the human mind by generating an instruction).
Step 2A Prong Two Analysis:
This judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application.
Step 2B Analysis:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
With respect to claim(s) 6 and 14:
Step 2A, prong one of the 2019 PEG:
wherein the subsequent instruction is the subsequent action when the accuracy of the user selection is above the pre-defined accuracy threshold, and wherein the subsequent instruction is the corrective action when the accuracy of the user selection is below the pre-defined accuracy threshold (The limitation recites a mental process of observation and/or evaluation capable of being performed by the human mind by generating an instruction).
Step 2A Prong Two Analysis:
This judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application.
Step 2B Analysis:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
With respect to claim(s) 7 and 15:
Step 2A, prong one of the 2019 PEG:
iteratively generating, by the generative AI model, the subsequent instruction based on the accuracy of the user selection received corresponding to a previous instruction until the user query is resolved (The limitation recites a mental process of observation and/or evaluation capable of being performed by the human mind by generating an instruction).
Step 2A Prong Two Analysis:
This judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application.
Step 2B Analysis:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
With respect to claim(s) 8 and 16:
Step 2A, prong one of the 2019 PEG:
wherein each of the instruction and the subsequent instruction is at least one multi-modal instruction, and wherein the at least one multi-modal instruction comprises an on-screen control, an on-screen guided instruction, a textual instruction, and an audio instruction (The limitation recites a mental process of observation and/or evaluation capable of being performed by the human mind by generating an instruction).
Step 2A Prong Two Analysis:
This judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application.
Step 2B Analysis:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ben-Noon et al. (US Pub. No. 20240039918) in view of Das et al. (US Pub. No. 20220398598).
With respect to claim 1, Ben-Noon et al. teaches a method for providing real-time assistance to a user using a generative Artificial Intelligence (AI) model, the method comprising:
receiving, from a user device associated with the user, by the generative AI model, a user query corresponding to an activity (Paragraph 81 discloses a machine learning algorithm, and/or response to a query submitted to a generative AI), wherein:
the user query comprises one or more multi-modal inputs (Paragraph 81 discloses possible user need for help and prescribing substantive help responsive to the need. The inference may be based on, by way of example, identifying a feature of a frenetic search pattern exhibited by the user, an unusual user activity hiatus, a screen shot at a time of the hiatus);
the one or more multi-modal inputs comprise a live screen activity (Paragraph 81 discloses possible user need for help and prescribing substantive help responsive to the need. The inference may be based on, by way of example, identifying a feature of a frenetic search pattern exhibited by the user, an unusual user activity hiatus, a screen shot at a time of the hiatus); and
processing in real-time, by the generative AI model, the user query to determine a type of the user query based on the activity, the real-time processing including processing screen activities performed at the user device (Paragraph 90 discloses determines a substantially real-time value for a metric of user activity, optionally referred to as an activity temperature, that provides an indication of intensity of user interaction with MyCompany resources while using UE.sub.e). Ben-Noon et al. does not disclose the generative AI model is pretrained based on a set of predefined policies associated with an entity.
However, Das et al. teaches the generative AI model is pretrained based on a set of predefined policies associated with an entity (Paragraph 37 discloses filtering the historical case data to create a subset of the product support cases that are both remotely resolvable and non-critical. A neural network classification model may be trained with pre-labeled data for fields indicative of criticality and remote resolvability);
in response to the processing of the user query:
rendering instructions on a screen of the user device (Paragraph 76 discloses the chatbot may further attempt to help the customer to self-solve the issue by performing a search through a support knowledge article database based on the customer defined issue text and may return relevant links to relevant knowledge documents to the customer); and
receiving, by the generative AI model, on-screen control of the user device, wherein, upon receiving the on-screen control, the generative AI model performs an operation corresponding to the rendered instructions (Paragraph 38 discloses based on the limited set of product issue categories to be supported by the chatbot for each product line at issue, chatbot-specific approved workflows may be created for each issue category within the given product line with input from appropriate members of the product engineering teams).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Ben-Noon et al. with Das et al. This would have facilitated the generative AI by using accuracy to improve its ability to assist a user.
The Ben-Noon et al. reference as modified by Das et al. teaches all the limitations of claim 1. With respect to claim 2, Das et al. teaches the method of claim 1, wherein the one or more multi-modal inputs further comprise at least one of a live screen activity display, a text, an audio, a video, or an image (Paragraph 26 discloses real-time processing of customer live chat text, tokenized text may be converted to numerical vectors with these learned word association models). The motivation to combine the Ben-Noon et al. reference and the Das et al. reference, previously provided in the rejection of independent claim 1 above, is applicable to dependent claim 2.
The Ben-Noon et al. reference as modified by Das et al. teaches all the limitations of claim 1. With respect to claim 3, Das et al. teaches the method of claim 1, wherein providing the assistance to the user in real-time comprises:
providing assistance to operate at least one of a cloud storage or an application owned by the entity (Paragraph 76 discloses the chatbot may further attempt to help the customer to self-solve the issue by performing a search through a support knowledge article database based on the customer defined issue text and may return relevant links to relevant knowledge documents to the customer). The motivation to combine the Ben-Noon et al. reference and the Das et al. reference, previously provided in the rejection of independent claim 1 above, is applicable to dependent claim 3.
The Ben-Noon et al. reference as modified by Das et al. teaches all the limitations of claim 1. With respect to claim 4, Das et al. teaches the method of claim 1, further comprising:
receiving, by the generative AI model, a user selection corresponding to the instruction (Paragraph 39 discloses historical case data preparation and feature selection and labeling logic may be performed as part of the domain data preparation);
processing, by the generative AI model, the user selection to determine an accuracy of the user selection, wherein the accuracy of the user selection is determined based on a pre-defined accuracy threshold (Paragraph 40 discloses In the AI lab 230 stage, various word association models may be evaluated with live test data, and an accuracy level determined for each iteration. In one embodiment, after desired levels of accuracy are achieved, a selected model may be saved and exposed via a REST API for use by the chatbot during run-time);
dynamically generating, by the generative AI model, a subsequent instruction based on the accuracy of the user selection (Paragraph 75 discloses the chatbot may initiate an automated, interactive, troubleshooting conversational dialog with the user guided based on a decision tree (e.g., one of decision trees 355) for the matching product issue category within the product line); and
rendering the subsequent instruction to the user (Paragraph 76 discloses the chatbot may further attempt to help the customer to self-solve the issue by performing a search through a support knowledge article database based on the customer defined issue text and may return relevant links to relevant knowledge documents to the customer). The motivation to combine the Ben-Noon et al. reference and the Das et al. reference, previously provided in the rejection of independent claim 1 above, is applicable to dependent claim 4.
The Ben-Noon et al. reference as modified by Das et al. teaches all the limitations of claim 4. With respect to claim 5, Das et al. teaches the method of claim 4, wherein the subsequent instruction is one of a subsequent action or a corrective action (Paragraph 75 discloses the chatbot may initiate an automated, interactive, troubleshooting conversational dialog with the user guided based on a decision tree (e.g., one of decision trees 355) for the matching product issue category within the product line). The motivation to combine the Ben-Noon et al. reference and the Das et al. reference, previously provided in the rejection of dependent claim 4 above, is applicable to dependent claim 5.
The Ben-Noon et al. reference as modified by Das et al. teaches all the limitations of claim 5. With respect to claim 6, Das et al. teaches the method of claim 5, wherein the subsequent instruction is the subsequent action when the accuracy of the user selection is above the pre-defined accuracy threshold, and wherein the subsequent instruction is the corrective action when the accuracy of the user selection is below the pre-defined accuracy threshold (Paragraph 59 discloses it is determined whether the accuracy of the intermediate classification model exceeds a predetermined or configurable accuracy threshold. If so, processing branches to block 460; otherwise, processing continues with block 470. In one embodiment, accuracy is measured by setting aside 30% of the original dataset for testing and validation. For example, the learned model may be run against the unseen 30% dataset and the predictions may be checked with already existing SME validated labels. Accuracy may be measured based on the number of correct predictions divided by the total number of predictions. In one embodiment, the accuracy threshold is 90% and multiple training iterations are expected to be performed before such accuracy is achieved). The motivation to combine the Ben-Noon et al. reference and the Das et al. reference, previously provided in the rejection of dependent claim 5 above, is applicable to dependent claim 6.
The Ben-Noon et al. reference as modified by Das et al. teaches all the limitations of claim 6. With respect to claim 7, Das et al. teaches the method of claim 6, further comprising:
iteratively generating, by the generative AI model, the subsequent instruction based on the accuracy of the user selection received corresponding to a previous instruction until the user query is resolved (Paragraph 59 discloses it is determined whether the accuracy of the intermediate classification model exceeds a predetermined or configurable accuracy threshold. If so, processing branches to block 460; otherwise, processing continues with block 470. In one embodiment, accuracy is measured by setting aside 30% of the original dataset for testing and validation. For example, the learned model may be run against the unseen 30% dataset and the predictions may be checked with already existing SME validated labels. Accuracy may be measured based on the number of correct predictions divided by the total number of predictions. In one embodiment, the accuracy threshold is 90% and multiple training iterations are expected to be performed before such accuracy is achieved). The motivation to combine the Ben-Noon et al. reference and the Das et al. reference, previously provided in the rejection of dependent claim 6 above, is applicable to dependent claim 7.
The Ben-Noon et al. reference as modified by Das et al. teaches all the limitations of claim 7. With respect to claim 8, Das et al. teaches the method of claim 7, wherein each of the instruction and the subsequent instruction is at least one multi-modal instruction, and wherein the at least one multi-modal instruction comprises an on-screen control, an on-screen guided instruction, a textual instruction, and an audio instruction (Paragraph 15 discloses numerous complex resolution workflows (which may also be referred to herein as troubleshooting workflows and/or decision trees). Furthermore, ideally, the chatbot should gracefully handle scenarios in which the issue the customer is asking about is not currently solvable by the chatbot, the chatbot is unable to identify the intent from the customer text after multiple attempts, the customer indicates his/her issue remains unresolved after the chatbot has exhausted its troubleshooting flows, the customer has presented an issue the chatbot has not seen or learned before (and therefore the chatbot is unable to identify the issue correctly), the effectiveness or accuracy of the resolution of a particular issue/problem has diminished over time (e.g., as a result of changes in the product line), among others. Layered on top of the aforementioned complexities, is the further issue of how to properly incorporate the product-specific domain expertise of subject matter experts (SMEs) into the relevant processes and phases (e.g., identification of the types of product issues to be addressed, curation of training data, AI model training, and operationalization)). The motivation to combine the Ben-Noon et al. reference and the Das et al. reference, previously provided in the rejection of dependent claim 7 above, is applicable to dependent claim 8.
With respect to claim 9, Ben-Noon et al. teaches a system for providing real-time assistance to a user using a generative Artificial Intelligence (AI) model, the system comprising:
a processing circuitry (Paragraph 22 discloses processing and memory resources); and
a memory (Paragraph 22 discloses processing and memory resources) communicatively coupled to the processing circuitry, wherein the memory stores processor instructions, which when executed by the processing circuitry, cause the processing circuitry to:
receive, from a user device associated with the user, by the generative AI model, a user query corresponding to an activity (Paragraph 81 discloses a machine learning algorithm, and/or response to a query submitted to a generative AI), wherein:
the user query comprises one or more multi-modal inputs (Paragraph 81 discloses possible user need for help and prescribing substantive help responsive to the need. The inference may be based on, by way of example, identifying a feature of a frenetic search pattern exhibited by the user, an unusual user activity hiatus, a screen shot at a time of the hiatus);
the one or more multi-modal inputs comprise a live screen activity (Paragraph 81 discloses possible user need for help and prescribing substantive help responsive to the need. The inference may be based on, by way of example, identifying a feature of a frenetic search pattern exhibited by the user, an unusual user activity hiatus, a screen shot at a time of the hiatus); and
processing in real-time, by the generative AI model, the user query to determine a type of the user query based on the activity, the real-time processing including processing screen activities performed at the user device (Paragraph 90 discloses determines a substantially real-time value for a metric of user activity, optionally referred to as an activity temperature, that provides an indication of intensity of user interaction with MyCompany resources while using UE.sub.e). Ben-Noon et al. does not disclose the generative AI model is pretrained based on a set of predefined policies associated with an entity.
However, Das et al. teaches the generative AI model is pretrained based on a set of predefined policies associated with an entity (Paragraph 37 discloses filtering the historical case data to create a subset of the product support cases that are both remotely resolvable and non-critical. A neural network classification model may be trained with pre-labeled data for fields indicative of criticality and remote resolvability);
in response to the processing of the user query:
rendering instructions on a screen of the user device (Paragraph 76 discloses the chatbot may further attempt to help the customer to self-solve the issue by performing a search through a support knowledge article database based on the customer defined issue text and may return relevant links to relevant knowledge documents to the customer); and
receiving, by the generative AI model, on-screen control of the user device, wherein, upon receiving the on-screen control, the generative AI model performs an operation corresponding to the rendered instructions (Paragraph 38 discloses based on the limited set of product issue categories to be supported by the chatbot for each product line at issue, chatbot-specific approved workflows may be created for each issue category within the given product line with input from appropriate members of the product engineering teams).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Ben-Noon et al. with Das et al. This modification would have improved the accuracy of the generative AI model, thereby improving its ability to assist a user.
With respect to claim 10, it is rejected on grounds corresponding to above rejected claim 2, because claim 10 is substantially equivalent to claim 2.
With respect to claim 11, it is rejected on grounds corresponding to above rejected claim 3, because claim 11 is substantially equivalent to claim 3.
With respect to claim 12, it is rejected on grounds corresponding to above rejected claim 4, because claim 12 is substantially equivalent to claim 4.
With respect to claim 13, it is rejected on grounds corresponding to above rejected claim 5, because claim 13 is substantially equivalent to claim 5.
With respect to claim 14, it is rejected on grounds corresponding to above rejected claim 6, because claim 14 is substantially equivalent to claim 6.
With respect to claim 15, it is rejected on grounds corresponding to above rejected claim 7, because claim 15 is substantially equivalent to claim 7.
With respect to claim 16, it is rejected on grounds corresponding to above rejected claim 8, because claim 16 is substantially equivalent to claim 8.
With respect to claim 17, Ben-Noon et al. teaches a non-transitory computer-readable medium storing computer-executable instructions for providing real-time assistance to a user using a generative Artificial Intelligence (AI) model, the stored instructions, when executed by a processor, causing the processor to perform operations comprising:
receiving, from a user device associated with the user, by the generative AI model, a user query corresponding to an activity (Paragraph 81 discloses a machine learning algorithm, and/or response to a query submitted to a generative AI), wherein:
the user query comprises one or more multi-modal inputs (Paragraph 81 discloses possible user need for help and prescribing substantive help responsive to the need. The inference may be based on, by way of example, identifying a feature of a frenetic search pattern exhibited by the user, an unusual user activity hiatus, a screen shot at a time of the hiatus);
the one or more multi-modal inputs comprise a live screen activity (Paragraph 81 discloses possible user need for help and prescribing substantive help responsive to the need. The inference may be based on, by way of example, identifying a feature of a frenetic search pattern exhibited by the user, an unusual user activity hiatus, a screen shot at a time of the hiatus); and
processing in real-time, by the generative AI model, the user query to determine a type of the user query based on the activity, the real-time processing including processing screen activities performed at the user device (Paragraph 90 discloses determines a substantially real-time value for a metric of user activity, optionally referred to as an activity temperature, that provides an indication of intensity of user interaction with MyCompany resources while using UE.sub.e). Ben-Noon et al. does not disclose the generative AI model is pretrained based on a set of predefined policies associated with an entity.
However, Das et al. teaches the generative AI model is pretrained based on a set of predefined policies associated with an entity (Paragraph 37 discloses filtering the historical case data to create a subset of the product support cases that are both remotely resolvable and non-critical. A neural network classification model may be trained with pre-labeled data for fields indicative of criticality and remote resolvability);
in response to the processing of the user query:
rendering instructions on a screen of the user device (Paragraph 76 discloses the chatbot may further attempt to help the customer to self-solve the issue by performing a search through a support knowledge article database based on the customer defined issue text and may return relevant links to relevant knowledge documents to the customer); and
receiving, by the generative AI model, on-screen control of the user device, wherein, upon receiving the on-screen control, the generative AI model performs an operation corresponding to the rendered instructions (Paragraph 38 discloses based on the limited set of product issue categories to be supported by the chatbot for each product line at issue, chatbot-specific approved workflows may be created for each issue category within the given product line with input from appropriate members of the product engineering teams).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Ben-Noon et al. with Das et al. This modification would have improved the accuracy of the generative AI model, thereby improving its ability to assist a user.
Relevant Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US PG-PUB 20230418524 is directed to Utilizing Generative Artificial Intelligence To Improve Storage System Management [0318] the generative AI model may be able to implement the solution automatically without user interaction or intervention. The generative AI model may generate a prompt that is provided to the sender of the original content, asking for permission to implement the solution to the issue. Upon receiving the permission, the generative AI model may implement the solution. For example, if the solution to the issue is to upgrade the software on the storage system, the generative AI model may cause the storage system to upgrade the software upon receiving the permission.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS E ALLEN whose telephone number is (571)270-3562. The examiner can normally be reached Monday through Thursday, 8:30-6:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney can be reached at (571) 270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.E.A/Examiner, Art Unit 2154
/BORIS GORNEY/Supervisory Patent Examiner, Art Unit 2154