DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
1. Regarding the rejection under 35 U.S.C. § 101, Applicant's arguments filed 12/04/2025 have been fully considered but they are not persuasive.
Applicant first argues on pgs. 11-13 of the Remarks that the claims are patent eligible under Step 2A Prong 1. Specifically, Applicant argues that the limitations of “identifying, by a compiler, errors in the code; responsive to determining, based on the monitoring and the errors identified by the compiler, that an error rate associated with the plurality of inputs has exceeded a threshold, invoking, by one or more processor, a conversational large language model (LLM), wherein the threshold is based on a statistical deviation of an expected performance stored to a user profile of the user; querying, by one or more processors, the user regarding a mental state and a task of the user using the conversational LLM, wherein the query is based on the expected performance, the monitoring, and the errors identified by the compiler; and executing, by one or more processors, the one or more methods identified to assist the user” are not directed to mathematical concepts, certain methods of organizing human activity, or mental processes, and that the claim limitations are impossible to carry out solely in the mind (see pg. 13, 1st and 2nd para.). The Examiner respectfully disagrees with these arguments. The claimed invention contains several limitations which can be performed as mental processes in the human mind with the aid of pen and paper. A person can watch a user write code during a coding session, write down observations, and identify errors they see (e.g., can notice the user is making a syntax error, and make note of the mistake using pen and paper). Based on their observations and the errors they identified, the person determines that an error rate is higher than a threshold (e.g., determines to speak to the user if they make the same mistake three times), and bases the threshold on a statistical deviation of an expected performance they have written down for the user (sets the threshold based on how often the user typically makes this kind of mistake, based on information they have written down about the user). Further, the person can then ask the user about their mental state and the task based on the expected performance, monitoring, and errors (e.g., says to the user “I have noticed you keep making this syntax error. Are you frustrated and do you need help?”). The person then identifies methods to help the user based on their response (e.g., if the user says they are frustrated, decides on a method to help, such as telling them the correct syntax). The person then helps the user using the identified method (e.g., teaches the user the correct syntax). Therefore, the claims recite abstract ideas in the form of mental processes.
Applicant further argues on pgs. 13-15 that the claims are patent eligible under Step 2A Prong 2. Specifically, Applicant argues that the above limitations discussed with regards to Step 2A Prong 1 integrate any alleged judicial exception into a practical application by reflecting an improvement to the functioning of a computer or technology by improving the data processing field by providing contextual conversational user assistance (see pg. 15, 1st and 2nd para.). The Examiner respectfully disagrees with these arguments. Under Step 2A Prong 2, additional elements are considered in combination to determine if the claims integrate the judicial exception into a practical application. The only additional elements in the claimed invention are the compiler and the conversational large language model. These components are recited at a high level of generality and amount to mere instructions to implement the judicial exception using a generic computer. They do not reflect an improvement to the functioning of a computer or technology as they are merely “using” these components to perform operations which can be performed mentally by a person with the aid of pen and paper. The additional elements do not impose any meaningful limits on practicing the abstract ideas, and thus the claims are not patent eligible under Step 2A Prong 2.
Applicant further argues on pgs. 15-17 that the claims are patent eligible under Step 2B. Specifically, Applicant argues that the claims add specific limitations other than what is well-understood, routine, and conventional in the field, provide a useful application of the functionality of a computing device, and provide an improvement to the particular technological field. The Examiner respectfully disagrees with these arguments. As discussed above, the only additional elements merely apply a compiler and a conversational large language model to the mental process outlined in the Step 2A Prong 1 analysis. The additional elements, considered individually and in combination, do not provide an inventive concept. Thus, the claims are not patent eligible under Step 2B.
Hence, Applicant’s arguments are not persuasive.
2. Regarding the rejection under 35 U.S.C. § 103, Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
3. Regarding the objections to claims 3, 10, and 17, Applicant has amended each claim to address the minor informalities. Accordingly, the objections are withdrawn.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1, “A computer-implemented method” is recited, which is directed to one of the four statutory categories of invention (process) (Step 1: YES). However, the claim limitations, under their broadest reasonable interpretation, recite mental processes which fall into the category of abstract idea (Step 2A Prong 1: YES).
The following limitations, under their broadest reasonable interpretation, recite mental processes which can be performed by a human with the aid of pen and paper (a consolidated illustrative sketch follows this list):
monitoring…a plurality of inputs, in real time, by a user…the plurality of inputs comprising code being written by the user during a coding session: a human listens to and observes a user, such as watching the user during a coding session, and writes down information about the coding session
identifying…errors in the code: a person can look at code and determine errors (e.g. can see that the syntax is wrong)
responsive to determining, based on the monitoring and the errors identified…, that an error rate associated with the plurality of inputs has exceeded a threshold level, invoking…wherein the threshold is based on a statistical deviation of an expected performance stored to a user profile of the user: a human observes the user, and if the error rate is above a statistical deviation threshold (e.g. user has made the same syntax mistake twice), the person initiates a conversation with the user
querying…the user regarding a mental state and a task of the user, wherein the query is based on the expected performance, the monitoring, and the errors identified …: a human asks the user about how they are feeling mentally and about the task they are working on, and they ask the user based on expected performance, monitoring and the errors (e.g. if expected performance is a correct syntax, and based on monitoring user keeps making syntax errors, asking the user if they are potentially frustrated and if they need help with the syntax issues)
identifying…one or more methods to assist the user based on a response of the user to the querying: a human identifies how to help the user based on their response (e.g. if the user says they are frustrated and can’t spell a word correctly, identifying that they can assist by providing the correct spelling)
executing…the one or more methods identified to assist the user: a human carries out the methods identified (e.g. tells the user the correct spelling of a word they are struggling to write)
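By way of illustration only, the combination of steps mapped above can be restated as the following minimal sketch in Python. All function and variable names are hypothetical and are not drawn from Applicant's specification; the sketch merely restates, on admittedly generic components, the acts identified above and is not asserted to be Applicant's implementation.

    import statistics

    def query_llm(prompt):
        # Stand-in for a call to any conversational LLM service (hypothetical).
        return input(prompt + "\n> ")

    def identify_assistance(response):
        # Map the user's reply to one or more assistance methods (hypothetical).
        if "frustrated" in response.lower():
            return [lambda: print("Offering help with the task.")]
        return [lambda: print("Recommending a short break.")]

    def monitor_session(user_profile, inputs, compiler_errors):
        # "identifying, by a compiler, errors in the code": an error rate over the inputs
        error_rate = len(compiler_errors) / max(len(inputs), 1)

        # "wherein the threshold is based on a statistical deviation of an expected
        # performance stored to a user profile of the user"
        history = user_profile["past_error_rates"]
        threshold = statistics.mean(history) + 2 * statistics.stdev(history)

        # "responsive to determining ... that an error rate ... has exceeded a
        # threshold, invoking ... a conversational large language model (LLM)"
        if error_rate > threshold:
            # "querying ... the user regarding a mental state and a task of the user"
            response = query_llm(
                f"Observed error rate {error_rate:.2f} exceeds your usual {threshold:.2f}; "
                f"recent compiler errors: {compiler_errors}. "
                "Are you frustrated, and what task are you working on?"
            )
            # "identifying ... one or more methods to assist the user" and
            # "executing ... the one or more methods identified to assist the user"
            for method in identify_assistance(response):
                method()

Each branch of this sketch corresponds to an observation, note, comparison, question, or act that a person could perform mentally with the aid of pen and paper; the computer components add only generic implementation.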
There are no additional elements which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional elements are “by one or more processors”, “to a computing device”, “identifying, by a compiler”, “a conversational large language model (LLM)…”, and “using the conversational LLM….wherein the query is based on…errors identified by the compiler”. These limitations are recited at a high level of generality and amount to mere instructions to implement the judicial exception using generic computer components. Even when viewed in combination, the additional limitations fail to integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea (Step 2A: YES).
There are no additional limitations which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above, the additional limitations amount to mere instructions to implement the judicial exception using generic computer components. Mere instructions to implement the judicial exception using generic computer components cannot provide an inventive concept. Therefore, claim 1 is not patent eligible.
Regarding dependent claims 2-7, “The computer-implemented method” is recited, which is directed to one of the four statutory categories of invention (process) (Step 1: YES). However, the claim limitations, under their broadest reasonable interpretation, recite mental processes which fall into the category of abstract idea (Step 2A Prong 1: YES).
The following limitations, under their broadest reasonable interpretation, recite mental processes which can be performed by a human with the aid of pen and paper:
Claim 2:
subsequent to invoking …, detecting, …, a sentiment of the user; and selecting, …, a persona of a plurality of personas … to query the user based on the detected sentiment of the user: after initiating a conversation with the user, the human detects a sentiment of the user (e.g. observes body language and word choice) and decides to use a persona to query the user (e.g. determines the user is sad so uses a comforting ‘persona’)
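By way of illustration only (all names hypothetical and not drawn from the claims), the persona selection mapped above might be sketched as:

    def select_persona(detected_sentiment):
        # Choose a persona for the conversational agent based on the detected
        # sentiment of the user (hypothetical mapping).
        personas = {"sad": "comforting", "frustrated": "encouraging"}
        return personas.get(detected_sentiment, "neutral")

As with the steps of claim 1, the selection itself is a judgment a person could make mentally; the code merely restates it on generic components.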
Claim 2 contains the additional limitations “invoking the conversational LLM, detecting, by one or more processors…selecting, by one or more processors,…for the conversational LLM”, which amount to mere instructions to implement the judicial exception using generic computer components.
Claim 3:
subsequent to querying the user regarding the mental state and the task of the user …, processing, …, the response for an indication of frustration of the user; and processing, …, the response for an indication of fatigue of the user: after asking the user about their mental state and task, the human observes indications of frustration and fatigue in the user.
Claim 3 contains the additional limitations “using the conversational LLM, processing, by one or more processors…processing, by one or more processors”, which amount to mere instructions to implement the judicial exception using generic computer components.
Claim 4:
wherein the indication of frustration includes a selection from the group consisting of: a Natural Language component, a speech, a language choice, a sentiment, and an expletive, and wherein the indication of fatigue includes a selection from the group consisting of: a spelling error, a process indicator, and a process blind indicator: a human determines that a user is frustrated based on factors such as their speech, language choice, and sentiment (e.g. using certain words), and determines that a user is fatigued based on factors such as spelling errors.
Claim 5:
analyzing, …, one or more factors associated with an action the user is executing: a human observes and analyzes factors associated with the action the user is performing (e.g. the urgency of the task the user is completing and the criticality of the action)
Claim 5 contains the additional limitation “by one or more processors”, which amounts to mere instructions to implement the judicial exception using generic computer components.
Claim 6:
wherein the one or more factors associated with the action the user is executing include a selection from the group consisting of: a degree of criticality of the action the user is executing, a timeline associated with the action the user is executing, and an urgency of the action the user is executing: a human observes and analyzes factors of a user completing a task such as criticality (e.g. how important is the task), a timeline associated with the action (e.g. considering the user’s progress up to a current time), and an urgency (e.g. how quickly the user needs to complete the task)
Claim 7:
wherein the one or more methods identified to assist the user comprise a selection from the group consisting of: helping the user with the task, chatting with the user to provide emotional support, and recommending that the user take a break: a human assists the user by helping the user with the task (e.g. correcting grammatical errors the user is making), chatting to provide emotional support (e.g. helping to improve the user’s mood if they are feeling upset), and recommending a break to the user.
Claims 2-7 contain no additional elements which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional elements are those discussed above, which amount to mere instructions to implement the judicial exception using generic computer components. Even when viewed in combination, the additional limitations fail to integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea (Step 2A: YES).
Claims 2-7 contain no additional limitations which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above, the additional limitations amount to mere instructions to implement the judicial exception using generic computer components. Mere instructions to implement the judicial exception using generic computer components cannot provide an inventive concept. Therefore, claims 2-7 are not patent eligible.
Regarding claim 8, “A computer program product” is recited, which is directed to one of the four statutory categories of invention (article of manufacture) (Step 1: YES). However, the claim recites limitations similar to those in method claim 1, which, under their broadest reasonable interpretation, also recite mental processes which fall into the category of abstract idea (see analysis for claim 1 above) (Step 2A Prong 1: YES).
There are no additional elements which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional elements are those recited similarly in claim 1, “A computer program product comprising: one or more computer readable storage media and program instructions stored on the one or more computer readable storage media to perform operations comprising”. These limitations are recited at a high level of generality and amount to mere instructions to implement the judicial exception using generic computer components. Even when viewed in combination, the additional limitations fail to integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea (Step 2A: YES).
There are no additional limitations which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above, the additional limitations amount to mere instructions to implement the judicial exception using generic computer components. Mere instructions to implement the judicial exception using generic computer components cannot provide an inventive concept. Therefore, claim 8 is not patent eligible.
Regarding dependent claims 9-14, “The computer program product” is recited, which is directed to one of the four statutory categories of invention (article of manufacture) (Step 1: YES). However, the claims recite limitations similar to those in dependent claims 2-7, which, under their broadest reasonable interpretation, also recite mental processes which fall into the category of abstract idea (see analysis for claims 2-7 above) (Step 2A Prong 1: YES).
Claims 9-14 contain no additional elements which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional elements amount to mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, the additional limitations fail to integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea (Step 2A: YES).
Claims 9-14 contain no additional limitations which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above, the additional limitations amount to mere instructions to implement the judicial exception using generic computer components. Mere instructions to implement the judicial exception using generic computer components cannot provide an inventive concept. Therefore, claims 9-14 are not patent eligible.
Regarding claim 15, “A computer system” is recited, which is directed to one of the four statutory categories of invention (machine) (Step 1: YES). However, the claim recites limitations similar to those in method claim 1, which, under their broadest reasonable interpretation, also recite mental processes which fall into the category of abstract idea (see analysis for claim 1 above) (Step 2A Prong 1: YES).
There are no additional elements which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional elements are “A computer system comprising: a processor set; one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media to cause the processor set to perform operations comprising:”. These limitations are recited at a high level of generality and amount to mere instructions to implement the judicial exception using generic computer components. Even when viewed in combination, the additional limitations fail to integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea (Step 2A: YES).
There are no additional limitations which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above, the additional limitations amount to mere instructions to implement the judicial exception using generic computer components. Mere instructions to implement the judicial exception using generic computer components cannot provide an inventive concept. Therefore, claim 15 is not patent eligible.
Regarding dependent claims 16-20, “The computer system” is recited, which is directed to one of the four statutory categories of invention (machine) (Step 1: YES). However, the claims recite limitations similar to those in dependent claims 2-6, which, under their broadest reasonable interpretation, also recite mental processes which fall into the category of abstract idea (see analysis for claims 2-6 above) (Step 2A Prong 1: YES).
Claims 16-20 contain no additional elements which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional elements in claims 16-20 are mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, the additional limitations fail to integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea (Step 2A: YES).
Claims 16-20 contain no additional limitations which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above, the additional limitations amount to mere instructions to implement the judicial exception using generic computer components. Mere instructions to implement the judicial exception using generic computer components cannot provide an inventive concept. Therefore, claims 16-20 are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1, 5-8, 12-15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shetty et al. (US 2025/0136130, hereinafter Shetty) in view of Won et al. (US 2022/0415205 A1, hereinafter Won) and further in view of Zhang et al. (US 2024/0256423 A1, hereinafter Zhang).
Regarding claim 1, Shetty discloses A computer-implemented method comprising (para. 0026-0027): monitoring, by one or more processors (para. 0042 “For instance, various functions may be carried out by a processor executing instructions stored in memory.” In some embodiments, the systems, methods, and processes described herein may be executed using similar components, features, and/or functionalities to those of example autonomous vehicle 1000 of FIGS. 10A-10D, example computing device 1100 of FIG. 11, and/or example data center 1200 of FIG. 12.”), a plurality of inputs, in real time, by a user to a computing device (para. 0076 “The interior perception camera(s) 202 may be responsible for capturing one or more images (e.g., a video segment) of one or more portions of an interior section of an ego-machine—which may include an operator of the ego-machine—”; para. 0077 “For example, a DMS may employ computer vision and facial recognition algorithms to monitor the operator's face (or operator image data 204 representing the operator's face) in or near real-time. It tracks key facial features such as the eyes, eyelids, mouth, and/or head position. Additionally or alternatively, a DMS may incorporate eye-tracking technology to monitor a driver's eye movement. For example, it may track factors like blink rate, gaze direction, and eyelid closure duration. The DMS alternatively or additionally monitors the operator's head position and movements. Sudden jerks or unusual head positions may be signs of distraction or drowsiness.”; para. 0079 “The operator response handler 208 may be responsible for receiving and handling all operator responses …the operator response handler 208 receives a phrase utterance representing a response by the driver, such as “no, I'll be okay.””)…; responsive to determining, based on the monitoring…, that an error … associated with the plurality of inputs has exceeded a threshold (para. 0079 “Based at least on image pattern characteristics of such image data and/or an operator response/utterance handled by the operator response handler 208, particular embodiments generate a second score indicative of a second alertness level of the driver.”; para. 0032 “…a presentation of such output of the natural language characters includes synthesizing audio data as a phrase utterance that initiates a conversation with the driver based at least on the score corresponding to the alertness level. For example, in response to detecting that the operator's alertness level was classified at or below/above a designated threshold (e.g., at a KSS level 9 alertness)”), invoking, by one or more processors, a conversational large language model (LLM) (para. 0032 “…a presentation of such output of the natural language characters includes synthesizing audio data as a phrase utterance that initiates a conversation with the driver based at least on the score corresponding to the alertness level…”; para. 0082 “The language model(s) 226 may be responsible for taking one or more of the inputs (or partial inputs) provided by the operator alertness level detector 206, the operator response handler 208, the personalized information extractor 210, and/or the operator view probability component 228 in order to generate one or more corresponding natural language outputs.”; para. 0041 “As such, operator assistance may be provided based on using one or more language models (e.g., a Large Language Model (LLM)) to generate a natural language response.”), wherein the threshold is based on a statistical deviation of an expected performance… (statistical models (GMMs and HMMs) are used to determine deviation from expected performance (being alert): para. 0079 “Based at least on image pattern characteristics of such image data and/or an operator response/utterance handled by the operator response handler 208, particular embodiments generate a second score indicative of a second alertness level of the driver… In an illustrative example, particular embodiments may use a Gaussian Mixture Model (GMM) or Hidden Markov Model (HMM) to detect the alertness level of the driver based on detecting voice patterns associated with alertness or non-alertness, as described above.”); querying, by one or more processors, the user regarding a mental state and a task of the user using the conversational LLM (para. 0079 “For example, after an alertness level of the operator has been detected via the operator alertness level detector 206, it may feed a representation of the score as input into the language model(s) 226, which then produces a natural language response, “you are getting really tired, should I call your spouse?””), wherein the query is based on the expected performance, the monitoring… (the query is based on the monitoring (collected operator image data (Fig. 2, 204)) and an expected performance (deviation detected in operator image data via GMMs or HMMs, which is sent to the language model: para. 0079 “In an illustrative example, particular embodiments may use a Gaussian Mixture Model (GMM) or Hidden Markov Model (HMM) to detect the alertness level of the driver based on detecting voice patterns associated with alertness or non-alertness, as described above.”)); identifying, by one or more processors, one or more methods to assist the user based on a response of the user to the querying (para. 0079 “Subsequently, the operator response handler 208 receives a phrase utterance representing a response by the driver, such as “no, I'll be okay.” Subsequently, in some embodiments, the operator alertness level detector 206 receives another set of image data representing one or more portions of the driver, such as through the DMS system described above. Based at least on image pattern characteristics of such image data and/or an operator response/utterance handled by the operator response handler 208, particular embodiments generate a second score indicative of a second alertness level of the driver.”; para. 0080 “Responsively, particular embodiments provide another representation (e.g., another natural language phrase) of the second score as input into the language model(s) 226 such that the model generates and presents additional natural language characters that represents yet another phrase utterance in a conversation with the operator”); and executing, by one or more processors, the one or more methods identified to assist the user (para. 0080 “For example, the presented output might be, “you still sound kind of tired. I'm going to call your wife if that's okay?” Such output may be based on receiving input via a GMM that has classified the driver phrase as 1 (representing very drowsy) and a hand-coded data structure that maps such classification into a natural language phrase, such as “driver is very drowsy.””; para. 0083 “For example, the natural language response generator 232 may generate a natural language sentence that reads, “Wake up! And let's play football trivia together” (based on inputs provided by the operator alertness level detector 206 and the personalized information extractor 210).”).
Shetty does not specifically disclose [responsive to determining, based on the monitoring…, that an] error rate associated with the plurality of inputs has exceeded a threshold.
[wherein the threshold is based on a statistical deviation of an expected performance] stored to a user profile of the user.
Won teaches [responsive to determining, based on the monitoring…, that an] error rate associated with the plurality of inputs has exceeded a threshold (para. 0082 “Both of the cognitive module 120 and the behavior module 122 can determine the number of questions in the task, the number of questions the user 110 answered correctly, and the number of questions the user answered incorrectly from the test question data 109 and the input data 105. These modules (120 and 122) can use these results to further calculate their respective metrics, as discussed below.”; para. 0133 “The knowledge server 104 can analyze one or more factors of the generated profile to determine the types of recommendations to provide to the user 110. For example, the knowledge server 104 can determine that the number sense metric, the apply/connections metric, and the concentration metric are allow below a minimum threshold of 50%. If the knowledge server 104 determines that any of these metrics are below a minimum threshold, the knowledge server 104 can obtain a set of recommendations that have been known to address and improve the user's 110 performance with these metrics.”)
[wherein the threshold is based on a statistical deviation of an expected performance] stored to a user profile of the user (determining whether a recommendation is needed (e.g. whether a metric has regressed, is lower than a previous metric threshold) is based on user metrics stored in a user profile (130): para. 0114 “A user, such as user 110, may have 100 user profiles in the user profiles database 130, for example. Each profile can illustrate different instances when user 110 performed a task and the resultant cognitive and behavioral metrics associated with the respective task. Each profile of the same user will contain the same user credentials. By having multiple user profiles, the knowledge server 104 can monitor the learning of user 110…The learning can illustrate, for example, that the user 110's grit has improved over time, the number sense has remained the same, the memory visualization has increased, the apply/connections has decreased over time, the test taking skills have increased, the concentration has increased, and the question comprehension metric has decreased over time, to name some examples…The knowledge server 104 can tailor recommendations to provide to the user depending on whether the user has a learning setback, e.g., regression, or if the user is in fact improving his/her learning, for example.”; para. 0139 “Alternatively, if the knowledge server 104 determines that user 110's grit score is decreasing by three points per month, then the knowledge server 104 can adjust its recommendations because the past tailored recommendations were not effective for that particular user as the user 110 is regressing.”).
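As an illustration of this mapping only (all names hypothetical; this is not code from Won), a per-user threshold derived from expected performance metrics stored across a user's profiles might take the following form in Python:

    import statistics

    def exceeds_profile_threshold(stored_profiles, current_error_rate, k=2.0):
        # Expected performance is drawn from metrics stored across the user's
        # profiles (cf. Won's user profiles database 130); names are hypothetical.
        history = [profile["error_rate"] for profile in stored_profiles]
        expected = statistics.mean(history)
        deviation = statistics.stdev(history)
        # The threshold is the expected performance plus k standard deviations.
        return current_error_rate > expected + k * deviation

Here the stored history plays the role of the “expected performance stored to a user profile of the user,” and k scales the statistical deviation.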
Shetty and Won are considered to be analogous to the claimed invention as they are both in the same field of user assistance via machine learning techniques. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shetty to incorporate the teachings of Won in order to specifically determine, based on the monitoring, that an error rate associated with the plurality of inputs has exceeded a threshold, and to have the threshold be based on a statistical deviation of an expected performance stored to a user profile of the user. Doing so would be beneficial, as this would allow for monitoring a user over time and tailoring recommendations for each individual user (Won, para. 0141).
Shetty in view of Won does not specifically disclose:
[monitoring…a plurality of inputs…] the plurality of inputs comprising code being written by the user during a coding session;
identifying, by a compiler, errors in the code;
[response to determining, based on the monitoring] and the errors identified by the compiler, […invoking…a conversational large language model…];
[querying…wherein the query is based on…] the errors identified by the compiler.
Zhang teaches [monitoring…a plurality of inputs…] the plurality of inputs comprising code being written by the user during a coding session (para. 0079 “In some embodiments, a syntax phase takes in a student's program that has syntax and semantic errors, and forms chunks and prompts. Other inputs may include compiler or interpreter error messages, or a description of the task, or both.”);
identifying, by a compiler, errors in the code (para. 0079 “In some embodiments, a syntax phase takes in a student's program that has syntax and semantic errors, and forms chunks and prompts. Other inputs may include compiler or interpreter error messages, or a description of the task, or both.”);
[responsive to determining, based on the monitoring] and the errors identified by the compiler, […invoking…a conversational large language model…] (in response to a compiler error, invoke a LLM 208 using a prompt: para. 0079 “In some embodiments, a syntax phase takes in a student's program that has syntax and semantic errors, and forms chunks and prompts. Other inputs may include compiler or interpreter error messages, or a description of the task, or both. For each prompt, the embodiment queries the model 208.”);
[querying…wherein the query is based on…] the errors identified by the compiler (prompt is based on inputs including compiler error messages: para. 0079 “In some embodiments, a syntax phase takes in a student's program that has syntax and semantic errors, and forms chunks and prompts. Other inputs may include compiler or interpreter error messages, or a description of the task, or both. For each prompt, the embodiment queries the model 208. Software 302 replaces the chunk with the completion that the model suggested.”; Fig. 2, “Large Language Model Trained on Code 208”).
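Purely as an illustration of the mapped teaching (all names hypothetical; this is not Zhang's code, and a C toolchain such as gcc is assumed to be available), forming an LLM prompt from compiler error messages and a task description might look like the following Python sketch:

    import subprocess

    def prompt_from_compiler_errors(source_path, task_description):
        # Run a compiler in syntax-check-only mode over the user's code.
        result = subprocess.run(
            ["gcc", "-fsyntax-only", source_path],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            # Form a prompt from the compiler error messages and a description
            # of the task (cf. Zhang, para. 0079), to be sent to the model.
            return (
                f"Task: {task_description}\n"
                f"Compiler errors:\n{result.stderr}\n"
                "Explain the errors and suggest corrected code."
            )
        return None

The compiler's diagnostics supply the context of the query, consistent with Zhang's use of compiler or interpreter error messages as prompt inputs.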
Shetty, Won, and Zhang are considered to be analogous to the claimed invention as they are all in the same field of user assistance using machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shetty in view of Won to incorporate the teachings of Zhang in order to specifically monitor a plurality of inputs comprising code being written by the user during a coding session, to identify errors in the code using a compiler, to invoke a conversational large language model in response to the errors identified by the compiler, and to query the user based on the errors identified by the compiler. Doing so would be beneficial, as this would enable software source code to be improved automatically for user programming tasks (Zhang, para. 0007).
Regarding claim 5, Shetty in view of Won and further in view of Zhang discloses analyzing, by one or more processors, one or more factors associated with an action the user is executing (Shetty, para. 0076 “The interior perception camera(s) 202 may be responsible for capturing one or more images (e.g., a video segment) of one or more portions of an interior section of an ego-machine—which may include an operator of the ego-machine—”; para. 0077 “For example, a DMS may employ computer vision and facial recognition algorithms to monitor the operator's face (or operator image data 204 representing the operator's face) in or near real-time. It tracks key facial features such as the eyes, eyelids, mouth, and/or head position. Additionally or alternatively, a DMS may incorporate eye-tracking technology to monitor a driver's eye movement. For example, it may track factors like blink rate, gaze direction, and eyelid closure duration. The DMS alternatively or additionally monitors the operator's head position and movements. Sudden jerks or unusual head positions may be signs of distraction or drowsiness.”; para. 0079 “The operator response handler 208 may be responsible for receiving and handling all operator responses …the operator response handler 208 receives a phrase utterance representing a response by the driver, such as “no, I'll be okay.””; para. 0082 “The language model(s) 226 may be responsible for taking one or more of the inputs (or partial inputs) provided by the operator alertness level detector 206, the operator response handler 208, the personalized information extractor 210, and/or the operator view probability component 228 in order to generate one or more corresponding natural language outputs.”).
Regarding claim 6, Shetty in view of Won and further in view of Zhang discloses wherein the one or more factors associated with the action the user is executing include a selection from the group consisting of: a degree of criticality of the action the user is executing, a timeline associated with the action the user is executing, and an urgency of the action the user is executing (Shetty, operator alertness level: para. 0078 “The operator alertness level detector 206 is also generally responsible for providing a representation (e.g., a natural language sentence) of the alertness level to the language model(s) 226 for further processing, as described in more detail below. For example, the operator alertness level detector 206 may use an alertness level score as an index in a data structure to look up corresponding hand-coded natural language characters, such as “this operator has the highest alert level possible in the KSS scale.””; para. 0031 “For example, some embodiments map image data classifications to a Karolinska Sleepiness Scale (“KSS”), via a data structure, that contains (e.g., textual) representations of all 9 alert levels-“(1) extremely alert,” “(2) very alert,” “(3) alert,” “(4) rather alert,” “(5) nether alert nor sleepy,” “(6) some signs of sleepiness,” “(7) sleepy, but no effort to keep awake,” “(8) sleepy, some effort to keep awake,” and “(9) sleepy, great effort to keep awake, fighting sleep.””).
Regarding claim 7, Shetty in view of Won and further in view of Zhang discloses wherein the one or more methods identified to assist the user comprise a selection from the group consisting of: helping the user with the task, chatting with the user to provide emotional support, and recommending that the user take a break (Shetty, para. 0079 “For example, after an alertness level of the operator has been detected via the operator alertness level detector 206, it may feed a representation of the score as input into the language model(s) 226, which then produces a natural language response, “you are getting really tired, should I call your spouse?””; para. 0104 “For example, such response may be “your answer of Patrick Mahomes is correct. I'm glad you're more alert. Should I call a nearby hotel so you can sleep?””; para. 0111 “Responsively, the natural language response generator 132 generates a natural language response, such as “You appear to be very drowsy! May I play your favorite upbeat music?””).
Regarding claim 8, claim 8 is a computer program product claim with limitations similar to method claim 1, and is thus rejected under similar rationale.
Additionally, Shetty discloses A computer program product comprising: one or more computer readable storage media and program instructions stored on the one or more computer readable storage media to perform operations comprising: (para. 0115 “The methods may also be embodied as computer-usable instructions stored on computer storage media.”; para. 0260 “The computer-storage media may include both volatile and nonvolatile media and/or removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, and/or other data types. For example, the memory 1104 may store computer-readable instructions (e.g., that represent a program(s) and/or a program element(s), such as an operating system. Computer-storage media may include, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by computing device 1100. As used herein, computer storage media does not comprise signals per se.”).
Regarding claim 12, claim 12 is rejected for analogous reasons to claim 5.
Regarding claim 13, claim 13 is rejected for analogous reasons to claim 6.
Regarding claim 14, claim 14 is rejected for analogous reasons to claim 7.
Regarding claim 15, claim 15 is a computer system claim with limitations similar to method claim 1, and is thus rejected under similar rationale.
Additionally, Shetty discloses A computer system (Fig. 11; para. 0256 “FIG. 11 is a block diagram of an example computing device(s) 1100 suitable for use in implementing some embodiments of the present disclosure.”) comprising: a processor set (Fig. 11, 1106 “CPU(s)”); one or more computer readable storage media (Fig. 11, 1104 “Memory”; para. 0259 “The memory 1104 may include any of a variety of computer-readable media.”); and program instructions stored on the one or more computer readable storage media (para. 0260 “For example, the memory 1104 may store computer-readable instructions (e.g., that represent a program(s) and/or a program element(s), such as an operating system.”) to cause the processor set to perform operations (para. 0262 “The CPU(s) 1106 may be configured to execute at least some of the computer-readable instructions to control one or more components of the computing device 1100 to perform one or more of the methods and/or processes described herein.”).
Regarding claim 19, claim 19 is rejected for analogous reasons to claim 5.
Regarding claim 20, claim 20 is rejected for analogous reasons to claim 6.
6. Claims 2-4, 9-11, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Shetty in view of Won and Zhang, and further in view of Birru et al. (US 2024/0095491, hereinafter Birru).
Regarding claim 2, Shetty in view of Won and further in view of Zhang does not specifically disclose subsequent to invoking the conversational LLM, detecting, by one or more processors, a sentiment of the user; and selecting, by one or more processors, a persona of a plurality of personas for the conversational LLM to query the user based on the detected sentiment of the user.
Birru teaches subsequent to invoking the conversational LLM (para. 0111 “GenAI model: The GenAI model may include, but not limited to, a Language model (LLM) 402 for text”), detecting, by one or more processors (para. 0101 “The processor may, for example, be configured to perform the operations 302-312 by performing hardware implemented logical functions, executing stored instructions, or executing algorithms for performing each of the operations.”), a sentiment of the user (para. 0079 “The method 300 starts at step 302, where the virtual agent is ready to retrieve the user input.”; para. 0082 “To further elaborate, before generating the response, the virtual agent thoroughly analyzes the user's input. This analysis includes understanding the content, context, intent, and sentiment conveyed by the user across various modalities, such as text, speech, and visual cues.”); and selecting, by one or more processors, a persona of a plurality of personas for the conversational LLM to query the user based on the detected sentiment of the user (para. 0095 “The virtual agent uses this selected prompt as a foundation for its response to the user. However, it doesn't provide the prompt precisely. Instead, it rephrases it into a more comprehensive and empathetic response…”; para. 0121-0124 “The speech tone modifier may be capable of modifying speech tone of the virtual agent 416. This allows the virtual agent to convey the response with the desired emotional tone or style. [0122] The factual information retriever 418 component retrieves factual information from the LLM 402 to ensure that the response is grounded in accurate data and facts. The retrieved information is presented as natural language in order to respond back to the user. [0123] The face pose modifier 420 component focuses on adjusting the virtual agent's facial expression, which may be included in the visual aspect of the response. [0124] The components for modifying speech tone, retrieving factual responses, and adjusting facial expressions work together to generate a modified character of the virtual agent 104 via a character generator 422. This character represents the virtual agent's response and is designed to convey the information effectively.”).
Shetty, Won, Zhang, and Birru are considered to be analogous to the claimed invention as they are all in the same field of user assistance via machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shetty in view of Won and further in view of Zhang to incorporate the teachings of Birru in order to, subsequent to invoking the LLM, detect a sentiment of the user and then select a persona of a plurality of personas for the conversational LLM to query the user based on the detected sentiment. Doing so would be beneficial, as this would allow the conversational LLM to respond in a personalized and contextually relevant manner, enhancing user experience (Birru, para. 0063).
Regarding claim 3, Shetty in view of Won and further in view of Zhang discloses processing, by one or more processors, the response for an indication of fatigue of the user (Shetty, para. 0079 “Based at least on image pattern characteristics of such image data and/or an operator response/utterance handled by the operator response handler 208, particular embodiments generate a second score indicative of a second alertness level of the driver.”; para. 0031 “For example, some embodiments map image data classifications to a Karolinska Sleepiness Scale (“KSS”), via a data structure, that contains (e.g., textual) representations of all 9 alert levels-“(1) extremely alert,” “(2) very alert,” “(3) alert,” “(4) rather alert,” “(5) nether alert nor sleepy,” “(6) some signs of sleepiness,” “(7) sleepy, but no effort to keep awake,” “(8) sleepy, some effort to keep awake,” and “(9) sleepy, great effort to keep awake, fighting sleep.””).
Shetty in view of Won and further in view of Zhang does not specifically disclose subsequent to querying the user regarding the mental state and the task of the user using the conversational LLM, processing, by one or more processors, the response for an indication of frustration of the user.
Birru teaches subsequent to querying the user regarding the mental state and the task of the user using the conversational LLM (para. 0111 “GenAI model: The GenAI model may include, but not limited to, a Language model (LLM) 402 for text”), processing, by one or more processors, the response for an indication of frustration of the user (para. 0129 “For example, as shown in present FIG. 5, after observing that the user is currently frustrated, the LLM is prompting what to do next. In particular, the LLM guides the designer 410 to design a prompt that may improve the user's mood.”; para. 0141-0144 “[0141] Jason (as your best friend): Aw, I can sense it from your tone. Tell me, what happened? [0142] User response with Mood: Frustrated and Interest level: Engaged [0143] User: Ugh, everything just seemed to go wrong today. I woke up late, spilled coffee on my shirt, and missed an important meeting! [0144] Jason (understanding): I feel you! Mornings like that are a nightmare. And missing a meeting can be stressful.”).
Shetty, Won, Zhang, and Birru are considered to be analogous to the claimed invention as they are all in the same field of user assistance via machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shetty in view of Won and further in view of Zhang to incorporate the teachings of Birru in order to, subsequent to querying the user regarding the mental state and task of the user using the conversational LLM, process the response for an indication of frustration of the user. Doing so would be beneficial, as this would allow the conversational LLM to respond in a personalized and contextually relevant manner, enhancing user experience (Birru, para. 0063).
Regarding claim 4, Shetty in view of Won and further in view of Zhang and further in view of Birru discloses wherein the indication of frustration includes a selection from the group consisting of: a Natural Language component, a speech, a language choice, a sentiment, and an expletive (Birru, para. 0082 “To further elaborate, before generating the response, the virtual agent thoroughly analyzes the user's input. This analysis includes understanding the content, context, intent, and sentiment conveyed by the user across various modalities, such as text, speech, and visual cues.”; para. 0085 “In scenarios where the user's input conveys emotional cues, such as sadness or frustration, the virtual agent may incorporate emotional intelligence.”); wherein the indication of fatigue includes a selection from the group consisting of: a spelling error, a process indicator, and a process blind indicator (Shetty, process indicator: para. 0077 “The operator alertness level detector 206 detects the alertness level of the operator of the ego-machine based on detecting patterns or associations within the operator image data 204. For example, a DMS may employ computer vision and facial recognition algorithms to monitor the operator's face (or operator image data 204 representing the operator's face) in or near real-time. It tracks key facial features such as the eyes, eyelids, mouth, and/or head position. Additionally or alternatively, a DMS may incorporate eye-tracking technology to monitor a driver's eye movement. For example, it may track factors like blink rate, gaze direction, and eyelid closure duration. The DMS alternatively or additionally monitors the operator's head position and movements. Sudden jerks or unusual head positions may be signs of distraction or drowsiness.”; para. 0031 “For example, some embodiments map image data classifications to a Karolinska Sleepiness Scale (“KSS”), via a data structure, that contains (e.g., textual) representations of all 9 alert levels-“(1) extremely alert,” “(2) very alert,” “(3) alert,” “(4) rather alert,” “(5) nether alert nor sleepy,” “(6) some signs of sleepiness,” “(7) sleepy, but no effort to keep awake,” “(8) sleepy, some effort to keep awake,” and “(9) sleepy, great effort to keep awake, fighting sleep.””).
Shetty, Won, Zhang, and Birru are considered to be analogous to the claimed invention as they are all in the same field of user assistance via machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shetty in view of Won and further in view of Zhang to incorporate the teachings of Birru in order to have the indication of frustration include at least one of a Natural Language component, a speech, a language choice, a sentiment, and an expletive. Doing so would be beneficial, as this would allow the conversational LLM to respond in a personalized and contextually relevant manner, enhancing user experience (Birru, para. 0063).
Regarding claim 9, claim 9 is rejected for analogous reasons to claim 2.
Regarding claim 10, claim 10 is rejected for analogous reasons to claim 3.
Regarding claim 11, claim 11 is rejected for analogous reasons to claim 4.
Regarding claim 16, claim 16 is rejected for analogous reasons to claim 2.
Regarding claim 17, claim 17 is rejected for analogous reasons to claim 3.
Regarding claim 18, claim 18 is rejected for analogous reasons to claim 4.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Agrawal et al. (US 2025/0147832 A1): language model assisted error analysis, utilizing error messages to determine context for LLM prompt to fix error (Fig. 3)
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CODY DOUGLAS HUTCHESON whose telephone number is (703)756-1601. The examiner can normally be reached M-F 8:00AM-5:00PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre-Louis Desir, can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CODY DOUGLAS HUTCHESON/Examiner, Art Unit 2659
/PIERRE LOUIS DESIR/Supervisory Patent Examiner, Art Unit 2659