Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office Action is sent in response to Applicant's Communication received on 05/09/2023 for application number 18/195197. The Office hereby acknowledges receipt of the following items, which have been placed of record in the file: Specification, Drawings, Abstract, Oath/Declaration, and Claims.
Claims 1-11, 12-14 and 15-20 are presented for examination.
Claim Rejections - 35 U.S.C. § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1: Claims 1-20 are drawn to a method and a computing device, each of which is within the four statutory categories (e.g., a process, a machine).
Step 2A - Prong One: In prong one of step 2A, the claims are analyzed to evaluate whether they recite a judicial exception.
Claim 1 recites:
gathering, by a command prediction module of the application executing on a computing device, command data and user characteristic data for the user;
cleaning the command data to produce an input dataset;
applying, by the command prediction module, the input dataset to a trained recurrent neural network model, the trained recurrent neural network model configured to produce a separate next command prediction for each of a plurality of different values of one or more user characteristics;
selecting, by the command prediction module, one or more recommended next commands from within the next command prediction produced for a value of the one or more user characteristic that corresponds to the user characteristic data for the user; and
displaying the one or more recommended next commands in a user interface of the application.
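The recited steps can be sketched, purely for illustration, as the following pipeline (all names, the ignore list, and the model interface are hypothetical; this is not the applicant's actual implementation):

```python
def recommend_next_commands(command_log, user_traits, model, top_k=3):
    # "gathering": command data and user characteristic data for the user
    commands = [entry["command"] for entry in command_log]

    # "cleaning": drop commands on a hypothetical ignore list
    ignore = {"undo", "redo"}
    dataset = [c for c in commands if c not in ignore]

    # "applying": the model returns one next-command prediction per
    # user-characteristic value, e.g. {"novice": {...}, "expert": {...}}
    predictions = model.predict(dataset)

    # "selecting": pick the prediction matching this user's characteristic
    scores = predictions[user_traits["skill_level"]]
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]  # "displaying" would be handled by the UI layer
```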
The limitations of “gathering, by a command prediction module…”, “cleaning the command data to produce…”, and “selecting, by the command prediction module, one or more…” are concepts that can practically be performed in the mind, or by a human using pen and paper as a physical aid. Examples of mental processes include observations, evaluations, judgments, and opinions. These limitations, as drafted, constitute a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting a “computing device”, “processor”, “memory” and “medium”, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the “computer-readable storage medium” language, the claim encompasses gathering and evaluating the command data and the user characteristic data, cleaning the command data of unimportant commands to produce an input dataset, and selecting recommended next commands. The mere nominal recitation of a generic computer does not take the claim limitations out of the mental processes grouping. Thus, under the broadest reasonable interpretation, the claim recites a mental process.
Step 2A Prong 2:
Claim 1 recites additional elements such as “applying, by the command prediction module, the input dataset to a trained recurrent neural…” and “displaying the one or more recommended next…”, which are recited at a high level of generality. These elements merely recite a generic computer (or an equivalent) applied to the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The “applying” step is an additional element that amounts to mere instructions to apply an exception; it amounts to no more than the words “apply it” and mere instructions to implement an abstract idea or other exception on a computer. This limitation does not integrate the judicial exception into a practical application; therefore, the additional element does not amount to an inventive concept. The “displaying” step is an additional element that constitutes insignificant extra-solution activity, as it encompasses mere data output.
Dependent claims 2-11, 13-14 and 16-20 fail to include any additional elements. In other words, each of the limitations/elements recited in these dependent claims is further part of the abstract idea as identified by the Examiner (i.e., they are part of the abstract idea recited in each respective claim).
The Examiner has therefore determined that the elements, or combination of additional elements, do not integrate the abstract idea into a practical application. Accordingly, the claims are directed to an abstract idea.
Step 2B: The claim does not provide an inventive concept (significantly more than the abstract idea). The claim is ineligible.
The “applying” step amounts to no more than the words “apply it” and mere instructions to implement an abstract idea or other exception on a computer. The “displaying” step is merely outputting data and is therefore insignificant extra-solution activity. Both steps are recited at a high level of generality and amount to predicting next commands using a generic computer. Even when considered in combination, the additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept.
The same rationale applies to independent claims 12 and 15.
Dependent claims 2-11, 13-14 and 16-20 likewise fail to include any additional elements that amount to significantly more; each of their limitations is further part of the abstract idea identified above. The Examiner has therefore determined that the claims do not provide an inventive concept. Accordingly, claims 1-20 are not patent eligible.
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-12, 14-15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al. (US Patent Application Publication US 20220335043 A1, hereinafter Zheng) in view of Aggarwal et al. (US Patent Application Publication US 20220019909 A1, hereinafter Aggarwal).
Regarding claim 1, Zheng teaches a method for recommending one or more next commands to a user of an application, comprising: gathering, by a command prediction module of the application executing on a computing device, command data and user characteristic data for the user ([0002], [0023], [0041-0043], wherein Zheng suggests recommendation of commands for achieving a desired outcome, and wherein Zheng receives command data in the form of queries and contextual data that includes user data and information).
Zheng does not teach cleaning the command data to produce an input dataset; applying, by the command prediction module, the input dataset to a trained recurrent neural network model, the trained recurrent neural network model configured to produce a separate next command prediction for each of a plurality of different values of one or more user characteristics; selecting, by the command prediction module, one or more recommended next commands from within the next command prediction produced for a value of the one or more user characteristics that corresponds to the user characteristic data for the user; and displaying the one or more recommended next commands in a user interface of the application.
However, in the analogous art of recommending next commands using recurrent neural networks, Aggarwal teaches cleaning the command data to produce an input dataset ([0021], [0039], [0041], [0058], [0068], wherein Aggarwal describes filtering out irrelevant commands and producing a representation of the command sequence); applying, by the command prediction module, the input dataset to a trained recurrent neural network model, the trained recurrent neural network model configured to produce a separate next command prediction for each of a plurality of different values of one or more user characteristics ([0014], [0016], [0021], [0028], [0039-0041], [0058], [0068], wherein Aggarwal describes an analytics system that provides a command recommendation, such as “add device type identifier” as a next command recommendation, wherein the analytics system is based on one or more model architectures (e.g., recurrent neural networks (RNN), convolutional neural networks (CNN), frequency models, Markov models, etc.), and wherein the analytics system is trained using log data of an application (e.g., log data of a particular user or a group of users)); selecting, by the command prediction module, one or more recommended next commands from within the next command prediction produced for a value of the one or more user characteristics that corresponds to the user characteristic data for the user (Abstract, [0021], [0039-0040], [0048], [0050], wherein Aggarwal describes the command engine selecting a next command recommendation based on analysis that includes the level of the user's skills); and displaying the one or more recommended next commands in a user interface of the application (Abstract, [0003], [0021], [0039-0042], [0050-0052], [0058], [0068], wherein Aggarwal displays the command recommendation on the interface).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Aggarwal with Zheng by incorporating Aggarwal's cleaning of the command data to produce an input dataset; applying, by the command prediction module, the input dataset to a trained recurrent neural network model configured to produce a separate next command prediction for each of a plurality of different values of one or more user characteristics; and selecting, by the command prediction module, one or more recommended next commands from within the next command prediction produced for a value of the one or more user characteristics that corresponds to the user characteristic data for the user, into Zheng's method of gathering, by a command prediction module of the application executing on a computing device, command data and user characteristic data for the user, for the purpose of filtering out irrelevant options that are provided to users, leaving the user with a choice among a relatively smaller number of options (Aggarwal: [0021]).
Regarding claim 4, Zheng as modified by Aggarwal teaches wherein the applying further comprises: extracting a past command sequence from the input dataset; encoding the past command sequence as a plurality of vectors; and providing the plurality of vectors to an input layer of the trained recurrent neural network model ([0014], [0058], [0065] wherein Aggarwal describes to a machine-learning model that is trained using a machine-learning system to perform an assessment on log data that includes user behaviors, wherein the log data is extracted and wherein a multi-layered Long Short-Term Memory (LSTM) is used to encode the input sequences of commands into vectors of fixed dimensionality).
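The encoding step recited in claim 4 can be sketched, for illustration only, as a simple one-hot encoding of a past command sequence into fixed-dimensionality vectors (the vocabulary and function names are hypothetical, and an actual LSTM/RNN embedding would differ):

```python
def encode_sequence(commands, vocab):
    # map each command in the past command sequence to a one-hot vector
    # whose dimensionality equals the size of the command vocabulary
    index = {c: i for i, c in enumerate(vocab)}
    vectors = []
    for cmd in commands:
        v = [0.0] * len(vocab)
        v[index[cmd]] = 1.0
        vectors.append(v)
    return vectors  # suitable as input-layer features for an RNN
```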
Regarding claim 5, Zheng as modified by Aggarwal teaches wherein the trained recurrent neural network model is configured to produce an associated confidence level for each command within each separate next command prediction, and the selecting further comprises selecting a command having a greatest confidence level or selecting each command having an associated confidence level above a given threshold ([0043], [0062], [0087], wherein Zheng categorizes application command recommendations based on a confidence score, wherein the command recommendations may also be categorized into a highly likely category labeled as “best action” and another category labeled as “actions.” Best action may identify commands that are highly likely to correspond with the user's desired intent. This may be determined based on a confidence score provided by the multilingual command recommendation model. The remaining commands may also be displayed based on their ranking, which may be determined based on a probability or confidence score. This reduces cognitive load and may enable the user to quickly and efficiently identify a desired command).
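The two selection modes recited in claim 5 (greatest confidence, or all commands above a threshold) can be sketched as follows, purely for illustration (the function name and data shape are hypothetical):

```python
def select_commands(confidences, threshold=None):
    # confidences: {command: confidence_level} for one next-command prediction
    if threshold is None:
        # mode 1: select the single command with the greatest confidence level
        return [max(confidences, key=confidences.get)]
    # mode 2: select every command whose confidence exceeds the threshold
    return [c for c, p in confidences.items() if p > threshold]
```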
Regarding claim 6, Zheng as modified by Aggarwal teaches wherein the gathering further comprises: processing the command data to determine the user characteristics data, the processing to include comparing aspects of the command data to one or more thresholds ([0043], wherein Zheng calculates a selection probability for multiple commands, wherein commands having a probability higher than a predetermined threshold may be selected as the recommended commands. The probability calculations may be used to rank the recommended commands; for example, commands having higher probabilities may be ranked higher. The selected commands, along with their probability scores and/or their rankings, may be provided as the output. Thus, the multilingual command recommendation model may receive a search query as an input and may provide one or more recommendations in a desired language as an output).
Regarding claim 7, Zheng as modified by Aggarwal teaches wherein the gathering further comprises soliciting the user to provide the user characteristics data in the user interface of the application ([0021-0025], [0039], [0045], [0051], wherein Aggarwal supports training users on the different functions that are available on a particular application that the recommendation system supports. When a novice user lacks the skills and knowledge to choose among the different options provided, recommender systems act as a guide for the user in making a selection. Specifically, a user may make a click selection (e.g., select commands or actions that are registered in log data when the user interacts with the interface of the analytics system)).
Regarding claim 8, Zheng as modified by Aggarwal teaches wherein the cleaning further comprises: removing commands on a predetermined list of commands from the command data; removing instances of sequential commands that occur more than a threshold number of times from the command data; or removing commands that occur less frequently than a threshold from the command data ([0039], [0058], [0060], wherein Aggarwal identifies the total number of unique commands, wherein certain commands can be dropped and discarded, and wherein the analytics system can filter out irrelevant commands, leaving primarily commands that are relevant to the user's goal; the analytics system can also provide guidance to a novice user to allow efficient data analysis).
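The three cleaning alternatives recited in claim 8 can be sketched, for illustration only (the parameter names and default thresholds are hypothetical):

```python
from collections import Counter

def clean_commands(commands, blocklist=(), max_repeat=3, min_freq=1):
    # alternative 1: remove commands on a predetermined list
    cleaned = [c for c in commands if c not in blocklist]

    # alternative 2: cap runs of the same sequential command at max_repeat
    out, run = [], 0
    for c in cleaned:
        run = run + 1 if out and out[-1] == c else 1
        if run <= max_repeat:
            out.append(c)

    # alternative 3: remove commands occurring less frequently than min_freq
    counts = Counter(out)
    return [c for c in out if counts[c] >= min_freq]
```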
Regarding claim 9, Zheng as modified by Aggarwal teaches comparing an actual next command selected by the user to the one or more recommended next commands; and in response to the actual next command not matching any of the one or more recommended next commands, there being less than a threshold number of previous incorrect predictions, and one or more recommended next commands having a confidence level above a threshold level, determining the user switched tasks ([0043], [0062], [0087], wherein Zheng categorizes application command recommendations based on a confidence score, wherein the command recommendations may also be categorized into a highly likely category labeled as “best action” and another category labeled as “actions.” Best action may identify commands that are highly likely to correspond with the user's desired intent. This may be determined based on a confidence score provided by the multilingual command recommendation model. The remaining commands may also be displayed based on their ranking, which may be determined based on a probability or confidence score. This reduces cognitive load and may enable the user to quickly and efficiently identify a desired command).
Regarding claim 10, Zheng as modified by Aggarwal teaches comparing an actual next command selected by the user to the one or more recommended next commands; and in response to the actual next command not matching any of the one or more recommended next commands, there being greater than a threshold number of previous incorrect predictions, and one or more recommended next commands having a confidence level above a threshold level, determining the user is having difficulty operating the application ([0043], [0062], [0087], wherein Zheng categorizes application command recommendations based on a confidence score, wherein the command recommendations may also be categorized into a highly likely category labeled as “best action” and another category labeled as “actions.” Best action may identify commands that are highly likely to correspond with the user's desired intent. This may be determined based on a confidence score provided by the multilingual command recommendation model. The remaining commands may also be displayed based on their ranking, which may be determined based on a probability or confidence score. This reduces cognitive load and may enable the user to quickly and efficiently identify a desired command), ([0021-0025], [0039], [0045], [0051], wherein Aggarwal supports training users on the different functions that are available on a particular application that the recommendation system supports. When a novice user lacks the skills and knowledge to choose among the different options provided, recommender systems act as a guide for the user in making a selection. Specifically, a user may make a click selection (e.g., select commands or actions that are registered in log data when the user interacts with the interface of the analytics system)).
Regarding claim 11, Zheng as modified by Aggarwal teaches comparing an actual next command selected by the user to the one or more recommended next commands; and in response to the actual next command matching one of the one or more recommended next commands, there being greater than a threshold number of previous correct predictions, and one or more recommended next commands having a confidence level above a threshold level, determining the user well-understands how to operate the application ([0043], [0062], [0087], wherein Zheng categorizes application command recommendations based on a confidence score, wherein the command recommendations may also be categorized into a highly likely category labeled as “best action” and another category labeled as “actions.” Best action may identify commands that are highly likely to correspond with the user's desired intent. This may be determined based on a confidence score provided by the multilingual command recommendation model. The remaining commands may also be displayed based on their ranking, which may be determined based on a probability or confidence score. This reduces cognitive load and may enable the user to quickly and efficiently identify a desired command), ([0021-0025], [0039], [0045], [0051], wherein Aggarwal supports training users on the different functions that are available on a particular application that the recommendation system supports. When a novice user lacks the skills and knowledge to choose among the different options provided, recommender systems act as a guide for the user in making a selection. Specifically, a user may make a click selection (e.g., select commands or actions that are registered in log data when the user interacts with the interface of the analytics system)).
Regarding claim 12, Zheng teaches a computing device configured to recommend one or more next commands to a user of an application, the computing device comprising: a processor; and a memory coupled to the processor, the memory configured to maintain a command prediction module of the application that when executed on the processor is operable to perform the recited functions ([0005], [0055], [0070]). The claim is similar in scope to claim 1; therefore, the claim is rejected under a similar rationale.
Regarding claim 14, Zheng as modified by Aggarwal teaches wherein the one or more user characteristics comprise a user skill level or a user industry sector (Abstract, [0021-0022], [0025], [0039-0040], [0045], [0051], wherein Aggarwal describes the command engine selecting a next command recommendation based on analysis that includes the level of the user's skills).
Regarding claim 15, Zheng teaches a computing device configured to recommend one or more next commands to a user of an application, the computing device comprising: a processor; and a memory coupled to the processor, the memory configured to maintain a command prediction module of the application that when executed on the processor is operable to perform the recited functions ([0077]). The claim is similar in scope to claim 1; therefore, the claim is rejected under a similar rationale.
Regarding claim 17, the claim is similar in scope to claim 4; therefore, the claim is rejected under a similar rationale.
Regarding claim 18, the claim is similar in scope to claim 5; therefore, the claim is rejected under a similar rationale.
Regarding claim 19, the claim is similar in scope to claim 6; therefore, the claim is rejected under a similar rationale.
Regarding claim 20, the claim is similar in scope to claim 8; therefore, the claim is rejected under a similar rationale.
Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al. (US Patent Application Publication US 20220335043 A1, hereinafter Zheng) in view of Aggarwal et al. (US Patent Application Publication US 20220019909 A1, hereinafter Aggarwal), and further in view of Yao et al. (US Patent Application Publication US 20200086858 A1, hereinafter Yao).
Regarding claim 2, Zheng and Aggarwal do not teach wherein the trained recurrent neural network model is configured to produce the separate next command predictions by clustering final hidden states from a last hidden layer of the trained recurrent neural network model, associating each cluster with a value of one or more user characteristics, and having the output layer of the trained recurrent neural network model produce the separate next command predictions based on the final hidden states from each cluster.
However, in the analogous art of recommending next commands using recurrent neural networks, Yao teaches wherein the trained recurrent neural network model is configured to produce the separate next command predictions by clustering final hidden states from a last hidden layer of the trained recurrent neural network model, associating each cluster with a value of one or more user characteristics, and having the output layer of the trained recurrent neural network model produce the separate next command predictions based on the final hidden states from each cluster ([0044], [0053], [0061-0062], [0071-0073], [0077], [0083], [0085], [0093], wherein Yao communicates commands based on predicting the surrounding environment of a vehicle, wherein the GRU is the gated recurrent unit of the motion encoder with a hidden state vector, wherein an encoding module merges and aggregates the final hidden states, wherein the final fused hidden state is output as the hidden state vector of the GRU, and wherein the commands are updated for a specific participant).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Yao with Aggarwal and Zheng by incorporating Yao's trained recurrent neural network model configured to produce the separate next command predictions by clustering final hidden states from a last hidden layer of the trained recurrent neural network model, associating each cluster with a value of one or more user characteristics, and having the output layer of the trained recurrent neural network model produce the separate next command predictions based on the final hidden states from each cluster, into the method of gathering, by a command prediction module of the application executing on a computing device, command data and user characteristic data for the user of Aggarwal and Zheng, for the purpose of providing commands to the neural network to convert sampled data into a plurality of image frames that may include one or more past data points (Yao: [0056]).
Regarding claim 3, Zheng as modified by Aggarwal and Yao teaches wherein the trained neural network model is a trained gated recurrent unit (GRU) neural network model and the last hidden layer is a last GRU layer ([0044], [0053], [0061-0062], [0071-0073], [0077], [0083], [0085], [0093], wherein Yao teaches a neural network model that is a trained gated recurrent unit (GRU) with a hidden layer that is a last GRU layer).
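The cluster-association step recited in claims 2-3 can be sketched, for illustration only, as assigning a final hidden state vector to the nearest cluster centroid, where each centroid has been associated with a user-characteristic value (the centroid labels and function name are hypothetical; a real system would learn the centroids from training data):

```python
def nearest_cluster(state, centroids):
    # state: final hidden state vector from the last GRU layer
    # centroids: {characteristic_value: centroid_vector}
    def dist(a, b):
        # squared Euclidean distance between two vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # return the characteristic value whose cluster centroid is closest
    return min(centroids, key=lambda label: dist(state, centroids[label]))
```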
Regarding claim 13, the claim is similar in scope to claim 2 therefore the claim is rejected
under similar rationale.
Regarding claim 16, the claim is similar in scope to claim 4 therefore the claim is rejected
under similar rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HASSAN MRABI whose telephone number is (571) 272-8875. The examiner can normally be reached Monday-Friday, 7:30am-5pm EST (alternate Fridays).
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached on 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HASSAN MRABI/Examiner, Art Unit 2144