DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claims recite a process of obtaining a plurality of user context, converting the plurality of user context to a plurality of embedding vectors, correlating between the plurality of embedding vectors, and creating a predictive association.
The claimed process is similar to a mental process, particularly a concept performed in the human mind (including an observation, evaluation, judgment, or opinion), which is one of the groupings of abstract ideas under Prong One of Step 2A of the 2019 Patent Subject Matter Eligibility Guidance, since the steps of obtaining data and converting data, which allow processes such as correlating information and creating a predictive association, are directed to a series of thought processes (i.e., mental processes).
Further, this judicial exception is not integrated into a practical application because the creating step merely allows a process (e.g., an association process) to occur, which does not mean that the predicting process will actually be performed or result in a practical application.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements (e.g., images and vocal instructions) are directed to types of information being manipulated. The types of information being manipulated do not impose a meaningful limit on the judicial exception, such that the claims are more than a drafting effort designed to monopolize the exception, because the claimed steps could be performed in the same manner, and achieve the same outcome, with types of information other than those recited in the claims.
The additional processes (e.g., mapping… for machine-generated prompt augmentation, caching… for initializing a foundation model) are merely directed to intended uses, since the processes are neither performed nor integrated into a practical application.
Hence, the claims do not include additional elements, or a combination of elements, sufficient to amount to significantly more than the judicial exception, and they fail to integrate the judicial exception into a practical application under Prong Two of Step 2A of the 2019 Patent Subject Matter Eligibility Guidance, because the claimed elements, alone or in combination, do not impose any meaningful limits on practicing the abstract idea.
Further, in view of Step 2B of the 2019 Patent Subject Matter Eligibility Guidance, it is determined that the computing elements recited in the claims (such as an apparatus comprising: a network interface configured to communicate with a user device; a processor; and a non-transitory computer-readable medium) amount to no more than the use of a generic computing system having generic computing components (such as a processor) on a generic network, which fails to provide an inventive concept or significantly more than the abstract idea, because these elements do not necessarily improve the functioning of a computing system or provide an improvement to a technical field, since network computing is well known.
Thus, for at least the reasons above, the pending claims are not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Khemka (US 20230409615 A1).
Regarding Claim 1, Khemka discloses a method, comprising:
obtaining a plurality of user context collected during online operation according to a real-time budget (Fig. 2; [0088]-[0089]: the entity resolution module 212 may perform user-facilitated disambiguation (e.g., getting real-time user feedback from users); [0112]: In particular embodiments, a task state may be a data structure persistent cross interaction turns and updates in real time to capture the state of the task during the whole interaction; Table 9);
converting the plurality of user context to a plurality of embedding vectors (Fig. 3; [0061]: Federated user representation learning may personalize federated learning models by learning task-specific user representations (i.e., embeddings); [0081]: The intent classifier 336b may take the user input as input and formulate it into a vector);
correlating between the plurality of embedding vectors to identify a pattern during offline operation according to a best-effort budget (Figs. 1-3; [0078]-[0081]: In particular embodiments, the capability of reasoning may enable the assistant system 140 to, for example… learn interaction patterns and preferences from users' historical behavior… The intent classifier 336b may then calculate probabilities of the user input being associated with different predefined intents based on a vector comparison; [0071]: the arbitrator 226a may rank and select a best result for responding to the user input); and
creating a predictive association based on the pattern ([0078]: In particular embodiments, the capability of reasoning may enable the assistant system 140 to… learn interaction patterns and… generate highly predictive proactive suggestions based on micro-context understanding).
Regarding Claim 2, Khemka discloses the method of claim 1, where the plurality of user context comprises images and vocal instructions (Fig. 1; [0031]: In particular embodiments, the user may interact with the assistant system 140 by providing user input to the assistant application 136 via various modalities (e.g., audio, voice, text, vision, image, video, gesture, motion, activity, location, orientation)).
Regarding Claim 3, Khemka discloses the method of claim 1, where the pattern is identified based on a temporal pattern, a spatial pattern, or an activity pattern (Fig. 1; [0126]: In particular embodiments, the assistant layer may comprise user experience (UX) guidelines for how the assistant system 140 expects users to interact with the assistant system 140, and provide rules, patterns… the assistant layer may be considered a distinct spatial layer).
Regarding Claim 4, Khemka discloses the method of claim 1, where the predictive association comprises a trigger condition and a response, and where the method further comprises configuring a user device to execute the response responsive to the trigger condition. ([0109]-[0110]: In particular embodiments, the personalized language model may also be used to predict what words a user is most likely to say given a context… These updates may be consumed by the dialog manager 216 to trigger proactive actions based on context… As an example and not by way of limitation, receiving a message may be a social event, which may trigger the task of reading the message to the user).
Regarding Claim 5, Khemka discloses the method of claim 1, where the predictive association comprises a mapping between at least two embedding vectors for machine-generated prompt augmentation ([0098]: In particular embodiments, mapping events to actions may result in several technical advantages for the assistant system 140; [0257]-[0259]: Based on candidate mapping, the assistant system 140 may learn a language model on the user typed keystrokes… Our system may employ a fine-tuned RoBERTa model to embed entity mentions and entity names into a common vector space. It may capture semantic similarities).
Regarding Claim 6, Khemka discloses the method of claim 1, where the predictive association comprises caching a custom session state for initializing a foundation model (Figs. 1-3; [0078]-[0080]: In particular embodiments, the capability of reasoning may enable the assistant system 140 to… generate highly predictive proactive suggestions… The CU object generator 314 may generate particular CU objects relevant to the user input. The CU objects may comprise dialog-session data and features associated with the user input; [0091]: In particular embodiments, the dialog intent resolution 356 may resolve the user intent associated with the current dialog session based on dialog history between the user and the assistant system 140. [0331]: This disclosure contemplates processor 1902 including any suitable number of any suitable internal caches, where appropriate).
Regarding Claim 7, Khemka discloses the method of claim 1, where the predictive association is characterized by an association strength ([0053]: In particular embodiments, each resolved entity may also be associated with a confidence score), and where the method further comprises periodically updating the association strength based on repetition of use (Fig. 2; [0082]-[0088]: The NLU module 210 may further process information from these different sources by identifying and aggregating information, annotating n-grams of the user input, ranking the n-grams with confidence scores based on the aggregated information).
Regarding Claim 8, Khemka discloses an apparatus, comprising:
a network interface configured to communicate with a user device (Fig. 1; [0044]: The social-networking system 160 may also include suitable components such as network interfaces);
a processor ([0330]: In particular embodiments, computer system 1900 includes a processor 1902); and
a non-transitory computer-readable medium comprising instructions that when executed by the processor cause the processor to ([0337]: Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits):
obtain a plurality of user context collected by the user device (Fig. 2; [0088]-[0089]: The offline aggregators may process a plurality of data associated with the user that are collected from a prior time window; [0107]: FIG. 4 illustrates an example task-centric flow diagram 400 of processing a user input… multi-modal experiences that are initiated on understanding user context; Table 9);
convert the plurality of user context to a plurality of embedding vectors (Fig. 3; [0061]: Federated user representation learning may personalize federated learning models by learning task-specific user representations (i.e., embeddings); [0081]: The intent classifier 336b may take the user input as input and formulate it into a vector);
identify a user-specific pattern from the plurality of embedding vectors ([0078]-[0081]: In particular embodiments, the capability of reasoning may enable the assistant system 140 to, for example… learn interaction patterns and preferences from users' historical behavior… The intent classifier 336b may then calculate probabilities of the user input being associated with different predefined intents based on a vector comparison); and
create a predictive association based on the user-specific pattern ([0078]: In particular embodiments, the capability of reasoning may enable the assistant system 140 to… learn interaction patterns and… generate highly predictive proactive suggestions based on micro-context understanding).
Regarding Claim 9, Khemka discloses the apparatus of claim 8, where the user device is constrained by real-time scheduling during online operation, and where the processor executes the instructions with best-effort scheduling ([0068]: In particular embodiments, multiple device-specific implementations (e.g., real-time calls for a client system 130 or a messaging application on the client system 130) may be handled internally by a single agent 228a; [0093]: Context tracking may comprise aggregating real-time stream of events into a unified user state. Interaction management may comprise selecting optimal action in each state).
Regarding Claim 10, Khemka discloses the apparatus of claim 8, where the plurality of user context comprises instantaneous user context captured at specific time instants (Figs. 1-2; [0062]: In particular embodiments, the dialog state tracker 218a may track state changes over time as a user interacts with the world and the assistant system 140 interacts with the user; [0068]: (e.g., real-time calls for a client system 130 or a messaging application on the client system 130) may be handled internally by a single agent 228a).
Regarding Claim 11, Khemka discloses the apparatus of claim 10, where the user-specific pattern is identified based on a temporal pattern ([0088]: For each entity, the entity resolution module 212 may employ matching similarly to how friends are matched (i.e., phonetic). In particular embodiments, scoring may comprise a temporal decay factor associated with a recency with which the name was previously mentioned).
Regarding Claim 12, Khemka discloses the apparatus of claim 8, where the plurality of user context comprises persistent user context that is retrieved from a user-specific database (Fig. 4; [0112]: The task tracker 410 may track the task state associated with an assistant task. In particular embodiments, a task state may be a data structure persistent cross interaction turns and updates in real time to capture the state of the task during the whole interaction).
Regarding Claim 13, Khemka discloses the apparatus of claim 12, where the instructions further cause the processor to store the predictive association within the user-specific database ([0037]-[0038]: The social-networking system 160 may generate, store, receive, and send social-networking data, such as, for example, user profile data, concept-profile data, social-graph information, or other suitable data related to the online social network… [0038]: In particular embodiments, the social-networking system 160 may store one or more social graphs in one or more data stores 164; [0109]: In particular embodiments, the personalized language model may also be used to predict what words a user is most likely to say given a context).
Regarding Claim 14, Khemka discloses the apparatus of claim 8, where the predictive association comprises a trigger condition and a response, and where the instructions further cause the processor to configure the user device to execute the response responsive to the trigger condition ([0109]-[0110]: In particular embodiments, the personalized language model may also be used to predict what words a user is most likely to say given a context… These updates may be consumed by the dialog manager 216 to trigger proactive actions based on context… As an example and not by way of limitation, receiving a message may be a social event, which may trigger the task of reading the message to the user).
Regarding Claim 15, Khemka discloses a method, comprising:
obtaining a first set of user context and a second set of user context, where the first set of user context and the second set of user context have a generic association strength (Fig. 1; [0253]: The assistant system 140 may first store user typed data on-device… The assistant system 140 may then use text classification model to classify the historical text into buckets that correspond to the static smart-reply candidates);
identifying a user-specific predictive association between the first set of user context and the second set of user context ([0109]: the personalized language model may also be used to predict what words a user is most likely to say given a context; [0253]: This may create a parallel corpus of data that looks like, e.g., “Yeah!” to “yes”);
creating a user-specific association strength, a real-time trigger condition, and a real-time response, based on the user-specific predictive association ([0253]: The assistant system 140 may then fine-tune the sequence-to-sequence text style transfer model on the parallel corpus on-device. The assistant system 140 may then use the trained model to convert the smart-reply candidates into typing style of the user); and
updating the user-specific association strength, the real-time trigger condition, or the real-time response, based on a real-time trigger event ([0112]: In particular embodiments, a task state may be a data structure persistent cross interaction turns and updates in real time to capture the state of the task during the whole interaction; [0253]: The assistant system 140 may further map the static reply to the newly generated style transfer replies if they are semantically similar).
Regarding Claim 16, Khemka discloses the method of claim 15, where the first set of user context comprise labels from image-to-text analysis of images captured with the second set of user context (Fig. 1: [0076]: In particular embodiments, the assistant system 140 may use supplemental signals such as, for example, optical character recognition (OCR) of an object's labels).
Regarding Claim 17, Khemka discloses the method of claim 15, where the first set of user context comprise labels from speech-to-text analysis of vocal instructions with the second set of user context (Figs. 1-2; [0047]: For example, the client system 130 and the server associated with assistant system 140 may both perform automatic speech recognition (ASR) and natural-language understanding (NLU) processes… The ASR module 208a may allow a user to dictate and have speech transcribed as written text).
Regarding Claim 18, Khemka discloses the method of claim 15, where the first set of user context are retrieved from cached history data ([0253]: the static smart-reply candidates) and the second set of user context are captured in real-time ([0253]: The assistant system 140 may first store user typed data on-device with a text-to-language (TTL) of a period of days (e.g., 90 days)).
Regarding Claim 19, Khemka discloses the method of claim 15, where the user-specific predictive association is identified in high dimensional space at best-effort ([0256]: For example, the user typed “No sweat” so the assistant system 140 may find what is the best reply in a list of candidates which closely matches).
Regarding Claim 20, Khemka discloses the method of claim 15, where the user-specific association strength is updated at best-effort from a plurality of previously captured real-time trigger events ([0256]: The assistant system 140 may perform the most semantically similar matching with available smart-reply candidates, e.g., based on each of the explicitly typed keys. For example, the user typed “No sweat” so the assistant system 140 may find what is the best reply in a list of candidates which closely matches).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIRLEY D. HICKS whose telephone number is (571)272-3304. The examiner can normally be reached Mon - Fri 7:30 - 4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones can be reached on (571) 272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.D.H./Examiner, Art Unit 2168
/CHARLES RONES/Supervisory Patent Examiner, Art Unit 2168