DETAILED ACTION
This office action is responsive to communication(s) filed on 3/5/2026.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/9/2026 has been entered.
Claims Status
Claims 1-2 and 4-20 are pending.
Claims 1 and 8 are independent.
Claims 1-2 and 4-7 are currently being examined.
Claims 8-20 are withdrawn as being directed to a nonelected invention.
Claim 3 is newly canceled.
Claim 1 is newly amended.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-2 and 4-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites “a machine learning engine comprising at least one sentence extraction model” and then also recites “executing, by a sentence extraction model in communication with the machine learning engine, a language model”. It is therefore unclear whether the machine learning engine “comprises” the sentence extraction model or whether the sentence extraction model is merely “in communication with” the machine learning engine. For purposes of compact prosecution only, the examiner interprets the limitation(s) as being directed to a machine learning engine which includes a sentence extraction model. Correction is required.
Claims 2 and 4-7 are also rejected, as they depend upon claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 and 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over Grieves; Jason et al. (hereinafter Grieves – US 20130339283 A1) in view of Shevchenko; Oleksiy et al. (hereinafter Shevchenko – US 10594757 B1).
Independent Claim 1:
Grieves teaches:
A computer-implemented method for generating and providing a user input recommendation, the method comprising:
receiving, by a recommendation generation engine (prediction generator 140), an indication that a message addressed to a user has been received; (based on a text received by a user [an indication that a message addressed to a user has been received], a candidate prediction generator generates string predictions, ¶ 33 and fig. 2. The text is message received via messaging apps like SMS or chat apps, ¶ 23)
analyzing, by the recommendation generation engine, at least one word within the message, wherein analyzing further comprises executing a[n] …engine comprising at least one sentence extraction model; (the received text string 230 [at least one word within the message] is used by [analyzing] the generator to generate the predictions, ¶ 33, including sentences, ¶ 25)
determining, by the […] engine, based upon previous user input of the user and on the analyzing of the at least one word, a type of message of the received message; (determining a message type by using an engine to analyze current message words against historical user data, allowing the system to identify the semantic intent (e.g., a "question" [type of message of the received message]) and suggest an appropriate, previously used response, as described in ¶¶ 28 and 82. A question is widely recognized as a type of communication/message because it is a fundamental tool for establishing interaction, gathering information, and building relationships between people. Unlike a statement, which transmits information, a question acts as a "pull" mechanism—eliciting a response and engaging the other person in a two-way dialogue)
executing, by a sentence extraction model in communication with the […] engine, a language model for analyzing at least one message composed and sent out by the user, […]; (a candidate prediction generator, functioning as a language model engine, analyzes received user-composed text to produce context-aware string predictions based on historical mapping, ¶ 82. The system analyzes historical "conversations that a user had via SMS applications, chat applications, and/or email applications," ¶¶ 26 and 62, which confirms the capture and analysis of messages previously composed and sent out by the user. For purposes of compact prosecution only, the examiner interprets the limitation(s) as being directed to an engine which includes a sentence extraction model.)
extracting, by the… engine, text from a previously generated user input set used by the user in responding to [a] type of message, responsive to the execution of the language model; (the candidate prediction generator 140 extracts specific phrases from "historical information"—which constitutes a previously generated user input set—to populate predictions based on matching the current incoming message, as demonstrated by the system recognizing "How are you?" [a question – type of message] to suggest a previously used response, ¶¶ 28 and 82)
generating… a plurality of candidate input recommendations, (generate string predictions [a plurality of candidate input recommendations], ¶ 33)
[…];
identifying, by the recommendation generation engine, that the …[candidates are] associated with a confidence score that satisfies a threshold level of confidence; (the number of candidates selected and displayed to a user is limited to those that meet a minimum confidence threshold, ¶¶ 29 and 43 and fig. 3)
and modifying, by the recommendation generation engine, a graphical user interface displayed to the user to include a display of the at least one [candidate] associated with the confidence score that satisfies a threshold level of confidence. (the selected string predictions are placed [modifying] in a user interface, ¶ 49 and figs. 4A-B)
Grieves does not appear to expressly teach, but Shevchenko teaches:
that the engine comprises “a machine learning” engine (AI [artificial intelligence], cols 46:66-47:10; utilizes machine learning techniques to classify communications, cols 57:63-58:24. Machine learning models take into consideration past communication content, such as context, including use cases, and generate multiple predictions/suggestions [candidates] for responses, cols 61:62-62:41 and 66:27-61)
the language model selected based upon the type of message of the received message; (“The models may be either general-purpose or context-, receiver- or reaction-specific. E.g., context-specific models may be trained on reaction data sets consisting of records of communication acts in a certain use case.”, col 67:15-19, and contextual information includes message/communication type, col 81:25-27)
the plurality including at least one template for use in generating a response to the message, the template including the extracted text; (Different communication templates are used for different use-cases, such as communication types, e.g., introductions, greetings, invitations, etc., col 63:6-45. A question is widely recognized as a type of communication [“communication type”] because it is a fundamental tool for establishing interaction, gathering information, and building relationships between people. Unlike a statement, which transmits information, a question acts as a "pull" mechanism—eliciting a response and engaging the other person in a two-way dialogue)
that the identifying of the candidates associated with the confidence score is of “at least one template” (Shevchenko teaches suggestions based on communication-type templates, col 63:6-45; Grieves teaches that the candidates selected and displayed to a user are limited to those that meet a minimum confidence threshold, ¶¶ 29 and 43 and fig. 3)
and that the displaying is of the at least one “template” (Shevchenko teaches suggestions based on communication-type templates, col 63:6-45; Grieves teaches that the candidates selected and displayed to a user are limited to those that meet a minimum confidence threshold, ¶¶ 29 and 43 and fig. 3)
Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method of Grieves to include that the engine comprises “a machine learning”, the language model selected based upon the type of message of the received message, the plurality including at least one template for use in generating a response to the message, the template including the extracted text, that the identifying of the candidates associated with the confidence score is of “at least one template”, and that the displaying is of the at least one “template”, as taught by Shevchenko.
One would have been motivated to make such a combination in order to improve the efficiency and effectiveness of communications afforded by the method, Shevchenko col 55:19-41, e.g., by applying an adaptive prediction model that provides contextual insights and can learn and improve over time, Shevchenko col 48:19-39.
Claim 2:
The rejection of claim 1 is incorporated. Grieves further teaches:
further comprising analyzing, by a sentence extraction model in communication with the recommendation generation engine, at least one message previously addressed to the user. (the generator includes one or more models, and the string prediction may include complete sentences [sentence extraction model], ¶ 62, and the prediction may be based on historical information of text received by one or more users [at least one message previously addressed to the user])
Claim 4:
The rejection of claim 1 is incorporated. Grieves teaches:
wherein determining the plurality of candidate input recommendations further comprises determining a plurality of candidate input recommendations for use in composing a response to the received message. (the candidate predictions are suggested responses to the received message, Abstract)
Claim 5:
The rejection of claim 1 is incorporated. Grieves further teaches:
wherein modifying further comprises prepopulating the graphical user interface with a pre-written message responding to the received message. (the suggested string phrases are provided before characters are provided by the user [pre-written message responding to the received message], ¶ 6)
Claim 6:
The rejection of claim 1 is incorporated. Grieves further teaches:
wherein determining further comprises identifying a template for use in prepopulating, in conjunction with at least one of the plurality of candidate input recommendations, the graphical user interface with a pre-written message responding to the received message. (suggested strings are populated according to display parameters 350 [a template], ¶¶ 48 and 50. The suggested string phrases are provided “before” characters are provided by the user [a pre-written message responding to the received message], ¶ 6)
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Grieves (US 20130339283 A1) in view of Shevchenko (US 10594757 B1), as applied to claim 1 above, and further in view of Celik; Feyzi et al. (hereinafter Celik – US 20160127534 A1).
Claim 7:
The rejection of claim 1 is incorporated. Grieves further teaches:
wherein identifying further comprises identifying, by the recommendation generation engine, prior to displaying the received message to the user, a subset of the plurality of candidate input recommendations, each of the subset associated with a confidence score that satisfies a threshold level of confidence. (the total number of predictions are reduced to a minimum subset of predictions that meet a confidence threshold, ¶ 29)
Grieves does not appear to expressly teach, but Celik teaches:
that the identifying is “prior to displaying the received message to the user” (a process wherein a message is intercepted before reaching its destination [prior to displaying the received message to the user], calculates a list of predictive message responses that corresponds directly to the message that was received, then forwards a package containing the list of predictive message responses and the message to the destination, ¶ 205 and fig. 6B).
Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method of Grieves to include that the identifying is “prior to displaying the received message to the user”, as taught by Celik.
One would have been motivated to make such a combination in order to improve the efficiency of the process and enable users to more quickly send messages to each other, Celik ¶ 97.
Response to Arguments
The previous objection to claim 1 is withdrawn in view of the claim amendment.
Applicant's 103 arguments have been fully considered but they are not persuasive.
First, the applicant alleges that Grieves does not suggest extracting text from previously generated user input used in responding to messages of the same type as the received message. Remarks Page 7.
The examiner respectfully disagrees. As explained above, the candidate prediction generator 140 extracts specific phrases from "historical information"—which constitutes a previously generated user input set—to populate predictions based on matching the current incoming message, as demonstrated by the system recognizing "How are you?" [a question – type of message] to suggest a previously used response, ¶¶ 28 and 82.
Second, the applicant argues that Grieves and Shevchenko fail to teach that templates are selected based on the type of the currently received message and populated with text from previously generated input. Remarks Pages 7-8.
The examiner respectfully disagrees. Grieves is not relied on to teach templates. However, it does teach populating/generating recommendations based on past input, as explained in the Response to Arguments and § 103 rejection sections above. Furthermore, Grieves in view of Shevchenko teaches the template-related limitations. As explained above, Shevchenko teaches that different communication templates are used for different use cases, such as communication types, e.g., introductions, greetings, invitations, etc., col 63:6-45. A question is widely recognized as a type of communication [“communication type”] because it is a fundamental tool for establishing interaction, gathering information, and building relationships between people. Unlike a statement, which transmits information, a question acts as a "pull" mechanism—eliciting a response and engaging the other person in a two-way dialogue.
Third, the applicant relies on the argument(s) above to allege patentability of the remaining claims. Remarks Pg(s) 8-9.
The examiner respectfully disagrees for the reason(s) presented above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Below is a list of these references, including why they are pertinent:
Kandur Raja; Barath Raj et al. US 20170351342 A1, pertinent to claim 1 for disclosing that an LM 142 and response predictor 140 analyze user input features (unigrams, bigrams, trigrams) combined with the contextual category of a received message to suggest relevant, context-aware, follow-up responses like "apologize" or "apology", ¶ 130 and fig. 6B.
Pham; Hung (US 11303590 B2), pertinent to claim 1 for disclosing providing a suggested response for a received message based on semantic concepts, col 23:27-40 and fig. 4.
Zhao; Bing et al. (US 10721190 B2), pertinent to claim 1 for disclosing generating recommended responses based on a received message and the historical message data, cols 6:48-7:4.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GABRIEL S MERCADO whose telephone number is (408)918-7537. The examiner can normally be reached Mon-Fri 8am-5pm (Eastern Time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kieu Vu, can be reached at (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Gabriel Mercado/ Primary Examiner, Art Unit 2171