DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 18 is objected to because of the following informalities: in claim 18, “receive, a first time” should be amended to “receive, at a first time”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6, 8-11, 14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Coker (US 8370143) in view of Tadayon et al. (US 2013/0166657).
Claim 1
Coker teaches a method comprising: receiving, by a computing system and at a first time, a first portion of text of a first electronic message (for example, 306 of Fig. 3);
predicting, by the computing system and based on the first portion of text of the first electronic message, a first candidate portion of computer-generated text (Col. 13, lines 5-8, The text processing system 640 may include a text suggester computing subsystem 646 that uses the received text to provide text suggestions.);
outputting, for display at a second time, the predicted first candidate portion of computer-generated text in the first electronic message (options 304 of Fig. 3);
receiving, by the computer system and at a third time, a second portion of text of the first electronic message (for example, continuing the Fig. 3 example "We are planning to go at noon", the second portion could be “to” etc. following the word “planning”. Col. 6, lines 53-55; note the claim does not specify that the third time is after the second time);
determining, by the computing system and at a fourth time that is after the third time, whether the first electronic message is directed to a sensitive topic based on a modification to the first electronic message performed between the third time and the fourth time (for example, continuing with the message above "We are planning to go at noon" or any message the user enters, and determining that the text randomness exceeds the threshold 413 of Fig. 4; Col. 4, lines 45-48, some non-prose text is associated with greater randomness in comparison to the randomness of prose text (e.g., ordinary English conversation); Col. 1, lines 25-28, The textual content may include non-prose text (e.g., credential data that includes a seemingly random collection of alphanumeric characters).),
wherein determining whether the first electronic message is directed to a sensitive topic comprises determining that the first electronic message is directed to a sensitive topic using a machine learning model (Col. 4, lines 23-24, a learning system may, over time, promote such selected suggestions over unselected suggestions.); and
responsive to determining that the first electronic message is directed to a sensitive topic, refraining from outputting subsequent candidate portions of computer-generated text related to the sensitive topic in the first electronic message (416 of Fig. 4; Col. 10, lines 10-15, If the randomness value exceeds the randomness threshold (e.g., if the value is above the threshold value) then the portion of text is not sent to the text processing service (box 416). In other words, a transmission of the portion of text to the text processing service may be cancelled or prevented. Col. 10, lines 28-34, The text that is sent to the text processing system…may be used as input to a runtime process of the text processing system. Such runtime processes may include word suggestion for partially completed words; Examiner notes if the text is associated with a sensitive topic (high randomness), then by NOT forwarding it to the text processing service, subsequent suggestions for the portion of text are prevented).
However, Coker may not clearly detail determining that the first electronic message is directed to a sensitive topic using a machine learning model.
Tadayon teaches an email suggestion system that determines that the first electronic message is directed to a sensitive topic using a machine learning model ([0100], together with some other emails (which are already in text format) go through pattern recognition and matching modules (with optional training module, using neural network and Fuzzy logic modules), to use keywords (e.g. "Confidential" or "private" or "secret") and flags or tags (e.g. identify the email as confidential, in the header, subject line, or as a flag turned on, as a property of the email), as the indicators, to distinguish the confidential emails (or the like).)
Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate keywords as well as a neural network to determine types of content as taught by Tadayon with the word replacement method of Coker, because doing so would have provided a way to warn users of the nature of a message automatically based on confidentiality indicators ([0099] of Tadayon).
Claim 2
Coker of the combination teaches the method of claim 1, wherein the electronic message comprises a chat conversation (Fig. 3).
Claim 3
Coker of the combination teaches the method of claim 1, wherein predicting the first candidate portion of computer-generated text comprises predicting the first candidate portion of computer-generated text using a machine learning model (Col. 4, lines 23-24, a learning system may, over time, promote such selected suggestions over unselected suggestions.).
Claim 6
Coker of the combination teaches the method of claim 1, further comprising: performing one or both of spelling or grammar correction on the first portion of text, wherein predicting the first candidate portion of text comprises: predicting, based on the corrected first portion of text, the first candidate portion of text (Col. 6, lines 38-40, The processing by the text processing system may include execution of a spell checking procedure).
Claim 8
Coker of the combination teaches the method of claim 1, wherein determining that the electronic message is directed to a sensitive topic comprises: determining, based on one or both of content of the electronic message or header fields of the electronic message, that the electronic message is directed to a sensitive topic (Abstract: A computing system receives text that represents content input by a user. A computing system determines a randomness level for a portion of the text. Col. 1, lines 23-31, Textual content specified by the user input may be placed into a document during document composition, or into a field of a web form. The textual content may include non-prose text (e.g., credential data that includes a seemingly random collection of alphanumeric characters)…Further, the non-prose text may include information that the user may not want to share with such text processing services. Col. 4, lines 45-48, some non-prose text is associated with greater randomness in comparison to the randomness of prose text (e.g., ordinary English conversation);).
Claims 9-11
These claims recite substantially the same limitations as those provided in claims 1-3 above, and therefore they are rejected for the same reasons.
Claim 14
This claim recites substantially the same limitations as those provided in claim 6 above, and therefore it is rejected for the same reasons.
Claim 16
This claim recites substantially the same limitations as those provided in claim 8 above, and therefore it is rejected for the same reasons.
Claim 17
Tadayon of the combination teaches the computing system of claim 16, wherein, to determine that the electronic message is directed to a sensitive topic, the one or more processors are configured to: determine, based on the content of the electronic message and with a ML model, whether the electronic message is directed to a sensitive topic ([0100], together with some other emails (which are already in text format) go through pattern recognition and matching modules (with optional training module, using neural network and Fuzzy logic modules), to use keywords (e.g. "Confidential" or "private" or "secret") and flags or tags (e.g. identify the email as confidential, in the header, subject line, or as a flag turned on, as a property of the email), as the indicators, to distinguish the confidential emails (or the like).)
Claim 18
This claim recites substantially the same limitations as those provided in claim 1 above, and therefore it is rejected for the same reasons.
Claim 19
This claim recites substantially the same limitations as those provided in claim 17 above, and therefore it is rejected for the same reasons.
Claim 20
This claim recites substantially the same limitations as those provided in claim 2 above, and therefore it is rejected for the same reasons.
Claims 4-5 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Coker (US 8370143) in view of Tadayon et al. (US 2013/0166657) and Sapoznik et al. (US 9,805,371).
Claim 4
Coker teaches the method of claim 1, wherein predicting the first candidate portion of text comprises: predicting, by the computing system and using a machine learning model, one or more candidate portions of text to follow the first portion of text (See Fig. 3 showing suggestions 304 following the first portion of text “plan”; Col. 4, lines 23-24, a learning system may, over time, promote such selected suggestions over unselected suggestions.),
the one or more candidate portions of text including the first candidate portion of computer-generated text (for example, “planning” in 304 of Fig. 3).
However, Coker in view of Tadayon does not clearly detail wherein the first candidate portion of computer-generated text includes a token; and modifying the first candidate portion of computer-generated text to generate a modified first candidate portion of text by replacing the token with text determined based on one or both of context of the first electronic message or information about a user editing the first electronic message, wherein outputting the predicted first candidate portion of computer-generated text comprises outputting, for display, the modified first candidate portion of text.
Sapoznik teaches wherein the first candidate portion of computer-generated text includes a token; and modifying the first candidate portion of computer-generated text to generate a modified first candidate portion of text by replacing the token with text determined based on one or both of context of the first electronic message or information about a user editing the first electronic message (Col. 28, lines 42-52, In some implementations, the suggested responses may include tokens that indicate types of information to be inserted. For example, possible tokens may indicate the name, gender, address, email address, or phone number of the customer. These tokens may be indicated using special symbols, such as “>name<” for the customer's name. Where a suggested response includes such a token, a post-processing operation may be performed to replace the token with the corresponding information about the customer. For example, a token “>name<” may be replaced with the customer's name before suggesting the response to the CSR. Col. 40, lines 26-34, For this example, suggested response 1821 also includes information about the customer, the customer's email address. A suggested response may include a special token that indicates a particular type of information, and the token may be replaced by the corresponding information about the customer. For example, a suggested response may include a token “>email address<” and in presenting the suggested response to the CSR, the special token may be replaced with the actual email address of the customer.),
wherein outputting the predicted first candidate portion of computer-generated text comprises outputting, for display, the modified first candidate portion of text (1821 of Fig. 18D).
Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate the utilization of tokens in suggestions as taught by Sapoznik with the word replacement method of the Coker combination, because doing so would have provided additional features in suggesting responses so that more relevant responses may be suggested (Col. 40, lines 23-25 of Sapoznik).
Claim 5
The combination teaches the method of claim 4, further comprising: receiving a corpus of text; modifying the corpus of text to generate a modified corpus of text by replacing fields in the corpus with corresponding tokens; and training the machine learning model using the modified corpus of text (Col. 4, lines 20-24 of Coker, For example, selections that a user makes in response to suggestions from the system can be used to infer that the selected suggestion is a better suggestion than unselected suggestions, and a learning system may, over time, promote such selected suggestions over unselected suggestions; Col. 28, lines 54-63 of Sapoznik, For example, existing customer support session logs may be used to train these parameters. For example, an RNN and/or a logistic regression classifier may be trained by minimizing the cross entropy between the negative log likelihood of the training corpus and encoded word input using stochastic gradient descent. Examiner also notes such a description of training the machine learning model using a modified corpus of text including tokens in place of replaced fields is not explicitly described in the specification.).
Claims 12-13
These claims recite substantially the same limitations as those provided in claims 4-5 above, and therefore they are rejected for the same reasons.
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Coker (US 8370143) in view of Tadayon et al. (US 2013/0166657) and Jacobsen et al. (US 2005/0154692).
Claim 7
Coker in view of Tadayon teaches the method of claim 1, but does not clearly detail wherein sensitive topics include one or more of death, funeral, crime, job loss, job rejection, and academic rejection.
Jacobsen teaches in [0042], “In the example embodiment of FIG. 1, the predictive model 150 is used to predict the likelihood of a loan default by a consumer, using the unstructured content 110 records associated with a consumer as an input. The model 150 would learn which tokens (or features) in the unstructured content were predictive of default, such as "bankruptcy," "death," "illness", along with each token's associated weight or importance by using a set of training records in a supervised learning algorithm.”
Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate the prediction model of Jacobsen with the word replacement method of the Coker combination, because doing so would have provided the ability to selectively determine which content transformation rules to apply to input data in predictive modeling systems based on the rules' likelihood of improving the predictive model on new data ([0008] of Jacobsen).
Claim 15
This claim recites substantially the same limitations as those provided in claim 7 above, and therefore it is rejected for the same reasons.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over the claims of U.S. Patent No. 11,755,834. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant claims are generally broader than the patented claims. See the sample comparison provided below:
Instant Application, Claim 1:
A method comprising:
receiving, by a computing system and at a first time, a first portion of text of a first electronic message;
predicting, by the computing system and based on the first portion of text of the first electronic message, a first candidate portion of computer-generated text; outputting, for display at a second time, the predicted first candidate portion of computer-generated text in the first electronic message;
receiving, by the computer system and at a third time, a second portion of text of the first electronic message;
determining, by the computing system and at a fourth time that is after the third time, whether the first electronic message is directed to a sensitive topic based on a modification to the first electronic message performed between the third time and the fourth time, wherein determining whether the first electronic message is directed to a sensitive topic comprises determining that the first electronic message is directed to a sensitive topic using a machine learning model; and responsive to determining that the first electronic message is directed to a sensitive topic, refraining from outputting subsequent candidate portions of computer-generated text related to the sensitive topic in the first electronic message.
US 11,755,834, Claim 1 (Currently amended): A method comprising: receiving, by a computing system and at a first time, a first portion of text of a body of a first e-mail message being edited;
predicting, by the computing system and based on the first portion of text of the first e-mail message, a first candidate portion of text to follow the first portion of text of the first e-mail message; outputting, for display at a second time, the predicted first candidate portion of text for optional selection to append to the first portion of text of the first e-mail message; selectively appending, based on user input and at a third time, the predicted first candidate portion of text to the first portion of text of the first e-mail message;
determining, by the computing system and at a fourth time that is after the third time, whether the first e-mail message is directed to a sensitive topic based on a modification to the body of the first e-mail message performed between the third time and the fourth time, wherein determining whether the first e-mail message is directed to a sensitive topic comprises determining that the first e-mail message is directed to a sensitive topic using a machine learning model; responsive to determining that the first e-mail message is directed to a sensitive topic, refraining from outputting subsequent candidate portions of text for optional selection to append to text in the first e-mail message; responsive to determining that the first e-mail message is not directed to a sensitive topic, outputting, between the fourth time and a fifth time that is after the fourth time, subsequent candidate portions of text for optional selection to append to text in the first e-mail message; sending, at the fifth time, the first e-mail message, wherein refraining from outputting subsequent candidate portions of text for optional selection to append to text in the first e-mail message comprises refraining, between the fourth time and the fifth time, from outputting subsequent candidate portions of text for optional selection to append to text in the first e-mail message; receiving, by the computing system and at a sixth time that is after the fifth time, a first portion of text of a body of a second e-mail message being edited; predicting, by the computing system and based on the first portion of text of the second e-mail message, a first candidate portion of text to follow the first portion of text of the second e-mail message; and outputting, for display at a seventh time that is after the sixth time, the predicted first candidate portion of text for optional selection to append to the first portion of text of the second e-mail message.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS H MAUNG whose telephone number is (571)270-5690. The examiner can normally be reached Monday-Friday, 9am-6pm, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn R. Edwards, can be reached at (571) 270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THOMAS H MAUNG/Primary Examiner, Art Unit 2692
/CAROLYN R EDWARDS/Supervisory Patent Examiner, Art Unit 2692