Prosecution Insights
Last updated: April 19, 2026
Application No. 18/587,369

SYSTEMS AND METHODS OF ARTIFICIALLY INTELLIGENT SENTIMENT ANALYSIS

Final Rejection (§103, Double Patenting)
Filed: Feb 26, 2024
Examiner: CASTILLO-TORRES, KEISHA Y
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Early Warning Services LLC
OA Round: 4 (Final)

Grant Probability: 74% (Favorable)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 3y 0m
Grant Probability with Interview: 99%
Examiner Intelligence

Career Allow Rate: 74% (above average; 80 granted / 108 resolved; +12.1% vs TC avg)
Interview Lift: +30.5% (strong; among resolved cases with interview)
Typical Timeline: 3y 0m average prosecution; 32 applications currently pending
Career History: 140 total applications across all art units
Statute-Specific Performance

§101: 26.2% (-13.8% vs TC avg)
§103: 42.9% (+2.9% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)

Deltas are relative to estimated Tech Center averages; based on career data from 108 resolved cases.
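The headline figures above are internally consistent and can be sanity-checked with a few lines of arithmetic. The implied Tech Center baselines below are derived from the stated deltas, not reported directly:

```python
# Career allow rate: 80 granted out of 108 resolved cases.
career_allow_rate = 80 / 108
assert round(career_allow_rate * 100) == 74  # matches the reported 74%

# The "+12.1% vs TC avg" delta implies a Tech Center baseline near 62%.
implied_tc_avg = round(career_allow_rate * 100 - 12.1, 1)  # -> 62.0

# Each statute-specific delta implies a per-statute TC baseline the same way,
# e.g. a 42.9% allowance rate under §103 at +2.9% implies a ~40.0% TC average.
implied_103_baseline = round(42.9 - 2.9, 1)
assert implied_103_baseline == 40.0
```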

Office Action

§103 §DP
DETAILED ACTION

This communication is in response to the Amendments and Arguments filed on 05/16/2025. Claim 23 has been newly added by the Applicant. Claims 1, 3, 7-8, 10, 14-15, 17, and 20-21 have been canceled by the Applicant. Claims 2, 4-6, 9, 11-13, 16, 18-19, and 22-23 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments and Amendments

Amendments to the claims by the Applicant have been considered and addressed below. With respect to the Claim Objections and the 35 USC § 103 rejections, the Applicant provides several arguments, to which the Examiner responds below.

Claim Objection(s)

Arguments on page 8 of Remarks filed on 12/08/2025.

Examiner’s Response to Arguments: Applicant’s arguments with respect to the Claim Objections have been fully considered and are persuasive. The Claim Objection to independent claim 22 has been withdrawn.

35 USC § 103 rejection(s)

Arguments on pages 8-10 of Remarks filed on 12/08/2025.

Examiner’s Response to Arguments: Applicant’s arguments with respect to claims 2, 9, and 16 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Hence, a new ground of rejection for independent claims 2, 9, and 16 has been made in view of Fang et al. (US 10037491 B1) further in view of L'Huillier et al. (US 9317566 B1) and Reddy et al. (US 20150199967 B1), where the limitations argued not to be taught by Fang et al. (i.e., associated with the unrecognized polarity) are considered to be taught by Reddy et al. Please refer to the updated 35 U.S.C. § 103 rejections for claims 2, 9, and 16, below.
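For orientation, the "unrecognized polarity" limitation at the center of the dispute describes a simple lookup-with-fallback scheme. A minimal sketch of that claimed behavior follows; all names, phrases, and messages are hypothetical illustrations, not taken from the application:

```python
# Hypothetical sketch of the claimed behavior: polarity is looked up against
# previously labeled phrases, an unmatched input yields an "unrecognized"
# polarity, and the unrecognized case maps to a generic text-based message.
LABELED_PHRASES = {
    "great service": "positive",
    "terrible wait": "negative",
}

MESSAGES = {
    "positive": "Thanks for the kind words!",
    "negative": "Sorry to hear that - please contact support.",
    "unrecognized": "Thanks for your feedback.",  # generic, not tied to a polarity
}

def determine_polarity(input_phrase: str) -> str:
    """Return the stored polarity, or 'unrecognized' when no phrase matches."""
    return LABELED_PHRASES.get(input_phrase.lower().strip(), "unrecognized")

def select_message(input_phrase: str) -> str:
    """Pick the text-based message the website would be commanded to display."""
    return MESSAGES[determine_polarity(input_phrase)]
```

For example, `select_message("Great service")` returns the positive message, while any phrase absent from the table falls through to the generic one.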
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection.
A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 2, 4-6, 9, 11-13, 16, 18-19, and 22 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 4, 6, 9, 12, and 14 of U.S. Patent No. US 11941362 B2. Please see the claim mapping as well as the claim mappings for the individual claims in the tables below.

Claim mapping (instant application -> U.S. Application No. 16/859,301 (US 11941362 B2)):
  Claims 2, 9, and 16   ->  Claims 1, 6, and 12
  Claims 4, 11, and 18  ->  Claims 2, 9, and 14
  Claims 5, 12, and 19  ->  -
  Claims 6 and 13       ->  Claim 4

Instant Application, Claim 2:
2. (New) A method of providing sentiment analysis using a computer system comprising:
receiving, by a computer system an input phrase provided to the website through an interaction with an input device;
determining, by the computer system executing the backend service model a first polarity of the input phrase by comparing the input phrase to a plurality of phrases whose polarity has been previously identified, wherein the polarity is unrecognized when the input phrase does not match one or more phrases of the plurality of phrases; and
transmitting, by the computer system to the website a command that causes the website to display a text-based message that is selected based on the polarity of the input phrase.
training the computer system, a backend service model, to identify the polarity of the phrases.
wherein when the polarity of the input phrase is unrecognized, the text-based message comprises a generic text-based message that is not tied to a particular polarity.

Issued Patent, Claim 1 (U.S. Application No. 16/859,301, US 11941362 B2):
1. A method of providing sentiment analysis, comprising:
receiving, at the computer system, an input from a website, the input comprising a text string including an input phrase;
determining, by the computer system, the polarity of the sentiment of the input phrase by comparing the input phrase to the plurality of phrases, wherein the computer system determines that the polarity of the sentiment is unrecognized when the input phrase does not correspond to the one or more phrases of the plurality of phrases; and
sending, by the computer system, a command to the website that causes the website to display predetermined content based on the determined polarity of the sentiment of the phrase, wherein: the predetermined content comprises a predetermined text-based message that is selected based at least in part on the polarity of the sentiment of the input phrase;
training a computer system to identify a polarity of a sentiment for a plurality of phrases;
when the polarity of the sentiment of the input phrase is negative, the predetermined text-based message comprises contact information of an entity associated with the input phrase; and
when the polarity of the sentiment of the input phrase is unrecognized, the predetermined text-based message comprises a generic text-based message that is not tied to a particular polarity of sentiment.

*Note: Main differences between the instant application and the issued patent/application are underlined/strikethrough in the original document (the formatting is not reproduced in this text extraction).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2, 4, 9, 11, 16, 18 and 22-23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US 10037491 B1) further in view of L'Huillier et al. (US 9317566 B1) and Reddy et al. (US 20150199967 B1).

As to independent claims 2, 9, and 16, Fang et al. teaches:

2. (New) A method of providing sentiment analysis (see Col 2, line 20: “Context-based sentiment analysis is disclosed.”) comprising: training, by a computer system, a backend service model to identify a polarity of a plurality of phrases (see Col.
4, lines 22-43: “In this example, context-sensitive comment analyzer 200 includes an optional initial classifier 202, a context-sensitive comment data detector 204, a context type classifier 206, and a comment classifier comprising a positive model-based comment classifier 208 and a negative model-based comment classifier 210. […]” Col. 5, lines 29-38: “In some embodiments, a comment is preprocessed and divided into multiple pieces. Process 200 analyzes an individual piece of comment data. For example, in some embodiments, each sentence in a multi-sentence comment is a separate piece of data to be analyzed. Other ways of processing data can be used, such as grouping several individual sentences to form a piece of comment data, further segmenting a sentence into multiple pieces of sub-clauses and/or phrases, etc.” and Col. 8, line 57 - Col. 9, line 22: “In some embodiments, a context such as a question/posting/topic may not be manually classified ahead of time, and new context can be generated and used. Classifier 206 is implemented using a machine learning technique similar to what is described in FIGS. 3A-3B. A training set of manually labeled sample data (e.g., a set of sample survey questions manually classified using a context configuration tool described above) is used to train a learning machine model such as an SVM, a Bayesian classifier, a decision tree, etc. The inputs of the training set are features of the context information, and the outputs are the known classifications of the context (i.e., whether the context is positive sentiment-entailing, negative sentiment-entailing or neutral). The adaptation process is performed to train the learning machine model. The features can include various textual queues such as keywords or key phrases used in the question or topic. 
In some embodiments, a dictionary of keywords or key phrases that map individual entries to either a positive sentiment-entailing type or a negative sentiment-entailing type is pre-established, and N-gram analysis is used to extract the keywords or key phrases from the context. The extracted keywords or key phrases are looked up in the dictionary and scored (e.g., based on counts of these two types) to determine whether the context is positive sentiment-entailing or negative sentiment-entailing...”); receiving, by the computer system from a website, an input phrase provided to the website through an interaction with an input device (see Col 7, lines 28-49: “In some embodiments, the context is distinct from the comment being analyzed. For example, the context can include the text of a question (e.g., a survey question) made by a first user (e.g., a surveyor) and the comment can include the answer supplied by a second user (e.g., a respondent) in response to the question. For example, in response to the question of “What improvements would you like to see on your next visit?” a survey respondent supplies the answer “faster service.” The text of the question is the context in this example. In some embodiments, the context includes the text of a topic that is introduced on a blog, a social networking site, or a website, and the comment includes the text of the follow up postings that are made in response. For example, on a restaurant's website, a request is posted stating “Please send us your ideas for improvements,” and website users can send in comments such as “shorter delivery time,” “better online order forms,” etc., in response. The text of the initial request is the context in this example. As another example, the restaurant may post on their Facebook® page the same posting, and Facebook® users can make comments in response. 
The text of such postings is the context in these examples.”); determining, by the computer system executing the backend service model, a first polarity of the input phrase by comparing the input phrase to a plurality of phrases whose polarity has been previously identified (see Col. 5 lines 29-38 and Col. 5, lines 43-67: “In some embodiments, a comment is preprocessed and divided into multiple pieces. Process 200 analyzes an individual piece of comment data. For example, in some embodiments, each sentence in a multi-sentence comment is a separate piece of data to be analyzed. Other ways of processing data can be used, such as grouping several individual sentences to form a piece of comment data, further segmenting a sentence into multiple pieces of sub-clauses and/or phrases, etc. In some embodiments, the initial classification is performed using a conventional sentiment analyzer that makes an initial assessment of the sentiment associated with the piece of comment data. The conventional sentiment analyzer can include a static analyzer employing a standard model that is based on the text of the comment data only. The implementation of such a conventional sentiment analyzer is known to those skilled in the art. Initial Classifier 202 makes an initial determination of the sentiment of the piece of comment data as positive, negative, neutral, or mixed. For example, in an example standard model, if a sentence has certain words such as “excellent,” it is deemed to indicate a positive sentiment; if the sentence has certain words such as “poor,” it is deemed to indicate a negative sentiment; and if the sentence has certain words such as “average,” it is deemed to indicate a neutral sentiment. Moreover, a sentence that includes both “excellent” and “poor” (e.g., “The food was excellent but the service was poor.”) is deemed to indicate a mixed sentiment. In this example, a piece of comment data identified as positive, negative, or neutral is further processed. 
Specifically, the context of that data is identified by the context-sensitive comment data detector 204. A piece of comment data that is identified as mixed is not further processed and its final sentiment classification is mixed.”).

However, Fang et al. does not explicitly teach, but L'Huillier et al. teaches: transmitting, by the computer system to the website, a command that causes the website to display a text-based message that is selected based on the polarity of the input phrase (see Figs. 8A-8B (“Great for pasta Average Sentiment score 8/10”) and Col. 4, lines 8-21 and Col. 20, lines 59-67: “As used herein, the term “consumer interface” refers to any digitally rendered user interface displayed on a visual display device for enabling a consumer to interface with a promotion and marketing service. An exemplary consumer interface may enable a consumer to view one or more promotions, purchase one or more promotions, share one or more promotions with other consumers, receive messages and/or promotions from other consumers, receive messages from the promotion and marketing service, and the like. Exemplary consumer interfaces may be rendered in any desired form including, but not limited to, as a mobile application for display on a mobile computing device (e.g., a smartphone), a webpage or website for display on a mobile or non-mobile computing device via the Internet, and the like. […] FIGS. 8A and 8B illustrate exemplary user interfaces 800a and 800b, respectively, each recommending a particular merchant to a consumer and including an attribute descriptor for the merchant and an associated overall sentiment score determined based on consumer reviews for that merchant (e.g., by averaging sentiment scores for the consumer reviews for that merchant). For example, FIG. 8A indicates an Italian restaurant, an attribute descriptor of “pasta” and an overall sentiment score of “8 out of a total of 10.” FIG.
8B indicates a Japanese restaurant, an attribute descriptor of “sushi” and an overall sentiment score of “9 out of a total of 10.””)

Fang et al. and L'Huillier et al. are both considered to be analogous to the claimed invention because they are in the same field of endeavor in sentiment analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fang et al. to incorporate the teachings of L'Huillier et al. of transmitting, by the computer system to the website, a command that causes the website to display a text-based message that is selected based on the polarity of the input phrase, which provides the benefit of enabling a consumer to interface with a promotion and marketing service (Col. 4, lines 8-21 of L'Huillier et al.).

However, Fang et al. in combination with L'Huillier et al. do not explicitly teach, but Reddy et al. does teach: wherein the polarity is unrecognized when the input phrase does not match one or more phrases of the plurality of phrases (see Fig. 1 (136: query understanding and response system) and ¶ [0010 and 0066]: “[0010] In accordance with another implementation, causing the response to the utterance to be generated includes matching the utterance to a particular utterance type within a hierarchical tree of utterance types, each utterance type in the hierarchical tree of utterance types having one or more responses associated therewith, and selecting the response to the utterance from among the response(s) associated with the particular utterance type. [0066] However, if query understanding and response system 136 fails to match the recognized or corrected utterance to any of the tasks within the predefined set, then query understanding and response system 136 may further analyze the words of the utterance to determine how such utterance should be handled thereby.
For example, query understanding and response system 136 may determine that the utterance should be handled by conducting a Web search or by offering the user with an opportunity to conduct a Web search. In this case, the utterance may be handled by specialized logic for facilitating Web searching that is internal and/or external to query understanding and response system 136. Alternatively, query understanding and response system 136 may determine based on an analysis of the words of the utterance that the utterance comprises a chit-chat type utterance, which as noted above is an utterance intended to engage with a persona of digital personal assistant 130.”)

wherein when the polarity of the input phrase is unrecognized (see Fig. 1 (136: query understanding and response system) and ¶ [0010 and 0066] citations as in limitation above: “[0066] However, if query understanding and response system 136 fails to match the recognized or corrected utterance to any of the tasks within the predefined set, then query understanding and response system 136 may further analyze the words of the utterance to determine how such utterance should be handled thereby. […] Alternatively, query understanding and response system 136 may determine based on an analysis of the words of the utterance that the utterance comprises a chit-chat type utterance, which as noted above is an utterance intended to engage with a persona of digital personal assistant 130.” and further ¶ [0074]: “The foregoing approach to identifying suitable responses to chit-chat type utterances is advantageous in that it allows responses to be defined for both broad groups of chit-chat type utterances as well as more narrow groups within the broader groups.
By way of example, for the node "Microsoft" within the node "Sys-opinion," very specific responses to chit-chat type utterances can be crafted (e.g., "I think Microsoft is great!"), since the system has a high level of confidence that the user is asking for the opinion of digital personal assistant 130 about Microsoft. In contrast, for the node "Sys-opinion," a more generic response to chit-chat type utterances can be crafted (e.g., "No comment" or "I'd rather not say"), since the system has a high level of confidence that the user is asking for the opinion of digital personal assistant 130, but cannot determine the subject matter about which an opinion is being sought.”

the text-based message comprises a generic text-based message that is not tied to a particular polarity (see ¶ [0074] citation as in limitation above. [i.e., (e.g., "I think Microsoft is great!") versus (e.g., "No comment" or "I'd rather not say")]).

Fang et al., L'Huillier et al., and Reddy et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in sentiment/emotion analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fang et al. in combination with L'Huillier et al. to incorporate the teachings of Reddy et al. of wherein the polarity is unrecognized when the input phrase does not match one or more phrases of the plurality of phrases and wherein when the polarity of the input phrase is unrecognized, the text-based message comprises a generic text-based message that is not tied to a particular polarity, which provides the benefit of improving its performance over time through continued interaction with the user ([0056] of Reddy et al.).

Regarding claim 9, Fang et al. in combination with L’Huillier et al. and Reddy et al. teach the limitations as in claim 2, above. Fang et al. further teaches: 9.
(New) A system comprising: one or more computing devices (see Col. 4, lines 22-43: “In this example, context-sensitive comment analyzer 200 includes an optional initial classifier 202, a context-sensitive comment data detector 204, a context type classifier 206, and a comment classifier comprising a positive model-based comment classifier 208 and a negative model-based comment classifier 210. Details of the components are described below. The components described herein can be implemented as software components executing on one or more computer processors, as hardware such as programmable logic devices and/or Application Specific Integrated Circuits designed to perform certain functions, or a combination thereof. In some embodiments, the components can be embodied by a form of software products which can be stored in a nonvolatile storage medium (such as optical disk, flash storage device, mobile hard disk, etc.), including a number of instructions for making a computer device (such as personal computers, servers, network equipment, etc.) implement the methods described in the embodiments of the present application. The components may be implemented on a single device or distributed across multiple devices. The functions of the components may be merged into one another or further split into multiple sub-components.”); and memory storing instructions (see Col. 1, lines 53-67: “The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor.”), the instructions being executable by the one or more computing devices, wherein the one or more computing devices (see Col. 1, lines 53-67 and Col. 4, lines 22-43 citations as in limitation(s) above.) are configured to: [the limitations as in claim 2, above.] 
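The initial-classification step quoted from Fang (Col. 5, above) reduces to keyword cueing with a "mixed" override when both positive and negative cues appear. A rough sketch follows; the cue lists and the no-cue default bucket are illustrative assumptions, not Fang's actual implementation:

```python
# Illustrative sketch of Fang's initial classifier: keyword cues assign
# positive/negative/neutral, and a sentence hitting both positive and
# negative cues is labeled "mixed" (per Fang, not processed further).
POSITIVE_CUES = {"excellent"}
NEGATIVE_CUES = {"poor"}
NEUTRAL_CUES = {"average"}

def initial_classify(sentence: str) -> str:
    """Assign an initial sentiment label to one piece of comment data."""
    words = set(sentence.lower().replace(".", " ").replace(",", " ").split())
    has_pos = bool(words & POSITIVE_CUES)
    has_neg = bool(words & NEGATIVE_CUES)
    if has_pos and has_neg:
        return "mixed"
    if has_pos:
        return "positive"
    if has_neg:
        return "negative"
    # Default bucket for text with no cues; Fang's handling of this case
    # is not specified in the quoted passages, so "neutral" is an assumption.
    return "neutral"
```

On Fang's own example, `initial_classify("The food was excellent but the service was poor.")` yields "mixed".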
Regarding claim 16, Fang et al. in combination with L’Huillier et al. and Reddy et al. teach the limitations as in claim 2, above. Fang et al. further teaches: 16. (New) A non-transitory computing-device readable storage medium on which computing-device readable instructions of a program are stored that (see Col. 1, lines 53-67 and Col. 2, lines 53-67 citations as in claim 9 above: “The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor.”), when executed by: one or more computing devices (see Col. 1, lines 53-67 and Col. 4, lines 22-43 citations as in claim 9 above.), cause the one or more computing devices to perform a method (see Col. 1, lines 53-67 and Col. 4, lines 22-43 citations as in claim 9 above.) comprising: [the limitations as in claim 2, above.]

Regarding claims 4, 11, and 18, Fang et al. in combination with L’Huillier et al. and Reddy et al. teaches all of the limitations as in claims 2, 9, and 16, above. Fang et al. further teaches: 4, 11, and 18. (New) The method/system/non-transitory computing-device readable storage medium of claims 2, 9, and 16, wherein training the computer system (see Col. 4, lines 22-43, Col. 5, lines 29-38, and Col. 8, line 57 - Col. 9, line 22 citations as in claims 3, 10, and 17, above.) comprises: L’Huillier et al. further teaches: classifying each of a plurality of comments as being positive or negative based on scores associated with each of the plurality of comments (see Col. 11, lines 50-54, Col. 11, line 61- Col. 12, line 5, Col. 15, lines 12-17, Col. 16, lines 24-34, Col. 17, lines 11-27: “In one embodiment, a trained machine learning system may be used.
In another embodiment, a set of rules based on parameterized heuristics which depend on thresholds may be used. The parameters may be trained by controlling performance metrics within a training/testing dataset. […] A consumer review for a commercial entity/object may include one or more textual units (e.g., sentences, phrases), each textual unit including a sentiment regarding the commercial entity/object or regarding an attribute descriptor of the commercial entity/object. The sentiment scoring engine 214 may generate a sentiment score for each textual unit in a consumer review. For example, the sentiment scoring engine 214 may take as input the entire text of a consumer review, and output a list (or array or other suitable data structure) of the sentiments scores for the different textual units of the consumer review. […] In step 412, the apparatus 200 may generate a word polarity score of the word using the sentiment scoring engine 214. The word polarity score is a numerical score that indicates whether a sentiment expressed by a word, phrase, sentence, paragraph or set of paragraphs is positive, negative or neutral or unknown. […] …In another embodiment, a trained machine learning system may be used to generate the sentiment score based on the word polarity score, the word negation score and the word intensity score. […] In this manner, as illustrated in FIG. 4, the apparatus 200 may generate sentiment scores for each textual unit of a consumer review. […] FIG. 5 is a flowchart illustrating an exemplary computer-implemented method 500 of generating a sentiment score for a consumer review based on sentiment scores for the different textual units in the consumer review.”); generating the plurality of phrases from the plurality of comments (see Col. 
14, lines 39-57: “In step 402, the apparatus 200 may receive a consumer review (e.g., from a non-transitory computer-readable storage device, from an external consumer computing device, via a network device, or the like). In step 404, the apparatus 200 may create a data structure (e.g., an array named spList) for storing sentiment scores for the different textual units in the consumer review. In step 406, the apparatus 200 may programmatically parse the consumer review to split the consumer review into its different constituent textual units, for example, by detecting and splitting a series of words along period punctuations. In step 408, for each textual unit, the apparatus 200 may create a data structure (e.g., an array named wsList) for storing the sentiment scores for the different words in each textual unit, and the apparatus 200 may save the data structure on a non-transitory computer-readable storage device. In step 410, the apparatus 200 may programmatically parse each textual unit and split each textual unit into its different constituent words, for example by detecting and splitting the textual unit along spaces.”); and assigning each of the plurality of phrases the polarity based at least in part on the scores associated with each of the plurality of comments (see Col. 17, lines 39-64: “In step 502, the apparatus 200 may receive or access sentiment scores for the different textual units in a consumer review from a non-transitory computer-readable storage device, from an external consumer computing device via a network device, or the like. In step 504, the sentiment scoring engine 214 of the apparatus 200 may determine if the minimum of the sentiment scores exceeds (or is equal to) a predetermined positive minimum threshold and if the average of those scores exceeds (or is equal to) a predetermined positive average threshold.
If so, then in step 506, the apparatus 200 may determine that the sentiment for the consumer review is “positive.” An exemplary positive minimum threshold may be about 0.3, but is not limited to this exemplary value. An exemplary positive average threshold may be about 1.5, but is not limited to this exemplary value. Otherwise, in step 508, the sentiment scoring engine 214 of the apparatus 200 may determine if the maximum of those scores is lower than (or is equal to) a predetermined negative maximum threshold and if the average of those scores is lower than (or is equal to) a predetermined negative average threshold. If so, then in step 510, the apparatus 200 may determine that the sentiment for the consumer review is “negative.” An exemplary negative maximum threshold may be about −0.1, but is not limited to this exemplary value. An exemplary negative average threshold may be about −2.0, but is not limited to this exemplary value.”).

Fang et al., L'Huillier et al., and Reddy et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in sentiment/emotion analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fang et al. in combination with L'Huillier et al. and Reddy et al. to further incorporate the teachings of L'Huillier et al. of classifying each of a plurality of comments as being positive or negative based on scores associated with each of the plurality of comments; generating the plurality of phrases from the plurality of comments; and assigning each of the plurality of phrases the polarity of the sentiment based at least in part on the scores associated with each of the plurality of comments, which provides the benefit of enabling a consumer to read individual reviews and ratings and to view the weights associated with the ratings (Col. 9, lines 6-25 of L'Huillier et al.).
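The review-level threshold logic quoted from L'Huillier (Col. 17, steps 504-510) can be sketched in a few lines. The thresholds below are the exemplary values quoted from the patent; the "neutral" fallback for reviews that meet neither test is an assumption, not quoted text:

```python
# Sketch of L'Huillier's review-level scoring: "positive" when the minimum
# textual-unit score and the average both clear the positive thresholds,
# "negative" when the maximum and average both fall below the negative
# thresholds. Thresholds are the exemplary values quoted at Col. 17.
POS_MIN, POS_AVG = 0.3, 1.5
NEG_MAX, NEG_AVG = -0.1, -2.0

def review_sentiment(unit_scores: list[float]) -> str:
    """Classify a whole review from its per-textual-unit sentiment scores."""
    avg = sum(unit_scores) / len(unit_scores)
    if min(unit_scores) >= POS_MIN and avg >= POS_AVG:
        return "positive"
    if max(unit_scores) <= NEG_MAX and avg <= NEG_AVG:
        return "negative"
    return "neutral"  # assumed fallback; the quoted flowchart stops at steps 506/510
```

Note how the design requires every textual unit to agree in sign before committing to a polarity: one sufficiently negative unit blocks a "positive" call even when the average is high.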
Regarding claim 22, Fang et al. in combination with L’Huillier et al. and Reddy et al. teaches the limitations as in claim 2, above. Fang et al. further teaches: 22. (New) The method of claim 2, wherein: the backend service model includes a machine learning model (see Col. 4, lines 22-48: “In this example, context-sensitive comment analyzer 200 includes an optional initial classifier 202, a context-sensitive comment data detector 204, a context type classifier 206, and a comment classifier comprising a positive model-based comment classifier 208 and a negative model-based comment classifier 210. […] (21) As will be described in greater detail below, in some embodiments, components such as context-sensitive comment data detector 204, context type classifier 206, and/or comment classifiers 208 and 210 are implemented based on machine learning techniques.” and Col. 8, line 57 - Col. 9, line 22: “In some embodiments, a context such as a question/posting/topic may not be manually classified ahead of time, and new context can be generated and used. Classifier 206 is implemented using a machine learning technique similar to what is described in FIGS. 3A-3B. A training set of manually labeled sample data (e.g., a set of sample survey questions manually classified using a context configuration tool described above) is used to train a learning machine model such as an SVM, a Bayesian classifier, a decision tree, etc.”); training the backend service model includes training the machine learning model to identify the polarity of the plurality of phrases (see Col. 4, lines 22-48 and Col. 8, line 57 - Col. 9, line 22 citations as in limitation above (…a context type classifier 206, and a comment classifier comprising a positive model-based comment classifier 208 and a negative model-based comment classifier 210…) and further: Col. 5, lines 29-38: “In some embodiments, a comment is preprocessed and divided into multiple pieces. 
Process 200 analyzes an individual piece of comment data. For example, in some embodiments, each sentence in a multi-sentence comment is a separate piece of data to be analyzed. Other ways of processing data can be used, such as grouping several individual sentences to form a piece of comment data, further segmenting a sentence into multiple pieces of sub-clauses and/or phrases, etc.”); and determining the first polarity includes determining the first polarity using the machine learning model (see Col. 4, lines 22-48, Col. 5, lines 29-38, and Col. 8, line 57 - Col. 9, line 22 citations as in limitations above (…a context type classifier 206, and a comment classifier comprising a positive model-based comment classifier 208 and a negative model-based comment classifier 210… […] …context type classifier 206, and/or comment classifiers 208 and 210 are implemented based on machine learning techniques).)

Regarding claim 23, Fang et al. in combination with L’Huillier et al. and Reddy et al. teaches the limitations as in claim 2, above. Fang et al. further teaches: 23. (New) The method of claim 2, wherein: training the backend service model includes training the backend service model to identify a second polarity of a first plurality of terms (see Col. 4, lines 22-48 and Col. 8, line 57 - Col. 9, line 22 citations as in claim 22 above (…a context type classifier 206, and a comment classifier comprising a positive model-based comment classifier 208 and a negative model-based comment classifier 210…); the input phrase includes a second plurality of terms (see Col. 4, lines 22-43: “In this example, context-sensitive comment analyzer 200 includes an optional initial classifier 202, a context-sensitive comment data detector 204, a context type classifier 206, and a comment classifier comprising a positive model-based comment classifier 208 and a negative model-based comment classifier 210. […]” Col. 
5, lines 29-38: “In some embodiments, a comment is preprocessed and divided into multiple pieces. Process 200 analyzes an individual piece of comment data. For example, in some embodiments, each sentence in a multi-sentence comment is a separate piece of data to be analyzed. Other ways of processing data can be used, such as grouping several individual sentences to form a piece of comment data, further segmenting a sentence into multiple pieces of sub-clauses and/or phrases, etc.”); and determining the first polarity includes comparing at least one term of the first plurality of terms to the second plurality of terms to determine that a second polarity of the at least one term is unrecognized when the at least one term does not match one or more terms of the first plurality of terms (see Fig. 1 (136: query understanding and response system) and ¶ [0010 and 0066]: “[0010] In accordance with another implementation, causing the response to the utterance to be generated includes matching the utterance to a particular utterance type within a hierarchical tree of utterance types, each utterance type in the hierarchical tree of utterance types having one or more responses associated therewith, and selecting the response to the utterance from among the response(s) associated with the particular utterance type. [0066] However, if query understanding and response system 136 fails to match the recognized or corrected utterance to any of the tasks within the predefined set, then query understanding and response system 136 may further analyze the words of the utterance to determine how such utterance should be handled thereby. For example, query understanding and response system 136 may determine that the utterance should be handled by conducting a Web search or by offering the user with an opportunity to conduct a Web search. 
In this case, the utterance may be handled by specialized logic for facilitating Web searching that is internal and/or external to query understanding and response system 136. Alternatively, query understanding and response system 136 may determine based on an analysis of the words of the utterance that the utterance comprises a chit-chat type utterance, which as noted above is an utterance intended to engage with a persona of digital personal assistant 130.”). Fang et al., L'Huillier et al., and Reddy et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in sentiment/emotion analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fang et al. in combination with L'Huillier et al. to incorporate the teachings of Reddy et al. of determining the first polarity includes comparing at least one term of the first plurality of terms to the second plurality of terms to determine that a second polarity of the at least one term is unrecognized when the at least one term does not match one or more terms of the first plurality of terms, which provides the benefit of improving its performance over time through continued interaction with the user ([0056] of Reddy et al.).

Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US 10037491 B1) further in view of L'Huillier et al. (US 9317566 B1) and Reddy et al. (US 20150199967 B1) as in claims 4, 11, and 18, above and further in view of Scholl et al. (US 10402431 B2). Regarding claims 5, 12, and 19, Fang et al. in combination with L’Huillier et al. and Reddy et al. teach all of the limitations as in claims 4, 11, and 18, above. Fang et al. further teaches: 5, 12, and 19. 
(New) The method/system/non-transitory computing-device readable storage medium of claims 4, 11, and 18, wherein the plurality of phrases includes a plurality of terms (see Col. 5, lines 29-38 and Col. 5, lines 43-67: “In some embodiments, a comment is preprocessed and divided into multiple pieces. Process 200 analyzes an individual piece of comment data. For example, in some embodiments, each sentence in a multi-sentence comment is a separate piece of data to be analyzed. Other ways of processing data can be used, such as grouping several individual sentences to form a piece of comment data, further segmenting a sentence into multiple pieces of sub-clauses and/or phrases, etc...”) and L’Huillier et al. further teaches: generating the plurality of phrases (see Col. 14, lines 39-57: “In step 402, the apparatus 200 may receive a consumer review (e.g., from a non-transitory computer-readable storage device, from an external consumer computing device, via a network device, or the like). In step 404, the apparatus 200 may create a data structure (e.g., an array named spList) for storing sentiment scores for the different textual units in the consumer review. In step 406, the apparatus 200 may programmatically parse the consumer review to split the consumer review into its different constituent textual units, for example, by detecting and splitting a series of words along period punctuations. In step 408, for each textual unit, the apparatus 200 may create a data structure (e.g., an array named wsList) for storing the sentiment scores for the different words in each textual unit, and the apparatus 200 may save the data structure on a non-transitory computer-readable storage device. In step 410, the apparatus 200 may programmatically parse each textual unit and split each textual unit into its different constituent words, for example by detecting and splitting the textual unit along spaces.”) comprises: Fang et al. in combination with L'Huillier et al. and Reddy et al. 
are considered to be analogous to the claimed invention because they are in the same field of endeavor in sentiment analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fang et al. in combination with L'Huillier et al., and Reddy et al. to further incorporate the teachings of L'Huillier et al. of generating the plurality of phrases which provides the benefit of enabling a consumer to read individual reviews and ratings, and may be allowed to view the weights associated with the ratings (Col. 9, lines 6-25 of L'Huillier et al.).

However, Fang et al. in combination with L’Huillier et al. and Reddy et al. do not explicitly teach, but Scholl et al. does teach: determining a first frequency of each term in the plurality of terms in each comment of the plurality of comments (see Col. 5, line 18 – Col. 5, line 30: “In block 303, the component calculates the average frequency of the selected word within the documents (e.g., web pages) of the search results. The “frequency” of a word is the number of occurrences of that word within the document divided by the total number of occurrences of words within that document. For example, if a word occurs 10 times within a document that contains 200 words, then its frequency is 0.05 (i.e., 10/200), which means that it accounts for 5% of the words in the document. The “average frequency” of a word within the search results is the average of the frequencies of that word for each document. For example, if the frequencies for a word are 0.05, 0.04, 0.02, and 0.01 in a search result that has four documents, then the average frequency for that word is 0.03 (e.g., (0.05+0.04+0.02+0.01)/4). The average frequency is represented by the following equation: eq (1)”); determining a second frequency of each comment that includes each term (see Col. 5, line 18 – Col. 
5, line 30 citation as in limitation above and further: “… In block 304, the component retrieves the “normal frequency” for the word. The normal frequency represents the average frequency of the word in a very large corpus of documents, such as all web pages.”); and assigning a weight to each term based on the first frequency and the second frequency (see Col. 5, line 18 – Col. 5, line 30 citation as in limitation above and further: “… In block 305, the component calculates a “frequency score” for the selected word. If the average frequency of the selected word is much higher than the normal frequency of the selected word, then the word may be highly related to the item. The frequency score provides a scoring of the average frequency relative to the normal frequency. The frequency score may be represented by the following equation: eq (2) […] In block 307, the component calculates a “contain score” that indicates the fraction of the documents of the search results that contain the selected word. The contain score may be represented by the following equation: eq (3) where S.sub.f is the frequency score for the word, {tilde over (f)} is the normal frequency of the word, and atan is the arc tangent function. One skilled in the art will appreciate that this equation is just one of many equations that can be used to generate the frequency score.”). Fang et al., L'Huillier et al., Reddy et al., and Scholl et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in user query analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fang et al. in combination with L'Huillier et al. and Reddy et al. to incorporate the teachings of Scholl et al. 
of determining a first frequency of each term in the plurality of terms in each comment of the plurality of comments; determining a second frequency of each comment that includes each term; and assigning a weight to each term based on the first frequency and the second frequency, which provides the benefit of identifying targeted results/advertisements (Col. 4, lines 21-23 of Scholl et al.).

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US 10037491 B1) further in view of L'Huillier et al. (US 9317566 B1) and Reddy et al. (US 20150199967 B1) as in claims 4 and 11, above and further in view of Dillard et al. (US 8554701 B1). Regarding claims 6 and 13, Fang et al. in combination with L’Huillier et al. and Reddy et al. teach all of the limitations as in claims 4 and 11, above. However, Fang et al. in combination with L’Huillier et al. and Reddy et al. do not explicitly teach, but Dillard et al. does teach: 6 and 13. (New) The method/system of claims 4 and 11, wherein training the computer system further comprises removing at least one comment from the plurality of comments that are classified as being neutral (see Col. 4, line 66 - Col. 5, line 19 and Col. 8, lines 42-61: “The quote extraction module 134 may also determine the sentiment expressed by the extracted quotes or other collection of sentences or phrases contained in text-based comments in the customer review data 132. […] According to one embodiment, the classification of sentiment in the extracted quotes or sentences may be performed using a machine learning technique trained on sentences manually labeled for sentiment, as will be described below in regard to FIG. 5. The manually labeled sentences may be contained in training data 138 stored in the datastore 130 or other storage mechanism in the merchant system 120. 
In addition, the quote extraction module 134 may store other information required for the sentiment classification of the sentences in the training data 138, as will be described below. […] From operation 404, the routine 400 proceeds to operation 406, where the quote extraction module 134 classifies each of the individual sentences in the collection of sentences with a sentiment. In addition, once a sentiment for each sentence has been determined, the quote extraction module 134 removes those sentences having neutral sentiment from the collection of sentences before proceeding to identify the topics contained in the collection of sentences. Since a neutral sentiment sentence does not express a like or dislike of an item or aspect of the item, these sentences would likely not serve as salient quotes regarding a topic for a particular item that would provide a potential purchaser with a sense of how other customers feel regarding the topic. According to one embodiment, the quote extraction module 134 may use the method described below in regard to FIG. 5 to classify the sentences or phrases for sentiment and to discard those sentences having neutral sentiment from the collection of sentences. Alternatively, the quote extraction module 134 may utilize other methods known in the art for determining the sentiment for each sentence.”). Fang et al., L'Huillier et al., Reddy et al., and Dillard et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in sentiment/emotion analysis. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fang et al. in combination with L'Huillier et al. and Reddy et al. to incorporate the teachings of Dillard et al. 
of removing at least one comment from the plurality of comments that are classified as being neutral, which provides the benefit of greatly improving the accuracy of the sentiment classification process (Col. 14, lines 10-27 of Dillard et al.).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Keisha Y Castillo-Torres whose telephone number is (571)272-3975. The examiner can normally be reached Monday - Friday, 9:00 am - 4:00 pm (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir, can be reached on (571)272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Keisha Y. Castillo-Torres/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659
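The term-frequency quantities quoted from Scholl et al. in the rejection of claims 5, 12, and 19 above can be sketched as follows. Only the quantities the citation defines in prose are computed (per-document frequency, average frequency across the search results, and the "contain score" fraction of documents containing a word); Scholl's frequency-score equations (eqs. (1)-(3)) are not reproduced in the citation, so they are omitted here, and the function names and word-list representation of documents are assumptions for illustration.

```python
# Sketch of the term-frequency computations described in the Scholl et al.
# citation (Col. 5). Documents are represented as lists of words.

def word_frequency(word, doc_words):
    """Frequency of `word` in one document: occurrences of the word
    divided by the total number of words in the document."""
    return doc_words.count(word) / len(doc_words)

def average_frequency(word, docs):
    """Mean of the per-document frequencies across the search results."""
    return sum(word_frequency(word, d) for d in docs) / len(docs)

def contain_score(word, docs):
    """Fraction of the documents that contain the word at least once."""
    return sum(1 for d in docs if word in d) / len(docs)
```

Using the citation's own example, a word occurring 10 times in a 200-word document has frequency 0.05, and per-document frequencies of 0.05, 0.04, 0.02, and 0.01 average to 0.03.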

Prosecution Timeline

Feb 26, 2024
Application Filed
Sep 18, 2024
Non-Final Rejection — §103, §DP
Dec 10, 2024
Examiner Interview Summary
Dec 10, 2024
Applicant Interview (Telephonic)
Dec 23, 2024
Response Filed
Feb 25, 2025
Final Rejection — §103, §DP
May 14, 2025
Applicant Interview (Telephonic)
May 14, 2025
Examiner Interview Summary
May 16, 2025
Request for Continued Examination
May 19, 2025
Response after Non-Final Action
Aug 04, 2025
Non-Final Rejection — §103, §DP
Dec 03, 2025
Applicant Interview (Telephonic)
Dec 03, 2025
Examiner Interview Summary
Dec 08, 2025
Response Filed
Jan 24, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573402
GENERATING AND/OR UTILIZING UNINTENTIONAL MEMORIZATION MEASURE(S) FOR AUTOMATIC SPEECH RECOGNITION MODEL(S)
2y 5m to grant Granted Mar 10, 2026
Patent 12536989
Language-agnostic Multilingual Modeling Using Effective Script Normalization
2y 5m to grant Granted Jan 27, 2026
Patent 12531050
VOICE DATA CREATION DEVICE
2y 5m to grant Granted Jan 20, 2026
Patent 12499332
TRANSLATING TEXT USING GENERATED VISUAL REPRESENTATIONS AND ARTIFICIAL INTELLIGENCE
2y 5m to grant Granted Dec 16, 2025
Patent 12488180
SYSTEMS AND METHODS FOR GENERATING DIALOG TREES
2y 5m to grant Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
74%
Grant Probability
99%
With Interview (+30.5%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 108 resolved cases by this examiner. Grant probability derived from career allow rate.
