DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, or 365(c) is acknowledged.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 06/11/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The Manual of Patent Examining Procedure (MPEP) provides detailed rules for determining subject matter eligibility for claims in §2106. Those rules provide a basis for the analysis and finding of ineligibility that follows. MPEP §2106(III) states that examiners should determine whether a claim satisfies the criteria for subject matter eligibility by evaluating the claim in accordance with the flowchart in this section.
Claims 1-10 are rejected under 35 U.S.C. §101. The claimed invention is directed to unpatentable subject matter because the claimed invention recites a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 1-6 are directed to a device, claims 7-8 are directed to a method, and claims 9-10 are directed to a computer recording medium. Although claims 1-10 are directed to one of the four statutory categories of invention (MPEP 2106.03), the claims recite a number of steps (“acquiring …”, “extracting …”, and “outputting …”). These limitations fall into a judicial exception (MPEP 2106.04(II), “laws of nature”, “natural phenomena” and “abstract ideas”). The Supreme Court has explained that the judicial exceptions reflect the Court’s view that abstract ideas, laws of nature, and natural phenomena are "the basic tools of scientific and technological work", and are thus excluded from patentability because "monopolization of those tools through the grant of a patent might tend to impede innovation more than it would tend to promote it." Alice Corp., 573 U.S. at 216, 110 USPQ2d at 1980. It should be noted that there are no bright lines between the types of exceptions, and that many of the concepts identified by the courts as exceptions can fall under several exceptions (MPEP 2106.04(I) and (II)).
In light of the disclosure (Spec. [0085], [0096], Fig. 12), the claimed subject matter relates to extracting a target word and relevant words. (See the example in Spec. [0016]: for a user-provided sentence “This hot spring had good hot spring quality”, “hot spring quality” is the claimed “target word” and the word “good” is the claimed “relevant word”; see Fig. 12.)
Although the limitations mention generic computer elements (“memory”, “processor”), the claimed invention could be interpreted as a customer service agent (a person) receiving a natural language request (an email or text message) from a customer. The agent determines a target word and relevant words in his or her mind by reading the user’s request. For example, the method defined by independent claim 7 could be reasonably interpreted as:
acquiring a first natural sentence input by a user (A customer service agent receives a text message: “I like this mobile phone, but the price is too high” from a customer);
extracting at least a target word (e.g., the word “mobile phone”) of a relevant word (“price is too high”) and the target word from the acquired first natural sentence using a model (natural language knowledge in the agent’s mind) learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word indicating a request of a user included in the second natural sentence and the target word being a word serving as a target of the relevant word (The agent understands the user’s text message by extracting a target word “mobile phone” and relevant words “price”, “too high”; the agent uses his language knowledge learned in school and during training at a company, corresponding to the claimed “a model learnt from … second natural language sentence”); and
outputting the extracted target word (the customer service agent writes down a note that the customer wants to buy a cheaper “mobile phone”).
From the above analysis, apart from the recitation of generic computer elements and the equivalent of “apply it”, the claimed invention could be performed in the human mind. If the above independent claim were patented, a customer service agent would infringe the patent when doing his or her daily job of reading and understanding a user’s request. Independent claim 1 is directed to a device. Independent claim 9 is directed to a computer recording medium. Although these claims are directed to different categories by including generic computer elements, they include features similar to those of method claim 7. Dependent claims 2-6, 8 and 10 further include limitations related to preprocessing data and visually displaying results. All of these could be performed in the human mind with a pen and a piece of paper.
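To illustrate the level of generality at which the claimed “acquiring”, “extracting”, and “outputting” steps are recited, the steps can be sketched in a few lines of ordinary logic. This sketch is the examiner's illustration only; the word lists below are hypothetical stand-ins for the claimed learnt model, not the applicant's disclosed implementation.

```python
# Hypothetical word lists standing in for the claimed "model learnt to
# output the relevant word and the target word" -- illustration only.
RELEVANT_WORDS = {"good", "bad", "high", "quiet"}          # request / sentiment cues
TARGET_WORDS = {"mobile phone", "price", "spec", "hot spring quality"}

def extract(first_natural_sentence: str):
    """Mimic the claimed extracting step with simple membership tests."""
    text = first_natural_sentence.lower()
    targets = [t for t in TARGET_WORDS if t in text]       # "extracting ... the target word"
    relevant = [w for w in RELEVANT_WORDS if w in text]    # "extracting ... a relevant word"
    return targets, relevant

# "acquiring a first natural sentence input by a user"
targets, relevant = extract("I like this mobile phone, but the price is too high")

# "outputting the extracted target word"
print(targets, relevant)
```

As the sketch shows, nothing in the claim language requires more than the kind of reading comprehension a person performs mentally; the computer serves only as a generic tool.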
Step 2A is a two-prong inquiry, in which examiners determine in Prong One whether a claim recites a judicial exception, and if so, then determine in Prong Two whether the recited judicial exception is integrated into a practical application of that exception. Together, these prongs represent the first part of the Alice/Mayo test, which determines whether a claim is directed to a judicial exception (see the flowchart in MPEP 2106.04(II)(A)). In Prong One of the two-prong inquiry, the above limitations recited in the claims are directed to at least one of the groupings of abstract ideas (MPEP 2106.04(a): “mathematical concepts”, “certain methods of organizing human activity”, “mental processes”). It should be noted that these groupings are not mutually exclusive, i.e., some claims recite limitations that fall within more than one grouping or sub-grouping (MPEP 2106.04(a)(2)).
The courts consider a mental process (thinking) that “can be performed in the human mind, or by a human using a pen and paper” to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea. See, e.g., Benson, 409 U.S. at 67, 65, 175 USPQ at 674-75, 674. If the claimed invention is described as a concept that is performed in the human mind and the applicant is merely claiming that concept performed 1) on a generic computer, 2) in a computer environment, or 3) by merely using a computer as a tool to perform the concept, the claim is considered to recite a mental process.
The recited limitations in independent claims could be performed in human mind or with a pen / a piece of paper. The dependent claims further recite preprocessing data and visualizing data. These claim elements, when considered alone and in combination, are considered to be abstract ideas because they are directed to a mental process.
The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures “can be carried out in existing computers long in use, no new machinery being necessary.” The claims therefore recited an abstract idea, despite the fact that the claimed steps were performed on a computer. 887 F.3d at 1385, 126 USPQ2d at 1504.
Since the claimed invention falls into a judicial exception according to the above analysis, a claim that is directed to a judicial exception must be evaluated to determine whether the claim recites additional elements that integrate the judicial exception into a practical application (MPEP 2106.04(II)(A)(2)). Prong Two asks whether the claim recites additional elements that integrate the judicial exception into a practical application. In Prong Two, examiners evaluate whether the claim as a whole integrates the exception into a practical application of that exception. The Court in Gottschalk v. Benson “held that simply implementing a mathematical principle on a physical machine, namely a computer, was not a patentable application of that principle.” Accordingly, after determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. For a claim reciting a judicial exception to be eligible, the additional elements (if any) in the claim must "transform the nature of the claim" into a patent-eligible application of the judicial exception, Alice Corp., 573 U.S. at 217, 110 USPQ2d at 1981, either at Prong Two or in Step 2B. If there are no additional elements in the claim, then it cannot be eligible.
MPEP §2106.05 describes the Step 2B test for determining whether a claim amounts to significantly more than the judicial exception. The second part of the Alice/Mayo test is often referred to as a search for an inventive concept. Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 217, 110 USPQ2d 1976, 1981 (2014). The Supreme Court has identified a number of considerations as relevant to the evaluation of whether the claimed additional elements amount to an inventive concept (see MPEP §2106.05(I)(A)). It is notable that mere physicality or tangibility of an additional element or elements is not a relevant consideration in Step 2B. As the Supreme Court explained in Alice Corp., mere physical or tangible implementation of an exception is not in itself an inventive concept and does not guarantee eligibility.
The Supreme Court has identified a number of considerations as relevant to the evaluation of whether the claimed additional elements amount to an inventive concept. Considering the limitations recited in the instant claims, the claims do not improve the functioning of a computer or any other technology or technical field. The claims also do not apply the judicial exception with, or by use of, a particular machine. The claims do not effect a transformation or reduction of a particular article to a different state or thing. The claims fail to include a specific limitation other than what is well-understood, routine, conventional activity in the field, or to add unconventional steps that confine the claim to a particular useful application. The recited “processor” / “memory” are well-understood, routine and conventional in the field. Therefore, the recited elements do not amount to significantly more than an abstract idea.
Please note that simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception is not enough; e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984. The courts have also found that “adding insignificant extra-solution activity to the judicial exception” or “generally linking the use of the judicial exception to a particular technological environment or field of use” is not enough to qualify as “significantly more”.
Reviewing the limitations recited in the claims, none of the limitations meets the significantly-more considerations. Therefore, the claims are directed to unpatentable subject matter (MPEP §2106, flowchart, Step 2B, NO branch).
Claims 9-10 are rejected under 35 U.S.C. 101 as not falling within one of the four statutory categories of invention.
Claims 9-10 recite "a computer readable recording medium", and the applicant's specification does not provide a definition of the claimed “recording medium”. Under a reasonable interpretation, the claimed “computer readable recording medium” covers forms of transitory propagating signals per se.
The United States Patent and Trademark Office (USPTO) is obliged to give claims their broadest reasonable interpretation consistent with the specification during proceedings before the USPTO. See In re Zletz, 893 F.2d 319 (Fed. Cir. 1989) (during patent examination the pending claims must be interpreted as broadly as their terms reasonably allow). The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. 101, Aug. 24, 2009; p. 2.
The USPTO recognizes that applicants may have claims directed to computer readable media that cover signals per se, which the USPTO must reject under 35 U.S.C. 101 as covering both non-statutory subject matter and statutory subject matter. In an effort to assist the patent community in overcoming a rejection or potential rejection under 35 U.S.C. 101 in this situation, the USPTO suggests the following approach. A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments to avoid a rejection under 35 U.S.C. 101 by adding the limitation "non-transitory" to the claim. Such an amendment would typically not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se. The limited situations in which such an amendment could raise issues of new matter occur, for example, when the specification does not support a non-transitory embodiment because a signal per se is the only viable embodiment such that the amended claim is impermissibly broadened beyond the supporting disclosure. See, e.g., Gentry Gallery, Inc. v. Berkline Corp., 134 F.3d 1473 (Fed. Cir. 1998).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 1-10 are provisionally rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-10 of co-pending U.S. Serial No. 18718154. For instance, claims 1 and 7 of the present application amount to claim 1 of U.S. Serial No. 18718154 and the dependent claims thereof. It would have been obvious to one of ordinary skill in the art to omit the user-request scope from the sentences extracted. In re Karlson, 136 USPQ 184 (1963): "Omission of an element and its function is an obvious expedient if the remaining elements perform the same functions as before". See the claim comparison below.
This is a provisional obviousness-type double patenting rejection because the conflicting claims have not in fact been patented.
Claims in the co-pending application (claims 1-10 immediately below), followed by the instant claims:
[Claim 1] (Currently amended) An extraction device comprising: at least one memory storing a processing instruction: and at least one processor configured to execute the processing instruction, the at least one processor acquiring a first natural sentence input by a user, extracting at least a target word of a relevant word and the target word from the first natural sentence acquired by the acquisition section using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word defining relevancy between words included in the second natural sentence and a target word being a word serving as a target of the relevant word, and outputting the target word extracted by the extraction section.
[Claim 2] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction extracts at least the target word of a pair of the target word and the relevant word corresponding to each relevancy using a plurality of models learnt for each relevancy defined by the relevant word.
[Claim 3] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction extracts at least the target word of the relevant word indicating a positive feeling and the target word from the first natural sentence acquired by the acquisition section using a positive model extracting a pair of the relevant word indicating a positive feeling and the target word.
[Claim 4] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction extracts at least the target word of the relevant word indicating a negative feeling and the target word from the first natural sentence acquired by the acquisition section using a negative model extracting a pair of the relevant word indicating a negative feeling and the target word.
[Claim 5] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction learns a model to extract and output the relevant word and the target word for a natural sentence using a result of labeling the relevant word and the target word for the second natural sentence after parsing, and extracts at least the target word of the relevant word and the target word using a model learnt by the learning section.
[Claim 6] (Currently amended) The extraction device according to claim 5, wherein the at least one processor configured to execute the processing instruction learns a model using the labeling result and feature amounts of words stored in advance.
[Claim 7] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction applies preprocessing for visualizing a factor of relevancy of a request from a user defined by the relevant word to the target word extracted by the extraction section, and outputs a result of the preprocessing performed by the preprocessing section.
[Claim 8] (Currently amended) The extraction device according to claim 7, wherein the at least one processor configured to execute the processing instruction applies clustering as the preprocessing to the target word extracted by the extraction section, and outputs a result of the clustering applied by the preprocessing section.
[Claim 9] (Original) An extraction method comprising: acquiring a first natural sentence input by a user; extracting at least a target word of a relevant word and the target word from the acquired first natural sentence using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word defining relevancy between words included in the second natural sentence and the target word being a word serving as a target of the relevant word; and outputting the extracted target word, the acquiring, the extracting, and the outputting being performed by an information processing device.
[Claim 10] (Original) A computer-readable recording medium storing a program for causing an information processing device to realize processing of: acquiring a first natural sentence input by a user; extracting at least a target word of a relevant word and the target word from the acquired first natural sentence using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word defining relevancy between words included in the second natural sentence and the target word being a word serving as a target of the relevant word; and outputting the extracted target word.
The instant claims:
[Claim 1] A request extraction device comprising: at least one memory storing a processing instruction: and at least one processor configured to execute the processing instruction, the at least one processor acquiring a first natural sentence input by a user; extracting at least a target word of a relevant word and the target word from the first natural sentence to be acquired using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word indicating a request of a user included in the second natural sentence and the target word being a word serving as a target of the relevant word; and outputting the extracted target word.
[Claim 2] The request extraction device according to claim 1 wherein the at least one processor configured to execute the processing instruction applies preprocessing for visualizing a factor of relevancy of a request from a user defined by the relevant word to the extracted target word, and outputs a result of the preprocessing.
[Claim 3] The request extraction device according to claim 2, wherein the at least one processor configured to execute the processing instruction applies clustering as the preprocessing to the extracted target word, and outputs a result of the clustering.
[Claim 4] The request extraction device according to claim 2, wherein the at least one processor configured to execute the processing instruction totalizes and graphs appearance frequencies of the extracted target word extracted as the preprocessing for the extracted target word; and outputs a result of the graphing.
[Claim 5] The request extraction device according to claim 1 wherein the at least one processor configured to execute the processing instruction learns a model to extract and output the relevant word being the word indicating a request of a user and the target word serving as the target of the relevant word for a natural sentence using a result of labeling the relevant word and the target word for the second natural sentence after parsing, and extracts at least the target word of the relevant word and the target word using a learnt model.
[Claim 6] The request extraction device according to claim 5, wherein the at least one processor configured to execute the processing instruction learns a model using the labeling result and feature amounts of words stored in advance.
[Claim 7] A request extraction method including: acquiring a first natural sentence input by a user; extracting at least a target word of a relevant word and the target word from the acquired first natural sentence using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word indicating a request of a user included in the second natural sentence and the target word being a word serving as a target of the relevant word; and outputting the extracted target word, the acquiring, the extracting, and the outputting being performed by an information processing device.
[Claim 8] The request extraction method according to claim 7 comprising: applying preprocessing for visualizing a factor of relevancy of a request from a user defined by the relevant word to the extracted target word; and outputting a result of the preprocessing.
[Claim 9] A computer-readable recording medium storing a program for causing an information processing device to realize processing of: acquiring a first natural sentence input by a user; extracting at least a target word of a relevant word and the target word from the acquired first natural sentence using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word indicating a request of a user included in the second natural sentence and the target word being a word serving as the target of the relevant word; and outputting the extracted target word.
[Claim 10] The computer-readable recording medium according to claim 9 storing a program of: applying preprocessing for visualizing a factor of relevancy of a request from a user defined by the relevant word to the extracted target word, and outputting the result of the preprocessing.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Examiner’s Remarks
The claimed subject matter of the instant application relates to analyzing a sentence from a user’s message by extracting target words and relevant words (Spec. [0017], [0043], [0077], and Fig. 12; in the sentence “Even though it is an old computer, it has good spec”, the word “spec” is extracted as a target word and the word “good” is extracted as a relevant word). The specification also shows an example in which a user wishes to get a lower price (Spec. [0027], [0029], Fig. 12, “spec is satisfied but price is high, lower price little more”). The examiner interprets the claimed “first natural language input” / “second natural language input” as user-provided natural language inputs, which correspond to the claimed “request of a user”.
The instant claims are extremely broad. After performing an extensive search, the examiner discovered numerous prior art references, all of which meet the broadly recited limitations. In the following section, the examiner provides TWO anticipation rejections under 35 U.S.C. §102 to demonstrate the breadth of the instant claims.
Claims 1-10 are rejected under 35 U.S.C. §102 (a)(1) as being anticipated by Zhu et al. (US PG Pub. 2021/0150594, referred to as Zhu).
Zhu discloses analyzing various texts from customers, including product reviews, pre-sale inquiries or complaints, to determine users’ sentiments in order to perform product research / development (Zhu, [0030-0034], Fig. 2, #204, #206, #208). Zhu further discloses applying clustering algorithms (Zhu, [0026], [0034], [0052], Fig. 5C, #508, Fig. 5D, #510). Zhu further discloses providing various visualizations for the sentiments (Zhu, [0005], [0008], [0065-0066], Fig. 5A / 5B).
Regarding claims 1, 7 and 9, Zhu discloses a request extraction device, a method and a computer readable recording medium (Zhu, [0009], Fig. 1, a computer implemented device, method and medium for analyzing sentiment in users’ reviews or inquiries), comprising
at least one memory storing a processing instruction; and at least one processor configured to execute the processing instruction, the at least one processor acquiring a first natural sentence input by a user (Zhu, [0030], [0037-0039], Fig. 2, #204, #205, receiving texts such as users’ reviews or pre-sale inquiries, and analyzing the text to determine positive or negative sentiment; [0045], determining the subject, verb and noun in a sentence);
extracting at least a target word of a relevant word and the target word from the first natural sentence (Zhu, extracting the topic of the review, e.g., a topic word “dishwasher” is a target word, Fig. 2, #220; [0034-0039], extracting sentiment words such as “new”, “old”, “good”, “bad”, which are relevant words) to be acquired using a model learnt to output the relevant word and the target word with a second natural sentence as an input (Zhu, [0039], training a natural language processing model to determine sentiment; [0045], training various neural network models), the relevant word being a word indicating a request of a user included in the second natural sentence and the target word being a word serving as a target of the relevant word (Zhu, [0037-0041], Fig. 3, #302-#310, using various data sources that have positive sentiments and negative sentiments); and
outputting the extracted target word (Zhu, Fig. 5A – 5I, providing visualized results of the sentiment analysis, e.g., showing a target word “dishwasher” and a related sentiment word “quiet”).
Regarding claims 2, 8 and 10, Zhu further discloses applying preprocessing for visualizing a factor of relevancy of a request from a user defined by the relevant word to the extracted target word (Zhu, [0054], Fig. 5A, #504, displaying word usage frequency); and
outputting a result of the preprocessing (Zhu, Fig. 5A and 5B).
Regarding claim 3, Zhu further discloses:
applying clustering as the preprocessing to the extracted target word (Zhu, [0026], [0034], [0052-0053], Fig. 5A, #508, #509; also see Fig. 5G), and
outputs a result of the clustering (Zhu, Fig. 5A, Fig. 5G).
Regarding claim 4, Zhu further discloses:
totalizes and graphs appearance frequencies of the extracted target word extracted as the preprocessing for the extracted target word (Zhu, [0051], [0060], [0066], Fig. 5A / 5B); and
outputs a result of the graphing (Zhu, Fig. 5A / 5B).
Regarding claim 5, Zhu further discloses:
learns a model to extract and output the relevant word being the word indicating a request of a user and the target word serving as the target of the relevant word for a natural sentence using a result of labeling the relevant word and the target word for the second natural sentence after parsing (Zhu, [0030-0031], training a natural language processing (NLP) model to determine topics of users’ reviews / inquiries, and sentiments in the users’ reviews / inquiries; [0045], training various neural network models to extract subject, verb and object, as well as sentiments), and
extracts at least the target word of the relevant word and the target word using a learnt model (Zhu, [0037-0045], using NLP / BERT and LSTM models to extract topic words and sentiment words; note that topic words, e.g., “dishwasher”, correspond to “the target word”, and sentiment words, e.g., “good”, “bad”, “quiet”, correspond to “the relevant word”).
Regarding claim 6, Zhu further discloses: learns a model using the labeling result and feature amounts of words stored in advance (Zhu, [0024], [0039], training an NLP model using complaint data from a call center).
Claims 1, 7 and 9 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Ortega et al. (US PG Pub. 2021/0357591, referred to as Ortega).
Ortega discloses analyzing texts from users’ reviews, comments, or frequently asked questions to determine sentiments in the texts. Ortega further discloses applying a clustering algorithm to different topics (Ortega, [0048-0050], using a K-clustering algorithm, where a first cluster is for product and a second cluster is for service). Ortega further discloses presenting sentiment analysis results in visual / graphical formats (Ortega, [0002], [0025], Fig. 4, #400, Fig. 9-11, presenting sentiment analysis results in various graphical formats).
Regarding claims 1, 7 and 9, Ortega discloses a request extraction device, a method, and a computer-readable recording medium (Ortega, [0008], [0035], [0085], computer-implemented sentiment analysis applied to text including users’ reviews, comments, or frequently asked questions, [0082]), comprising
at least one memory storing a processing instruction; and at least one processor configured to execute the processing instruction, the at least one processor acquiring a first natural sentence input by a user (Ortega, [0006-0007], [0088]);
extracting at least a target word of a relevant word and the target word from the first natural sentence (Ortega, [0029], [0038], extracting topics discussed and sentiment words) to be acquired using model learnt to output the relevant word and the target word with a second natural sentence as an input (Ortega, [0035], [0044], training machine learning models to identify sentiments in user’s reviews, comments or questions), the relevant word being a word indicating a request of a user included in the second natural sentence and the target word being a word serving as a target of the relevant word (Ortega, [0035-0036], [0056-0057], [0061], training machine learning models using positive / negative reviews / comments, Fig. 7); and
outputting the extracted target word (Ortega, [0008-0009], [0038-0039], [0058-0059], Fig. 6, #610, outputting terms in a graphical format; see Fig. 3).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The examiner has identified several additional prior art references related to one or more concepts disclosed in the instant application. These references are included in the attached PTO-892 form for completeness of the record.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jialong He, whose telephone number is (571) 270-5359. The examiner can normally be reached Monday – Friday, 8:00 AM – 4:30 PM EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Desir can be reached on (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JIALONG HE/Primary Examiner, Art Unit 2659