Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Objections
Claims 1, 3, 4, 5, 7, and 8 are objected to because of the following informalities:
the acquisition section
the extracting section
the learning section
the preprocessing section
It is believed the claimed terms in question were inadvertently left in the claims during amendment, as they refer back to limitations recited in a grammatical tense and usage that differ from the claims as amended.
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 1-10 are provisionally rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-10 of copending U.S. Serial No. 18718511. For instance, claims 1 and 7 of the present invention amount to claim 1 of U.S. Serial No. 18718511 and the dependent claims thereof. It would have been obvious to one of ordinary skill in the art to omit the scope of user requests as part of the sentences extracted. See In re Karlson, 136 USPQ 184 (CCPA 1963): "Omission of an element and its function is an obvious expedient if the remaining elements perform the same functions as before."
This is a provisional obviousness-type double patenting rejection because the conflicting claims have not in fact been patented.
Current claims:
[Claim 1] (Currently amended) An extraction device comprising: at least one memory storing a processing instruction; and at least one processor configured to execute the processing instruction, the at least one processor acquiring a first natural sentence input by a user, extracting at least a target word of a relevant word and the target word from the first natural sentence acquired by the acquisition section using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word defining relevancy between words included in the second natural sentence and a target word being a word serving as a target of the relevant word, and outputting the target word extracted by the extraction section.
[Claim 2] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction extracts at least the target word of a pair of the target word and the relevant word corresponding to each relevancy using a plurality of models learnt for each relevancy defined by the relevant word.
[Claim 3] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction extracts at least the target word of the relevant word indicating a positive feeling and the target word from the first natural sentence acquired by the acquisition section using a positive model extracting a pair of the relevant word indicating a positive feeling and the target word.
[Claim 4] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction extracts at least the target word of the relevant word indicating a negative feeling and the target word from the first natural sentence acquired by the acquisition section using a negative model extracting a pair of the relevant word indicating a negative feeling and the target word.
[Claim 5] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction learns a model to extract and output the relevant word and the target word for a natural sentence using a result of labeling the relevant word and the target word for the second natural sentence after parsing, and extracts at least the target word of the relevant word and the target word using a model learnt by the learning section.
[Claim 6] (Currently amended) The extraction device according to claim 5, wherein the at least one processor configured to execute the processing instruction learns a model using the labeling result and feature amounts of words stored in advance.
[Claim 7] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction applies preprocessing for visualizing a factor of relevancy of a request from a user defined by the relevant word to the target word extracted by the extraction section, and outputs a result of the preprocessing performed by the preprocessing section.
[Claim 8] (Currently amended) The extraction device according to claim 7, wherein the at least one processor configured to execute the processing instruction applies clustering as the preprocessing to the target word extracted by the extraction section, and outputs a result of the clustering applied by the preprocessing section.
[Claim 9] (Original) An extraction method comprising: acquiring a first natural sentence input by a user; extracting at least a target word of a relevant word and the target word from the acquired first natural sentence using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word defining relevancy between words included in the second natural sentence and the target word being a word serving as a target of the relevant word; and outputting the extracted target word, the acquiring, the extracting, and the outputting being performed by an information processing device.
[Claim 10] (Original) A computer-readable recording medium storing a program for causing an information processing device to realize processing of: acquiring a first natural sentence input by a user; extracting at least a target word of a relevant word and the target word from the acquired first natural sentence using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word defining relevancy between words included in the second natural sentence and the target word being a word serving as a target of the relevant word; and outputting the extracted target word.
Conflicting claims (copending U.S. Serial No. 18718511):
[Claim 1] A request extraction device comprising: at least one memory storing a processing instruction; and at least one processor configured to execute the processing instruction, the at least one processor acquiring a first natural sentence input by a user; extracting at least a target word of a relevant word and the target word from the first natural sentence to be acquired using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word indicating a request of a user included in the second natural sentence and the target word being a word serving as a target of the relevant word; and outputting the extracted target word.
[Claim 2] The request extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction applies preprocessing for visualizing a factor of relevancy of a request from a user defined by the relevant word to the extracted target word, and outputs a result of the preprocessing.
[Claim 3] The request extraction device according to claim 2, wherein the at least one processor configured to execute the processing instruction applies clustering as the preprocessing to the extracted target word, and outputs a result of the clustering.
[Claim 4] The request extraction device according to claim 2, wherein the at least one processor configured to execute the processing instruction totalizes and graphs appearance frequencies of the extracted target word extracted as the preprocessing for the extracted target word; and outputs a result of the graphing.
[Claim 5] The request extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction learns a model to extract and output the relevant word being the word indicating a request of a user and the target word serving as the target of the relevant word for a natural sentence using a result of labeling the relevant word and the target word for the second natural sentence after parsing, and extracts at least the target word of the relevant word and the target word using a learnt model.
[Claim 6] The request extraction device according to claim 5, wherein the at least one processor configured to execute the processing instruction learns a model using the labeling result and feature amounts of words stored in advance.
[Claim 7] A request extraction method including: acquiring a first natural sentence input by a user; extracting at least a target word of a relevant word and the target word from the acquired first natural sentence using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word indicating a request of a user included in the second natural sentence and the target word being a word serving as a target of the relevant word; and outputting the extracted target word, the acquiring, the extracting, and the outputting being performed by an information processing device.
[Claim 8] The request extraction method according to claim 7 comprising: applying preprocessing for visualizing a factor of relevancy of a request from a user defined by the relevant word to the extracted target word; and outputting a result of the preprocessing.
[Claim 9] A computer-readable recording medium storing a program for causing an information processing device to realize processing of: acquiring a first natural sentence input by a user; extracting at least a target word of a relevant word and the target word from the acquired first natural sentence using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word indicating a request of a user included in the second natural sentence and the target word being a word serving as the target of the relevant word; and outputting the extracted target word.
[Claim 10] The computer-readable recording medium according to claim 9 storing a program of: applying preprocessing for visualizing a factor of relevancy of a request from a user defined by the relevant word to the extracted target word, and outputting the result of the preprocessing.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 10 is rejected under 35 U.S.C. 101 because:
The claimed invention is directed to non-statutory subject matter.
As per the claims, the language "computer-readable recording medium" does not place the claimed subject matter into statutory form. The specification fails to indicate any specific type of medium such as ROM, RAM, a hard disk, or non-transitory, non-carrier, or non-propagating forms of storage, and therefore the claimed subject matter necessarily includes non-statutory embodiments, for instance transitory media such as propagating signals. Examiner suggests amending the claims to recite "non-transitory." Note that it is not sufficient to add "tangible" or "physical"; see, for instance, In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007).
Regarding judicial exceptions:
Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (such as a natural phenomenon, an abstract idea, or a law of nature) without significantly more and without a practical application, specifically by one or more of:
1) Not integrating a judicial exception into a practical application (see explanation below), and
2) Not reciting elements that would amount to significantly more than the judicial exception (see explanation below).
Accordingly, claims 1-10 are directed towards patent ineligible subject matter under 35 U.S.C. 101.
The independent claims:
Taking the current claim limitations of the present invention as a whole, the claims are directed to clustering similar sentiment in text.
Regarding the claim limitations of claim(s) 1, 9, and 10 as recited:
[Claim 1] (Currently amended) An extraction device comprising: at least one memory storing a processing instruction; and at least one processor configured to execute the processing instruction, the at least one processor acquiring a first natural sentence input by a user, extracting at least a target word of a relevant word and the target word from the first natural sentence acquired by the acquisition section using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word defining relevancy between words included in the second natural sentence and a target word being a word serving as a target of the relevant word, and outputting the target word extracted by the extraction section.
[Claim 9](Original) An extraction method comprising: acquiring a first natural sentence input by a user; extracting at least a target word of a relevant word and the target word from the acquired first natural sentence using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word defining relevancy between words included in the second natural sentence and the target word being a word serving as a target of the relevant word; and outputting the extracted target word, the acquiring, the extracting, and the outputting being performed by an information processing device.
[Claim 10] (Original) A computer-readable recording medium storing a program for causing an information processing device to realize processing of: acquiring a first natural sentence input by a user; extracting at least a target word of a relevant word and the target word from the acquired first natural sentence using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word defining relevancy between words included in the second natural sentence and the target word being a word serving as a target of the relevant word; and outputting the extracted target word.
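For illustrative purposes only, the common acquire/extract/output flow recited in claims 1, 9, and 10 can be sketched as follows. This sketch is not Applicant's implementation; all names are hypothetical, and the stub below merely stands in for the claimed "model learnt to output the relevant word and the target word."

```python
# Illustrative sketch only; not Applicant's implementation.
def toy_model(sentence):
    """Stand-in for the claimed learnt model. Keys on a few
    relevancy-defining words and takes the following word as the
    target, purely for illustration."""
    words = sentence.lower().rstrip(".!?").split()
    pairs = []
    for i, word in enumerate(words):
        if word in ("like", "want") and i + 1 < len(words):
            pairs.append((word, words[i + 1]))  # (relevant, target)
    return pairs

def extract_target_words(first_natural_sentence, model=toy_model):
    # Acquire the first natural sentence, extract relevant/target word
    # pairs with the model, and output at least the target words.
    return [target for _relevant, target in model(first_natural_sentence)]

print(extract_target_words("I like coffee"))  # ['coffee']
```

The claims, by contrast, recite a model learnt with second natural sentences as input; the keyword stub above serves only to make the acquire/extract/output sequence concrete.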
Step 1: IS THE CLAIM DIRECTED TO A PROCESS, MACHINE, MANUFACTURE OR COMPOSITION OF MATTER?
Yes
Step 2A.1: IS THE CLAIM DIRECTED TO A LAW OF NATURE, A NATURAL PHENOMENON (PRODUCT OF NATURE) OR AN ABSTRACT IDEA?
Yes
Step 2A.2: DOES THE CLAIM RECITE ADDITIONAL ELEMENTS THAT INTEGRATE THE JUDICIAL EXCEPTION INTO A PRACTICAL APPLICATION?
Regarding the independent claims: No. Analogous to Solutran, Inc. v. Elavon, Inc., 931 F.3d 1161, 2019 USPQ2d 281076 (Fed. Cir. 2019), the claims are directed to clustering similar sentiment or grammatically related sentences/words, and lack a clear improvement to the functioning of a computer or to another technology.
Further as demonstrated in Solutran, Inc. v. Elavon, Inc., 931 F.3d 1161, 2019 USPQ2d 281076 (Fed. Cir. 2019), the claims were to methods for electronically processing paper checks, all of which contained limitations setting forth receiving merchant transaction data from a merchant, crediting a merchant’s account, and receiving and scanning paper checks after the merchant’s account is credited. In part one of the Alice/Mayo test, the Federal Circuit determined that the claims were directed to the abstract idea of crediting the merchant’s account before the paper check is scanned. The court first determined that the recited limitations of “crediting a merchant’s account as early as possible while electronically processing a check” is a “long-standing commercial practice” like in Alice and Bilski. 931 F.3d at 1167, 2019 USPQ2d 281076, at *5 (Fed. Cir. 2019). The Federal Circuit then continued with its analysis under part one of the Alice/Mayo test finding that the claims are not directed to an improvement in the functioning of a computer or an improvement to another technology. In particular, the court determined that the claims “did not improve the technical capture of information from a check to create a digital file or the technical step of electronically crediting a bank account” nor did the claims “improve how a check is scanned.” Id.
Regarding the December 5, 2025 Memo issued in light of the September 26, 2025 Appeals Review Panel Decision in Ex parte Desjardins, Appeal 2024-000567 for Application 16/319,040: the Memo addresses deciding whether a recited abstract idea does or does not direct the entire claim to an abstract idea when the claim is considered as a whole.
The claim which demonstrated improvements to technology and/or function recites: "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task."
The decision recites that “We are persuaded that constitutes an improvement to how the machine learning model itself operates, and not, for example, the identified mathematical calculation.”
When considering the limitation decided upon, there are clear improvements to machine learning that are not rudimentary or a long-standing practice. For instance, adjusting for optimization and protection of performance, as claimed, is an improvement to a machine learning model's operations: not simply a general mathematical or generic recitation, but rather an improvement to function.
Specifically, Ex Parte Desjardins explained the following:
Enfish ranks among the Federal Circuit's leading cases on the eligibility of technological improvements. In particular, Enfish recognized that “[m]uch of the advancement made in computer technology consists of improvements to software that, by their very nature, may not be defined by particular physical features but rather by logical structures and processes.” 822 F.3d at 1339. Moreover, because “[s]oftware can make non-abstract improvements to computer technology, just as hardware improvements can,” the Federal Circuit held that the eligibility determinations should turn on whether “the claims are directed to an improvement to computer functionality versus being directed to an abstract idea.” Id. at 1336. (Desjardins, page 8).
Further, specifically:
“Paragraph 21 of the Specification, which the Appellant cites, identifies improvements in training the machine learning model itself. Of course, such an assertion in the Specification alone is insufficient to support a patent eligibility determination, absent a subsequent determination that the claim itself reflects the disclosed improvement. See MPEP § 2106.05(a) (citing Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316 (Fed. Cir. 2016)). Here, however, we are persuaded that the claims reflect such an improvement. For example, one improvement identified in the Specification is to "effectively learn new tasks in succession whilst protecting knowledge about previous tasks." Spec. ¶ 21. The Specification also recites that the claimed improvement allows artificial intelligence (AI) systems to "us[e] less of their storage capacity" and enables "reduced system complexity." Id. When evaluating the claim as a whole, we discern at least the following limitation of independent claim 1 that reflects the improvement: "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task." We are persuaded that constitutes an improvement to how the machine learning model itself operates, and not, for example, the identified mathematical calculation. Under a charitable view, the overbroad reasoning of the original panel below is perhaps understandable given the confusing nature of existing § 101 jurisprudence, but troubling, because this case highlights what is at stake. Categorically excluding AI innovations from patent protection in the United States jeopardizes America's leadership in this critical emerging technology.
Yet, under the panel's reasoning, many AI innovations are potentially unpatentable, even if they are adequately described and nonobvious, because the panel essentially equated any machine learning with an unpatentable "algorithm" and the remaining additional elements as "generic computer components," without adequate explanation. Dec. 24. Examiners and panels should not evaluate claims at such a high level of generality.”
Further in Ex Parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), the claimed invention was a method of training a machine learning model on a series of tasks. The Appeals Review Panel (ARP) overall credited benefits including reduced storage, reduced system complexity and streamlining, and preservation of performance attributes associated with earlier tasks during subsequent computational tasks as technological improvements that were disclosed in the patent application specification. Specifically, the ARP upheld the Step 2A Prong One finding that the claims recited an abstract idea (i.e., a mathematical concept). In Step 2A Prong Two, the ARP then determined that the specification identified improvements as to how the machine learning model itself operates, including training a machine learning model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of “catastrophic forgetting” encountered in continual learning systems. Importantly, the ARP evaluated the claims as a whole in discerning that at least the limitation “adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task” reflected the improvement disclosed in the specification. Accordingly, the claims as a whole integrated what would otherwise be a judicial exception into a practical application at Step 2A Prong Two, and therefore the claims were held to be patent eligible.
The claim itself does not need to explicitly recite the improvement described in the specification (e.g., “thereby increasing the bandwidth of the channel”). See, e.g., Ex Parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), in which the specification identified the improvement to machine learning technology by explaining how the machine learning model is trained to learn new tasks while protecting knowledge about previous tasks to overcome the problem of “catastrophic forgetting,” and the claims reflected the improvement identified in the specification. Indeed, enumerated improvements identified in the Desjardins specification included disclosures of the effective learning of new tasks in succession in connection with specifically protecting knowledge concerning previously accomplished tasks; allowing the system to reduce use of storage capacity; and the enablement of reduced complexity in the system. Such improvements go to how the machine learning model itself functions in operation and therefore were not subsumed in the identified mathematical calculation.
The second paragraph of MPEP § 2106.05(a), subsection I, is revised to add new examples xiii and xiv to the list of examples that may show an improvement in computer functionality:
xiii. An improved way of training a machine learning model that protected the model’s knowledge about previous tasks while allowing it to effectively learn new tasks; Ex Parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential); and
xiv. Improvements to computer component or system performance based upon adjustments to parameters of a machine learning model associated with tasks or workstreams; Ex Parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential).
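For context only, the Desjardins limitation quoted above, adjusting parameters to optimize a second task while protecting performance on a first task, resembles well-known continual-learning regularization techniques such as elastic weight consolidation. The sketch below is purely illustrative, uses hypothetical names, and makes no representation about the actual claims at issue here or in Desjardins.

```python
# Illustrative only: an elastic-weight-consolidation-style penalty,
# one known way to optimize performance on a second task while
# protecting performance on a first task. Names are hypothetical.
def penalized_loss(task2_loss, params, anchor_params, importance, lam=1.0):
    """task2_loss: model loss on the new (second) task.
    anchor_params: parameter values learned on the first task.
    importance: per-parameter importance weights for the first task.
    The quadratic penalty discourages moving parameters that mattered
    for the first task, thereby protecting its performance."""
    penalty = sum(f * (p - a) ** 2
                  for p, a, f in zip(params, anchor_params, importance))
    return task2_loss + (lam / 2.0) * penalty

# Moving a parameter the first task depended on costs more than moving
# one it did not depend on.
important = penalized_loss(0.0, [1.0, 0.0], [0.0, 0.0], [10.0, 0.0])
unimportant = penalized_loss(0.0, [0.0, 1.0], [0.0, 0.0], [10.0, 0.0])
assert important > unimportant
```

The design point is that the penalty term, not the second-task loss alone, is what preserves first-task knowledge during subsequent training.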
Step 2B: DOES THE CLAIM RECITE ADDITIONAL ELEMENTS THAT AMOUNT TO SIGNIFICANTLY MORE THAN THE JUDICIAL EXCEPTION?
No. The claims can be performed by a person learning patterns of positive or negative sentiment, as in general reading comprehension. The courts have found similar concepts to be abstract ideas, for example:
• Collecting and comparing known information (Classen)
• Collecting information, analyzing it, and displaying certain results of the collection and analysis (Electric Power Group; West View)
• Comparing information regarding a sample or test subject to a control or target data (Ambry/Myriad CAFC)
• Comparing new and stored information and using rules to identify options (SmartGene)
Assistance for Applicant in amending to overcome 101:
Limitations that the courts have found to qualify as “significantly more” when recited in a claim with a judicial exception include:
i. Improvements to the functioning of a computer, e.g., a modification of conventional Internet hyperlink protocol to dynamically produce a dual-source hybrid webpage, as discussed in DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258-59, 113 USPQ2d 1097, 1106-07 (Fed. Cir. 2014) (see MPEP § 2106.05(a));
ii. Improvements to any other technology or technical field, e.g., a modification of conventional rubber-molding processes to utilize a thermocouple inside the mold to constantly monitor the temperature and thus reduce under- and over-curing problems common in the art, as discussed in Diamond v. Diehr, 450 U.S. 175, 191-92, 209 USPQ 1, 10 (1981) (see MPEP § 2106.05(a));
iii. Applying the judicial exception with, or by use of, a particular machine, e.g., a Fourdrinier machine (which is understood in the art to have a specific structure comprising a headbox, a paper-making wire, and a series of rolls) that is arranged in a particular way to optimize the speed of the machine while maintaining quality of the formed paper web, as discussed in Eibel Process Co. v. Minn. & Ont. Paper Co., 261 U.S. 45, 64-65 (1923) (see MPEP § 2106.05(b));
iv. Effecting a transformation or reduction of a particular article to a different state or thing, e.g., a process that transforms raw, uncured synthetic rubber into precision-molded synthetic rubber products, as discussed in Diehr, 450 U.S. at 184, 209 USPQ at 21 (see MPEP § 2106.05(c));
v. Adding a specific limitation other than what is well-understood, routine, conventional activity in the field, or adding unconventional steps that confine the claim to a particular useful application, e.g., a non-conventional and non-generic arrangement of various computer components for filtering Internet content, as discussed in BASCOM Global Internet v. AT&T Mobility LLC, 827 F.3d 1341, 1350-51, 119 USPQ2d 1236, 1243 (Fed. Cir. 2016) (see MPEP § 2106.05(d)); or
vi. Other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment, e.g., an immunization step that integrates an abstract idea of data comparison into a specific process of immunizing that lowers the risk that immunized patients will later develop chronic immune-mediated diseases, as discussed in Classen Immunotherapies Inc. v. Biogen IDEC, 659 F.3d 1057, 1066-68, 100 USPQ2d 1492, 1499-1502 (Fed. Cir. 2011) (see MPEP § 2106.05(e)).
To assist Applicant in amending the claims and for analysis purposes, example claims 3 and 4 from the courts are listed below; however, potential amendments are not limited to the provided examples, and alternative amendments are possible using items i-vi from the courts above. The examples below show the differences between an eligible claim (court claim 4) and an ineligible claim (court claim 3), illustrating "significantly more" that is tied to hardware not generically recited in the art: in this case, a general changing of font size in claim 3 versus a significant step of conditionally changing font size tied to hardware in claim 4.
See below examples based on MPEP and not on the current claim set, to help amend to overcome 101 rejections:
Regarding independent claim examples:
For instance in the example claims, for example claims 3 and 4 below:
Ineligible
3. A computer‐implemented method of resizing textual information within a window displayed in a graphical user interface, the method comprising:
(not significant) generating first data for describing the area of a first graphical element;
(not significant) generating second data for describing the area of a second graphical element containing textual information;
(not significant) calculating, by the computer, a scaling factor for the textual information which is proportional to the difference between the first data and second data.
The claim recites that the step of calculating a scaling factor is performed by “the computer” (referencing the computer recited in the preamble). Such a limitation gives “life, meaning and vitality” to the preamble and, therefore, the preamble is construed to further limit the claim. (See MPEP 2111.02.)
However, the mere recitation of “computer‐implemented” is akin to adding the words “apply it” in conjunction with the abstract idea. Such a limitation is not enough to qualify as significantly more. With regards to the graphical user interface limitation, the courts have found that simply limiting the use of the abstract idea to a particular technological environment is not significantly more. (See, e.g., Flook.)
Whereas in similar claim 4:
Eligible
4. A computer‐implemented method for dynamically relocating textual information within an underlying window displayed in a graphical user interface, the method comprising:
displaying a first window containing textual information in a first format within a graphical user interface on a computer screen;
displaying a second window within the graphical user interface;
constantly monitoring the boundaries of the first window and the second window to detect an overlap condition where the second window overlaps the first window such that the textual information in the first window is obscured from a user’s view;
determining the textual information would not be completely viewable if relocated to an unobstructed portion of the first window;
calculating a first measure of the area of the first window and a second measure of the area of the unobstructed portion of the first window;
calculating a scaling factor which is proportional to the difference between the first measure and the second measure;
scaling the textual information based upon the scaling factor;
(significant step) automatically relocating the scaled textual information, by a processor, to the unobscured portion of the first window in a second format during an overlap condition so that the entire scaled textual information is viewable on the computer screen by the user;
(significant step) automatically returning the relocated scaled textual information, by the processor, to the first format within the first window when the overlap condition no longer exists.
These limitations are not merely attempting to limit the mathematical algorithm to a particular technological environment. Instead, these claim limitations recite a specific application of the mathematical algorithm that improves the functioning of the basic display function of the computer itself. As discussed above, the scaling and relocating the textual information in overlapping windows improves the ability of the computer to display information and interact with the user.
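For illustration only (this sketch is not part of the claim or the record, and the example dimensions are assumptions), the scaling-factor calculation recited in claim 4 can be expressed in a few lines:

```python
# Illustrative sketch of the calculation recited in claim 4: a scaling
# factor proportional to the difference between the area of the first
# window and the area of its unobstructed portion.

def scaling_factor(first_area: float, unobstructed_area: float) -> float:
    """Fraction by which to shrink the text, proportional to the area difference."""
    if first_area <= 0:
        raise ValueError("window area must be positive")
    difference = first_area - unobstructed_area
    return difference / first_area  # proportional to the difference

def scale_text(point_size: float, first_area: float, unobstructed_area: float) -> float:
    """Scale the textual information based upon the scaling factor."""
    return point_size * (1.0 - scaling_factor(first_area, unobstructed_area))

# e.g. a 200x100 window whose unobstructed portion is 120x100:
new_size = scale_text(12.0, 200 * 100, 120 * 100)  # 12 pt text shrinks to 7.2 pt
```

The point of the eligibility analysis is that this arithmetic, by itself, is an abstract idea; it is the claimed use of its result to relocate and restore the text on screen that improves the display function.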
The dependent claims are rejected for the same reasoning, as being directed to patent-ineligible subject matter under 35 U.S.C. 101 and as not adding eligible subject matter to the respective parent claim.
Claims 2 and 5-8 are directed to the same scope without further limiting the parent claim in any significant way, focusing on generalized relevancy and on positive versus negative sentiment for clustered/grouped data. The acts of those claims still pertain to human-performable steps and do not demonstrate a practical application.
Claims 3 and 4 are loosely directed to a positive or negative model for determining sentiment, without further limiting the scope to a practical application beyond mental or extra-solution activity.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over US 20080256040 A1 Sundaresan; Neelakantan et al. (hereinafter Sundaresan) in view of US 10037491 B1 Fang; Ji et al. (hereinafter Fang).
Re claim 1, Sundaresan teaches
[Claim 1](Currently amended) An extraction device comprising: at least one memory storing a processing instruction; and at least one processor configured to execute the processing instruction, (fig. 1)
the at least one processor acquiring a first natural sentence input by a user, (inputting a query or request for instance 0049 with fig. 3-5)
extracting at least a target word of a relevant word and the target word from the first natural sentence acquired by the acquisition section using a model learnt to output the relevant word and the target word with a second natural sentence as an input, the relevant word being a word defining relevancy between words included in the second natural sentence and a target word being a word serving as a target of the relevant word, and (as in fig. 1, 14, and 16 with 0060 1st through nth phrases and words input are clustered together based on sentiment such as positive or negative or neutral, using multiple models for each phrase/word parsed 0065 and features thereof grouped 0083 … based on various user reviews or searches such as by inputting a query or request or review for instance 0049 with fig. 3-5)
outputting the target word extracted by the extraction section. (outputting results e.g. fig. 3-5… as in fig. 1, 14, and 16 with 0060 1st through nth phrases and words input are clustered together based on sentiment such as positive or negative or neutral, using multiple models for each phrase/word parsed 0065 and features thereof grouped 0083 … based on various user reviews or searches such as by inputting a query or request or review for instance 0049 with fig. 3-5)
However, while Sundaresan teaches automatically aggregating and clustering data without human intervention using multiple models for scoring (i.e., learning under the broadest reasonable interpretation), it fails to teach a learning model per se.
Fang teaches the use of a learning model (machine/model learning as in fig. 3a, with classification for sentiment in fig. 2a; 0020 and 0055-0060).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Sundaresan to incorporate the above claim limitations as taught by Fang, as a simple substitution of one known element (a learning model in the context of sentiment analysis) for another (the scoring models and intent extraction of Sundaresan) to obtain predictable results. Doing so improves classification by adjusting the parameters of the model to improve its predictions on the training data; unknown context sensitivity (for instance, the values of the comment/review features) can be handled by learning, and classifying according to the rules of the model in context more accurately classifies the sentiment associated with the comment/review data.
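As generic context for the substitution rationale above, a minimal learned sentiment classifier of the kind at issue can be sketched as follows. This perceptron over bag-of-words features is illustrative only; it is not drawn from Sundaresan or Fang, and the training data is hypothetical:

```python
# Generic sketch of a learned sentiment classifier: parameters (weights)
# are adjusted whenever the model mispredicts a training example, which is
# the "adjusting parameters to improve predictions" discussed above.
from collections import defaultdict

def train(reviews, labels, epochs=10):
    # Perceptron over bag-of-words features; labels are +1 (positive) or -1 (negative).
    w = defaultdict(float)
    for _ in range(epochs):
        for text, y in zip(reviews, labels):
            tokens = text.lower().split()
            score = sum(w[tok] for tok in tokens)
            if (1 if score >= 0 else -1) != y:
                for tok in tokens:  # adjust weights toward the correct label
                    w[tok] += y
    return w

def classify(w, text):
    score = sum(w[tok] for tok in text.lower().split())
    return "positive" if score >= 0 else "negative"

weights = train(["great camera", "terrible battery"], [1, -1])
```

Context sensitivity enters through the learned weights: a token's contribution to the score is whatever value training assigned it, rather than a fixed hand-written rule.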
Re claim 9, this claim is rejected for the same reasons as claim 1, as it recites a broader or narrower version of claim 1 differing only in the general inclusion or omission of hardware (e.g., processor, memory, instructions), otherwise amounting to a virtually identical scope.
Re claim 10, this claim is rejected for the same reasons as claim 1, as it recites a broader or narrower version of claim 1 differing only in the general inclusion or omission of hardware (e.g., processor, memory, instructions), otherwise amounting to a virtually identical scope.
Re claim 2, Sundaresan teaches
[Claim 2] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction extracts at least the target word of a pair of the target word and the relevant word corresponding to each relevancy using a plurality of models learnt for each relevancy defined by the relevant word. (two or more words grouped as in fig. 1, 14, and 16 with 0060 1st through nth phrases and words input are clustered together based on sentiment such as positive or negative or neutral, using multiple models for each phrase/word parsed 0065 and features thereof grouped 0083 … based on various user reviews or searches such as by inputting a query or request or review for instance 0049 with fig. 3-5)
Re claim 3, Sundaresan, while teaching classification based on at least sentiment, fails to teach a dedicated sentiment model:
[Claim 3] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction extracts at least the target word of the relevant word indicating a positive feeling and the target word from the first natural sentence acquired by the acquisition section using a positive model extracting a pair of the relevant word indicating a positive feeling and the target word. (Fang machine/model learning as in 3a with positive and negative models using classification for sentiment in fig. 2a with 0020 and 0055-0060)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Sundaresan to incorporate the above claim limitations as taught by Fang, as a simple substitution of one known element (a dedicated, learned sentiment model in the context of sentiment analysis) for another (the scoring models and intent extraction of Sundaresan) to obtain predictable results. Doing so reduces false positives and improves classification by adjusting the parameters of the model to improve its predictions on the training data; unknown context sensitivity (for instance, the values of the comment/review features) can be handled by learning, and classifying according to the rules of the model in context more accurately classifies the sentiment associated with the comment/review data.
Re claim 4, Sundaresan, while teaching classification based on at least sentiment, fails to teach a dedicated sentiment model:
[Claim 4] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction extracts at least the target word of the relevant word indicating a negative feeling and the target word from the first natural sentence acquired by the acquisition section using a negative model extracting a pair of the relevant word indicating a negative feeling and the target word. (Fang machine/model learning as in 3a with positive and negative models using classification for sentiment in fig. 2a with 0020 and 0055-0060)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Sundaresan to incorporate the above claim limitations as taught by Fang, as a simple substitution of one known element (a dedicated, learned sentiment model in the context of sentiment analysis) for another (the scoring models and intent extraction of Sundaresan) to obtain predictable results. Doing so reduces false positives and improves classification by adjusting the parameters of the model to improve its predictions on the training data; unknown context sensitivity (for instance, the values of the comment/review features) can be handled by learning, and classifying according to the rules of the model in context more accurately classifies the sentiment associated with the comment/review data.
Re claim 5, Sundaresan teaches
[Claim 5] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction learns a model to extract and output the relevant word and the target word for a natural sentence using a result of labeling the relevant word and the target word for the second natural sentence after parsing, and (labeled as in classified or clustered, as in fig. 1, 14, and 16 with 0060 1st through nth phrases and words input are clustered together based on sentiment such as positive or negative or neutral, using multiple models for each phrase/word parsed 0065 and features thereof grouped 0083 … based on various user reviews or searches such as by inputting a query or request or review for instance 0049 with fig. 3-5)
extracts at least the target word of the relevant word and the target word using a model learnt by the learning section. (as in fig. 1, 14, and 16 with 0060 1st through nth phrases and words input are clustered together based on sentiment such as positive or negative or neutral, using multiple models for each phrase/word parsed 0065 and features thereof grouped 0083 … based on various user reviews or searches such as by inputting a query or request or review for instance 0049 with fig. 3-5)
However, while Sundaresan teaches automatically aggregating and clustering data without human intervention using multiple models for scoring (i.e., learning under the broadest reasonable interpretation), it fails to teach a learning model per se.
Fang teaches the use of a learning model (machine/model learning as in fig. 3a, with classification for sentiment in fig. 2a; 0020 and 0055-0060).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Sundaresan to incorporate the above claim limitations as taught by Fang, as a simple substitution of one known element (a learning model in the context of sentiment analysis) for another (the scoring models and intent extraction of Sundaresan) to obtain predictable results. Doing so improves classification by adjusting the parameters of the model to improve its predictions on the training data; unknown context sensitivity (for instance, the values of the comment/review features) can be handled by learning, and classifying according to the rules of the model in context more accurately classifies the sentiment associated with the comment/review data.
Re claim 6, Sundaresan teaches
[Claim 6] (Currently amended) The extraction device according to claim 5, wherein the at least one processor configured to execute the processing instruction learns a model using the labeling result and feature amounts of words stored in advance. (data is aggregated as new information is entered or sourced from new user reviews, as in fig. 1, 14, and 16 with 0060 1st through nth phrases and words input are clustered together based on sentiment such as positive or negative or neutral, using multiple models for each phrase/word parsed 0065 and features thereof grouped 0083 … based on various user reviews or searches such as by inputting a query or request or review for instance 0049 with fig. 3-5)
However, while Sundaresan teaches automatically aggregating and clustering data without human intervention using multiple models for scoring (i.e., learning under the broadest reasonable interpretation), it fails to teach a learning model per se.
Fang teaches the use of a learning model (machine/model learning as in fig. 3a, with classification for sentiment in fig. 2a; 0020 and 0055-0060).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Sundaresan to incorporate the above claim limitations as taught by Fang, as a simple substitution of one known element (a learning model in the context of sentiment analysis) for another (the scoring models and intent extraction of Sundaresan) to obtain predictable results. Doing so improves classification by adjusting the parameters of the model to improve its predictions on the training data; unknown context sensitivity (for instance, the values of the comment/review features) can be handled by learning, and classifying according to the rules of the model in context more accurately classifies the sentiment associated with the comment/review data.
Re claim 7, Sundaresan teaches
[Claim 7] (Currently amended) The extraction device according to claim 1, wherein the at least one processor configured to execute the processing instruction applies preprocessing for visualizing a factor of relevancy of a request from a user defined by the relevant word to the target word extracted by the extraction section, and (the GUI of fig. 4 displays the processed data… as in fig. 1, 14, and 16 with 0060 1st through nth phrases and words input are clustered together based on sentiment such as positive or negative or neutral, using multiple models for each phrase/word parsed 0065 and features thereof grouped 0083 … based on various user reviews or searches such as by inputting a query or request or review for instance 0049 with fig. 3-5)
outputs a result of the preprocessing performed by the preprocessing section. (outputted on the GUI of fig. 4 which displays the processed data… as in fig. 1, 14, and 16 with 0060 1st through nth phrases and words input are clustered together based on sentiment such as positive or negative or neutral, using multiple models for each phrase/word parsed 0065 and features thereof grouped 0083 … based on various user reviews or searches such as by inputting a query or request or review for instance 0049 with fig. 3-5)
Re claim 8, Sundaresan teaches
[Claim 8] (Currently amended) The extraction device according to claim 7, wherein the at least one processor configured to execute the processing instruction applies clustering as the preprocessing to the target word extracted by the extraction section, and (as in fig. 1, 14, and 16 with 0060 1st through nth phrases and words input are clustered together based on sentiment such as positive or negative or neutral, using multiple models for each phrase/word parsed 0065 and features thereof grouped 0083 … based on various user reviews or searches such as by inputting a query or request or review for instance 0049 with fig. 3-5)
outputs a result of the clustering applied by the preprocessing section. (outputted on the GUI of fig. 4 which displays the processed data… as in fig. 1, 14, and 16 with 0060 1st through nth phrases and words input are clustered together based on sentiment such as positive or negative or neutral, using multiple models for each phrase/word parsed 0065 and features thereof grouped 0083 … based on various user reviews or searches such as by inputting a query or request or review for instance 0049 with fig. 3-5)
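As a generic illustration of the clustering-style preprocessing mapped to claims 7 and 8 (this sketch is not drawn from Sundaresan; the word/sentiment pairs below are hypothetical), extracted target words can be grouped by the sentiment of their associated relevant word before output:

```python
# Hypothetical sketch: group extracted target words into clusters keyed by
# sentiment (positive / negative / neutral) prior to outputting them.
def cluster_by_sentiment(pairs):
    """pairs: iterable of (target_word, sentiment) tuples."""
    clusters = {"positive": [], "negative": [], "neutral": []}
    for target, sentiment in pairs:
        # setdefault tolerates sentiment labels beyond the three above
        clusters.setdefault(sentiment, []).append(target)
    return clusters

out = cluster_by_sentiment(
    [("battery", "negative"), ("screen", "positive"), ("price", "positive")]
)
```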
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 9799035 B2 Cama; Karl J. et al.
Review mapping into emotion/sentiment
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C COLUCCI whose telephone number is (571)270-1847. The examiner can normally be reached on M-F 9 AM - 5 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders can be reached at (571)272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL COLUCCI/Primary Examiner, Art Unit 2655 (571)-270-1847
Examiner FAX: (571)-270-2847
Michael.Colucci@uspto.gov