Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1-22-2026 has been entered.
Response to Amendment
In the previous Office Action issued 8-22-2025 (hereinafter “the previous Office Action”), claims 1-27 were pending.
This action is in response to the amendment and remarks filed 1-22-2026. In the amendment, claims 1, 7, 13, and 20 were amended, no claims were canceled, and no claims were added. Thus, claims 1-27 are pending.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12-4-2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1-6 are directed to a processor [machine].
Claims 7-12 are directed to a system [machine].
Claims 13-19 are directed to a method [process].
Claims 20-27 are directed to a non-transitory, machine-readable medium [machine].
Regarding Claim 1:
Step 2A, Prong 1: The following limitations are directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind or with pen and paper (including an observation, evaluation, judgment, or opinion).
generate an event prediction…
calculation of a distance…
As drafted, under their broadest reasonable interpretation (BRI), in view of the specification, the above limitations cover concepts performed in the human mind (observation, evaluation, judgment, or opinion). Given a sufficiently small set of data, nothing in the claim prohibits this process from being performed mentally or with pen and paper.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements are adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fail to integrate the judicial exception into a practical application.
A processor, comprising: one or more circuits to:
use one or more neural networks to:
The following additional element is directed to insignificant extra-solution activity to the judicial exception [see MPEP 2106.05(g)].
provide output data that indicates the event prediction
The following additional elements can be considered as generally linking the use of the judicial exception to a particular technological environment or field of use [see MPEP § 2106.05(h)]. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
…based at least in part, on: indications of when prior events occurred
wherein the indications comprise a first time of publication of a first published text comprising content corresponding to a first event and a second time of publication of a second published text comprising content corresponding to a second event that is different from the first event
…between the first event extracted from the first published text and the second event extracted from the second published text based, at least in part, on a length of time between the first time of publication of the first published text that the first event was extracted from and the second time of publication of the second published text that the second event, different than the first event, was extracted from
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional elements are adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fail to amount to significantly more than the judicial exception.
A processor, comprising: one or more circuits to:
use one or more neural networks to:
The following additional element is directed to receiving or transmitting data over a network. The courts (as per Intellectual Ventures v. Symantec, 838 F.3d 1307, 1321; 120 USPQ2d 1353, 1362 (Fed. Cir. 2016)) have recognized receiving or transmitting data over a network as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity to the judicial exception [see MPEP 2106.05(d) II.].
provide output data that indicates the event prediction
The following additional elements can be considered as generally linking the use of the judicial exception to a particular technological environment or field of use [see MPEP § 2106.05(h)]. Therefore, the additional elements do not amount to significantly more than the judicial exception.
…based at least in part, on: indications of when prior events occurred
wherein the indications comprise a first time of publication of a first published text comprising content corresponding to a first event and a second time of publication of a second published text comprising content corresponding to a second event that is different from the first event
…between the first event extracted from the first published text and the second event extracted from the second published text based, at least in part, on a length of time between the first time of publication of the first published text that the first event was extracted from and the second time of publication of the second published text that the second event, different than the first event, was extracted from
Regarding Claim 2:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
The following limitations are directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion).
infer information about the textual data
extract the indications of when the prior events occurred from textual data
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements are adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fail to integrate the judicial exception into a practical application.
wherein the one or more circuits are to
to train the one or more neural networks to
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional elements are adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fail to amount to significantly more than the judicial exception.
wherein the one or more circuits are to
to train the one or more neural networks to
Regarding Claim 3:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
The following limitation is directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion).
infer information based, at least in part, on a chronological order of two or more prior events
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the judicial exception into a practical application.
wherein the one or more neural networks are to
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to amount to significantly more than the judicial exception.
wherein the one or more neural networks are to
Regarding Claim 4:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the judicial exception into a practical application.
wherein the one or more circuits are to train the one or more neural networks by pre-training the one or more neural networks to perform a plurality of different time-based tasks
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to amount to significantly more than the judicial exception.
wherein the one or more circuits are to train the one or more neural networks by pre-training the one or more neural networks to perform a plurality of different time-based tasks
Regarding Claim 5:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the judicial exception into a practical application.
wherein the one or more circuits use representations of uncut paragraphs of text to train the one or more neural networks
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to amount to significantly more than the judicial exception.
wherein the one or more circuits use representations of uncut paragraphs of text to train the one or more neural networks
Regarding Claim 6:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element does not meaningfully limit the judicial exception [see MPEP 2106.05(e)]. The claim simply recites additional information regarding the characteristics of the data used to generate an event prediction. Therefore, the additional element does not integrate the abstract ideas into a practical application.
wherein the indications are based, at least in part, on where two or more of the prior events appear in a publication
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element does not meaningfully limit the judicial exception [see MPEP 2106.05(e)]. The claim simply recites additional information regarding the characteristics of the data used to generate an event prediction. Therefore, the additional element does not amount to significantly more than the judicial exception.
wherein the indications are based, at least in part, on where two or more of the prior events appear in a publication
Regarding Claim 7:
Claim 7 corresponds to claim 1.
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application. The analysis of claim 7 at this step mirrors that of claim 1, with the exception of the following limitations.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the judicial exception into a practical application.
A system, comprising: one or more processors to…
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception. The analysis of claim 7 at this step mirrors that of claim 1, with the exception of the following limitations.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to amount to significantly more than the judicial exception.
A system, comprising: one or more processors to…
Regarding Claim 8:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 7.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the judicial exception into a practical application.
wherein the one or more neural networks comprise a bidirectional encoder representations from transformers (BERT) learning model
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to amount to significantly more than the judicial exception.
wherein the one or more neural networks comprise a bidirectional encoder representations from transformers (BERT) learning model
Regarding Claim 9:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 7.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element does not meaningfully limit the judicial exception [see MPEP 2106.05(e)]. The claim simply recites additional information regarding the characteristics of the data used to generate an event prediction. Therefore, the additional element does not integrate the abstract ideas into a practical application.
wherein the indications are based, at least in part, on where two or more of the prior events appear in a publication
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element does not meaningfully limit the judicial exception [see MPEP 2106.05(e)]. The claim simply recites additional information regarding the characteristics of the data used to generate an event prediction. Therefore, the additional element does not amount to significantly more than the judicial exception.
wherein the indications are based, at least in part, on where two or more of the prior events appear in a publication
Regarding Claim 10:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 7.
The following limitation is directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion).
infer a causal relationship between events
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the judicial exception into a practical application.
wherein the one or more processors are further to train the one or more neural networks to
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to amount to significantly more than the judicial exception.
wherein the one or more processors are further to train the one or more neural networks to
Regarding Claim 11:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 7.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements do not meaningfully limit the judicial exception [see MPEP 2106.05(e)]. The claim simply recites additional information regarding the characteristics of the training of neural networks for the generation of an event prediction. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
wherein training the one or more neural networks is further based, at least in part, on pre-training with multiple different tasks
wherein performance of the different tasks is based on when one or more events occurred
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional elements do not meaningfully limit the judicial exception [see MPEP 2106.05(e)]. The claim simply recites additional information regarding the characteristics of the training of neural networks for the generation of an event prediction. Therefore, the additional element does not amount to significantly more than the judicial exception.
wherein training the one or more neural networks is further based, at least in part, on pre-training with multiple different tasks
wherein performance of the different tasks is based on when one or more events occurred
Regarding Claim 12:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 7.
The following limitation is directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion).
calculate a distance between two or more events based, at least in part, on whether the two or more events appeared in different publications
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the judicial exception into a practical application.
wherein the one or more processors train the one or more neural networks to
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to amount to significantly more than the judicial exception.
wherein the one or more processors train the one or more neural networks to
Regarding Claim 13:
Claim 13 corresponds to claim 1.
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application. The analysis of claim 13 at this step mirrors that of claim 1.
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception. The analysis of claim 13 at this step mirrors that of claim 1.
Regarding Claim 14:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 13.
The following limitation is directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion).
masks a date on which at least one of the prior events occurred
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the judicial exception into a practical application.
wherein using the one or more neural networks
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to amount to significantly more than the judicial exception.
wherein using the one or more neural networks
Regarding Claim 15:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 13.
The following limitation is directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion).
infer a causal relationship between events and entities
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the judicial exception into a practical application.
further comprising training the one or more neural networks to
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to amount to significantly more than the judicial exception.
further comprising training the one or more neural networks to
Regarding Claim 16:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 13.
The following limitation is directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgment, or opinion).
calculate a distance between prior events based, at least in part, on where one or more of the prior events appeared in a publication
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the judicial exception into a practical application.
further comprising training the one or more neural networks to
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to amount to significantly more than the judicial exception.
further comprising training the one or more neural networks to
Regarding Claim 17:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 13.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the judicial exception into a practical application.
wherein training the one or more neural networks is further based, at least in part, on a task of finding expressions in textual data that refer to an entity
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to amount to significantly more than the judicial exception.
wherein training the one or more neural networks is further based, at least in part, on a task of finding expressions in textual data that refer to an entity
Regarding Claim 18:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 13.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the judicial exception into a practical application.
wherein the training of one or more neural networks is further based, at least in part, on dwell times for one or more question-document pairs from a log
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to amount to significantly more than the judicial exception.
wherein the training of one or more neural networks is further based, at least in part, on dwell times for one or more question-document pairs from a log
Regarding Claim 19:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 13.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to integrate the judicial exception into a practical application.
wherein the training of one or more neural networks is further based, at least in part, on rewriting a question of a passage-question pair
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element is adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fails to amount to significantly more than the judicial exception.
wherein the training of one or more neural networks is further based, at least in part, on rewriting a question of a passage-question pair
Regarding Claim 20:
Claim 20 corresponds to claim 1.
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 1.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application. The analysis of claim 20 at this step mirrors that of claim 1, with the exception of the following limitations.
The following additional elements are adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)] and therefore fail to integrate the judicial exception into a practical application.
A non-transitory, machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least:
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception. The analysis of claim 20 at this step mirrors that of claim 1, with the exception of the following limitations.
The following additional elements amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to amount to significantly more than the judicial exception.
A non-transitory, machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least:
Regarding Claim 21:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 20.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fails to integrate the judicial exception into a practical application.
wherein the one or more neural networks are to be trained further based, at least in part, by masking entities
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fails to amount to significantly more than the judicial exception.
wherein the one or more neural networks are to be trained further based, at least in part, by masking entities
Regarding Claim 22:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 20.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional element amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fails to integrate the judicial exception into a practical application.
wherein the one or more neural networks are to be trained further based, at least in part, on masking capitalized phrases
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional element amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fails to amount to significantly more than the judicial exception.
wherein the one or more neural networks are to be trained further based, at least in part, on masking capitalized phrases
Regarding Claim 23:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 20.
The following limitation is directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgement, or opinion).
infer causal relationships between entities and events
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to integrate the judicial exception into a practical application.
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to:
train the one or more neural networks to
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional elements amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to amount to significantly more than the judicial exception.
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to:
train the one or more neural networks to
Regarding Claim 24:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 20.
The following limitation is directed to the abstract idea of a mental process [see MPEP 2106.04(a)(2) III. C.]. In particular, the claim recites mental processes that are concepts performed in the human mind (including an observation, evaluation, judgement, or opinion).
calculate one or more distances between at least two of the prior events based, at least in part, on a length of time between the occurrences of the at least two prior events
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to integrate the judicial exception into a practical application.
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to:
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional elements amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to amount to significantly more than the judicial exception.
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to:
Regarding Claim 25:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 20.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements can be considered as generally linking the use of the judicial exception to a particular technological environment or field of use [see MPEP § 2106.05(h)]. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
wherein a first timestamp of metadata associated with the first published text indicates the first time of publication of the first published text and a second timestamp of metadata associated with the second published text indicates the second time of publication of the second published text
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional elements can be considered as generally linking the use of the judicial exception to a particular technological environment or field of use [see MPEP § 2106.05(h)]. Therefore, the additional elements do not amount to significantly more than the judicial exception.
wherein a first timestamp of metadata associated with the first published text indicates the first time of publication of the first published text and a second timestamp of metadata associated with the second published text indicates the second time of publication of the second published text
Regarding Claim 26:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 20.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to integrate the judicial exception into a practical application.
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to:
train the one or more neural networks using market movement prediction tasks
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional elements amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to amount to significantly more than the judicial exception.
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to:
train the one or more neural networks using market movement prediction tasks
Regarding Claim 27:
Step 2A, Prong 1: The claim recites the same abstract ideas as in claim 20.
Step 2A, Prong 2: There are no additional elements in this claim that integrate the judicial exception into a practical application.
The following additional elements amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to integrate the judicial exception into a practical application.
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to:
train the one or more neural networks to predict stock prices
Step 2B: There are no additional elements in this claim that amount to significantly more than the judicial exception.
The following additional elements amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea [see MPEP 2106.05(f)], and therefore fail to amount to significantly more than the judicial exception.
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to:
train the one or more neural networks to predict stock prices
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5-7, 9, 13, 20, and 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen et al. (US 20210049700), hereinafter Nguyen, in view of Petroni et al. (US 20190012374), hereinafter Petroni.
Regarding Claim 1:
Nguyen discloses:
A processor, comprising: one or more circuits to:
Nguyen, [0041], “In one aspect a computer implemented system for future entity states includes one or more computer processors operating in conjunction with a computer memory and non-transitory computer readable media…”
Nguyen discloses a computer implemented system with one or more processors [processor, comprising: one or more circuits].
use one or more neural networks to: generate an event prediction based, at least in part, on: indications of when prior events occurred,
Nguyen, [0083], “The disclosed system may extract relevant signals from various financial data sources (e.g., unstructured news data and transaction data) to predict if companies will need capital funding in the future. In order to make such predictions, a neural network (e.g., natural language processor), a first machine learning model, and a second machine learning model are applied on both structured numerical and unstructured textual news data (financial fundamentals, news, press releases, earning calls, etc.)”
[0183], “For example, during training, the neural network may ingest numerical time-series data and attempt to determine a future feature value for each instance in the numerical time-series data. The ingested numerical time-series data includes a label for each instance in the numerical time-series data with the subsequent feature value”
Nguyen discloses using neural networks and machine learning models [use one or more neural networks] and various financial data sources to predict if companies will need capital funding [generate an event prediction]. In para. 183, the neural network is stated to ingest time-series data [based, at least in part, on: indications of when prior events occurred].
wherein the indications comprise a first time of publication of a first published text comprising content corresponding to a first event and a second time of publication of a second published text comprising content corresponding to a second event that is different than the first event
Nguyen, [0094], “Data source(s) 116 provide the system 102 with numerical time-series data including a plurality of feature data structures having a feature value… and an instance value (e.g., a time or date associated with the feature value) alternatively referred to herein as numerical data. Data source(s) 116 also provide the system 102 with unstructured textual news data including one or more documents having one or more words, such as news, press releases, earning calls, and so forth, alternatively referred to herein as news data. Data source(s) 116, for example, can include financial terminals such as Bloomberg™, SNL™, S&P™, Factset™, Dealogic™, Thomson Reuters™, which provide numerical and unstructured data”
[0046], “In example embodiments, the one or more processors further configured to receive a second data set having a second numerical time series data or a second unstructured textual data.”
[i.e., the time-series data of multiple publications includes dates (indications of time of publication) of a first and second event; the daily frequency of stock prices suggests the date is the time of publication]
Para. 46 further discloses a second set of data that is different from the first set of data.
provide output data that indicates the event prediction
Nguyen, [0299], “FIG. 24 is a diagram 2400 of a sample dashboard rendered on a graphical user interface, according to some embodiments outputting the results of one or more of the models discussed herein.”
[0085], “The first machine learning models are able to make time series forecasting with Mean Square Error (MSE) of less than 0.001 on a test sample; the neural networks are able to extract and correctly classify text into 7 categories with accuracy of 91.134% and f1 score of 0.91280. The results generated by the techniques are input for Naïve Bayes Inference algorithms to output the probability of whether a company will need capital in the future.”
In para. 299, Nguyen discloses a GUI that outputs the results of the models [provide output data], and para. 85 specifies the outputs of the algorithms are the probability of whether a company will need capital in the future [indicates the event prediction].
Nguyen does not explicitly disclose:
calculation of a distance between the first event extracted from the first published text and the second event extracted from the second published text based, at least in part, on a length of time between the first time of publication of the first published text that the first event was extracted from and the second time of publication of the second published text that the second event, different than the first event, was extracted from
However, in the same field, analogous art Petroni teaches:
calculation of a distance between the first event extracted from the first published text and the second event extracted from the second published text based, at least in part, on a length of time between the first time of publication of the first published text that the first event was extracted from and the second time of publication of the second published text that the second event, different than the first event, was extracted from
Petroni, [0152], “The news event extraction module 844 detects events referenced by the news articles, extracts information about the detected events, and generates an event representation including attributes of the event based on the extracted information. The news event database module 848 stores the generated event representations”
[0203], “For clusters including only a single social media posting, the timestamp of the posting may be selected as the time attribute”
[0212], “The one or more similarity measures may be based on…a time attribute of the news article event representation and a time attribute of the social media cluster event representation”
[0220], “At step 1906, a temporal similarity ST between the news article and social media cluster is determined as a similarity based on the temporal attributes of the event representations of and/or temporal expressions extracted from the news article and social media cluster. The determining of the temporal similarity ST may include…determining the temporal similarity ST as the minimum time difference between temporal expressions in the news temporal vector and in the social media temporal vector”
In para. 152, Petroni teaches that extracted published events are used as data in the system. Para. 203 states that the timestamp of the posting (the time of publication) is used as a time attribute. Then, in para. 212, Petroni uses a similarity measure based on the time attributes of the article and social media event (publication) data. Lastly, in para. 220, the temporal similarity measurement, which calculates a time difference between the texts extracted from the publications, functions as a calculation of a distance based on a length of time between the published texts. The articles and the social media events (publications) are different from each other.
Nguyen, Petroni, and the instant application are analogous art because they are all directed to classifying data using machine learning (see, e.g., Nguyen, paragraph [0012] and Petroni, paragraph [0081]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the neural network event prediction processor of Nguyen to incorporate the teachings of Petroni to include distance calculations between events extracted from first and second published texts that are based in part on a length of time between the first time of publication of the first published text and the second time of publication of the second published text. Doing so would have allowed Nguyen to use Petroni’s method in order to “process and store this event information in an event representation form as used for the news article, social media and coreferenced events”, as suggested by Petroni (see, e.g., Petroni, paragraph [0240]).
Regarding Claim 2:
As discussed above, Nguyen in view of Petroni teach the processor of claim 1, and Nguyen further discloses:
wherein the one or more circuits are to extract the indications of when the prior events occurred from textual data to train the one or more neural networks to infer information about the textual data
Nguyen, [0083], “The disclosed system may extract relevant signals from various financial data sources (e.g., unstructured news data and transaction data) to predict if companies will need capital funding in the future. In order to make such predictions, a neural network (e.g., natural language processor), a first machine learning model, and a second machine learning model are applied on both structured numerical and unstructured textual news data (financial fundamentals, news, press releases, earning calls, etc.)”
Regarding Claim 3:
As discussed above, Nguyen in view of Petroni teach the processor of claim 1, and Nguyen further discloses:
wherein the one or more neural networks are to infer information based, at least in part, on a chronological order of the prior events
Nguyen, [0173], “Neural network 104 is configured to receive the one or more numerical time-series data related to a set of historical financial transactions and process the received one or more numerical time-series data to generate one or more future feature data structures having a future feature value and a future instance value”
Regarding Claim 5:
As discussed above, Nguyen in view of Petroni teach the processor of claim 1, and Nguyen further discloses:
wherein the one or more circuits use representations of uncut paragraphs of text to train the one or more neural networks
Nguyen, [0231], “The preprocessing unit 118 may process the received unstructured textual news data set 1102 in a variety of manners. For example, the preprocessing unit 118 can process the received unstructured textual news data set 1102 to remove stop words (e.g., commonly recurring work at no meaning resentenced), remove punctuation, remove additional spacing, remove numbers, and remove special characters”
[0233] “During operation, the word vectorizer 1106 may be configured to represent each of the one or more words in each document within the unstructured textual news data set 1102…by structuring the vectorized representation with a term frequency and inverse document frequency (TF-IDF) of text words within each document.”
[0256], “FIGS. 19A and 19B show an example implementation 1900A and 1900B of a Doc2Vec word vectorizer 1106 in Gensim. In the shown embodiment, the Doc2Vec word vectorizer 1106 trained for 100 epochs on the example new data set, with the minimum word count set to two in order to discard words with very few occurrences.”
[i.e., the Doc2Vec word vectorizer is a neural network]
Regarding Claim 6:
As discussed above, Nguyen in view of Petroni teach the processor of claim 1, and Petroni further teaches:
wherein the indications are based, at least in part, on where two or more of the prior events appear in a publication
Petroni, [0171], “The location attribute extraction module 926 generates a location attribute for the event using the candidate attributes”
[i.e., the candidate attributes are directly associated with indications of events]
[0181], “The feature vector for each candidate location may be composed based on the news article and candidate attributes. For example, feature vector for each candidate location may be composed as a concatenation of the following…(4) a position offset of the candidate location in the news article”
[i.e., the position offset of the candidate location (event indication) in the news article (publication) represents an encoding of where the candidates appear in the publication for classification].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the neural network event prediction processor of Nguyen to incorporate the teachings of Petroni to use indications based on where two or more of the prior events appear in a publication. Doing so would have allowed Nguyen to use Petroni’s method for “classifying numeric references of the candidate attributes and adjacent word sequences as either representing an impact of the event or not”, as suggested by Petroni (see, e.g., Petroni, paragraph [0184]).
Regarding Claim 7:
Claim 7 is a system claim corresponding to processor claim 1 and is rejected for at least the same reasons as given in the rejection of claim 1, with the exception of the following limitations.
Nguyen discloses:
A system, comprising: one or more processors
Nguyen, [0041], “In one aspect a computer implemented system for future entity states includes one or more computer processors operating in conjunction with a computer memory and non-transitory computer readable media…”
Nguyen discloses a computer implemented system with one or more processors [A system, comprising: one or more processors].
Regarding Claim 9:
Claim 9 is a system claim corresponding to processor claim 6 and is rejected for at least the same reasons as given in the rejection of claim 6.
Regarding Claim 13:
Claim 13 is a method claim corresponding to processor claim 1 and is rejected for at least the same reasons as given in the rejection of claim 1.
Regarding Claim 20:
Claim 20 is a non-transitory, machine-readable medium claim corresponding to processor claim 1 and is rejected for at least the same reasons as given in the rejection of claim 1, with the exception of the following limitations.
Nguyen discloses:
A non-transitory, machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least:
Nguyen, [0041], “In one aspect a computer implemented system for future entity states includes one or more computer processors operating in conjunction with a computer memory and non-transitory computer readable media…”
Nguyen discloses a non-transitory computer readable media with computer memory and one or more processors.
Regarding Claim 25:
As discussed above, Nguyen in view of Petroni teach the non-transitory, machine-readable medium of claim 20, and Petroni further teaches:
wherein a first timestamp of metadata associated with the first published text indicates the first time of publication of the first published text and a second timestamp of metadata associated with the second published text indicates the second time of publication of the second published text
Petroni, [0020], “FIGS. 5l-5n is an exemplary metadata of an event detected cluster with ingested data of FIG. 5e as one of the related unit data”
FIG. 5g:
See, e.g., the timestamp on metadata associated with published texts, in FIG. 5g.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen to incorporate the teachings of Petroni to utilize timestamps of metadata associated with published texts of publications indicating their respective times of publication. Doing so would have allowed Nguyen to use Petroni’s method in order to “process and store this event information in an event representation form as used for the news article, social media and coreferenced events”, as suggested by Petroni (see, e.g., Petroni, paragraph [0240]).
Regarding Claim 26:
As discussed above, Nguyen in view of Petroni teach the non-transitory, machine-readable medium of claim 20, and Nguyen further discloses:
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to: train the one or more neural networks using market movement prediction tasks
Nguyen, [0012], “In operation, the neural network receives raw numerical data inputs, such as market data relating to an entity including stock price data, volume data, and moving averages of the same, and can generate an output data structure representing a predicted price moving average for a future timestate.”
[0183], “The neural network includes one or more Recurrent Neural Network (RNN) layers (e.g., a first LSTM layer 704 and a second LSTM layer 706, one Kth-order Hidden Markov Models), for analysing, forecasting and detecting anomalies in numerical time-series data [i.e., market movement prediction tasks]…the neural network is trained to generate a future feature data structure having a future feature value and a future instance value for numerical time-series data. For example, during training, the neural network may ingest numerical time-series data and attempt to determine a future feature value for each instance in the numerical time-series data”
Regarding Claim 27:
As discussed above, Nguyen in view of Petroni teach the non-transitory, machine-readable medium of claim 20, and Nguyen further discloses:
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to: train the one or more neural networks to predict stock prices
Nguyen, [0225] “FIG. 9A is a diagram 900A generated where an example numerical time-series data for Gibson Energy, as listed on the TSX, was processed by the neural network 104 to predict a 30 day moving average of the stock price.”
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Nguyen in view of Petroni, and further in view of Gage et al. (US 20100235310), hereinafter Gage.
Regarding Claim 4:
As discussed above, Nguyen in view of Petroni teach the processor of claim 1, but do not explicitly disclose:
wherein the one or more circuits are to train the one or more neural networks by pre-training the one or more neural networks to perform a plurality of different time-based tasks
However, in the same field, analogous art Gage teaches:
wherein the one or more circuits are to train the one or more neural networks by pre-training the one or more neural networks to perform a plurality of different time-based tasks
Gage, [0010], “The artificial neural network can be included in various methods, apparatus, and articles for use in predicting or profiling events.”
[0014], “a computer system for predicting a future event is provided…a trained artificial neural network [i.e., already trained / pre-trained]…and configured to produce new trainable nodes…”
[0051], “to predict or profile events in systems that have substantial dynamics over long time scales…used to predict or profile events in various finance systems. i.e., in stock markets, commodities markets, options markets…”
[0052], “Various embodiments may be useful in forecasting, for example in the field of sports and sporting events…”
Nguyen, Petroni, Gage, and the instant application are analogous art because they are all directed to machine learning methods involving neural networks that are used for financial prediction tasks (see, e.g., Nguyen, paragraph [0083], Petroni, paragraph [0176] and Gage, abstract and paragraph [0051]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Gage to pre-train neural networks to perform a variety of time-based tasks. Doing so would have allowed Nguyen in view of Petroni to use Gage’s methods “for use in predicting or profiling events,” as suggested by Gage (see, e.g., Gage, [0010]).
Claims 8, 14, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen in view of Petroni, and further in view of Hajarnis et al. (US 12056592), hereinafter Hajarnis.
Regarding Claim 8:
As discussed above, Nguyen in view of Petroni teach [the] system of claim 7, but do not explicitly disclose:
wherein the one or more neural networks comprise a bidirectional encoder representations from transformers (BERT) learning model
However, in the same field, analogous art Hajarnis teaches:
wherein the one or more neural networks comprise a bidirectional encoder representations from transformers (BERT) learning model
Hajarnis, col 12, lines 30-36, “Implementations may then process different segments in parallel using a BERT neural network layer containing multiple BERT nodes, and then using forward and reverse direction sequence-to-sequence layers to further fine-tune the results to a specific task. Implementations of the disclosure have been tested to show superior performance for NLP tasks.”
Nguyen, Petroni, Hajarnis, and the instant application are analogous art because they are all directed to implementing neural networks with natural language processing (see, e.g., Nguyen, paragraph [0026], Petroni, paragraphs [0176]-[0178], and Hajarnis, col. 1, lines 8-9).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Hajarnis to implement bidirectional encoder representations from transformers (BERT) learning models in neural networks. Doing so would have allowed Nguyen in view of Petroni to use Hajarnis' method in order “[t]o achieve deeper understanding of the underlying text,” as suggested by Hajarnis (see, e.g., Hajarnis, col. 2, lines 60-61).
Regarding Claim 14:
As discussed above, Nguyen in view of Petroni teach [the] method of claim 13, but do not explicitly disclose:
wherein using the one or more neural networks masks a date on which at least one of the prior events occurred
However, in the same field, analogous art Hajarnis teaches:
wherein using the one or more neural networks masks a date on which at least one of the prior events occurred
Hajarnis, col 2, lines 63-67, “BERT may use a technique called Masked Language Modeling (MLM) that may randomly mask words in a sentence and then try to predict the masked words from other words in the sentence surrounding the masked words from both left and right of the masked words.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Hajarnis to use BERT models that employ masking techniques to make predictions from context in sentences for certain tokens, such as dates (e.g., time, day, month, year, etc.), for optimal training of neural networks. Doing so would have allowed Nguyen in view of Petroni to use Hajarnis' method in order “[t]o achieve deeper understanding of the underlying text,” as suggested by Hajarnis (see, e.g., Hajarnis, col. 2, lines 60-61).
Regarding Claim 21:
As discussed above, Nguyen in view of Petroni teach [the] non-transitory machine-readable medium of claim 20, but do not explicitly disclose:
wherein the one or more neural networks are to be trained further based, at least in part, by masking entities
However, in the same field, analogous art Hajarnis teaches:
wherein the one or more neural networks are to be trained further based, at least in part, by masking entities
Hajarnis, col 2, lines 63-67, “BERT may use a technique called Masked Language Modeling (MLM) that may randomly mask words in a sentence and then try to predict the masked words from other words in the sentence surrounding the masked words from both left and right of the masked words.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Hajarnis to use BERT models that employ masking techniques to make predictions from context in sentences for certain tokens, such as entities (e.g., objects, places, etc.), for optimal training of neural networks. Doing so would have allowed Nguyen in view of Petroni to use Hajarnis' method in order “[t]o achieve deeper understanding of the underlying text,” as suggested by Hajarnis (see, e.g., Hajarnis, col. 2, lines 60-61).
Regarding Claim 22:
As discussed above, Nguyen in view of Petroni teach [the] non-transitory machine-readable medium of claim 20, but do not explicitly disclose:
wherein the one or more neural networks are to be trained further based, at least in part, on masking capitalized phrases
However, in the same field, analogous art Hajarnis teaches:
wherein the one or more neural networks are to be trained further based, at least in part, on masking capitalized phrases
Hajarnis, col 2, lines 63-67, “BERT may use a technique called Masked Language Modeling (MLM) that may randomly mask words in a sentence and then try to predict the masked words from other words in the sentence surrounding the masked words from both left and right of the masked words.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Hajarnis to use BERT models that employ masking techniques to make predictions from context in sentences for certain tokens, such as capitalized phrases (e.g., people, places, acronyms, etc.), for optimal training of neural networks. Doing so would have allowed Nguyen in view of Petroni to use Hajarnis' method in order “[t]o achieve deeper understanding of the underlying text,” as suggested by Hajarnis (see, e.g., Hajarnis, col. 2, lines 60-61).
Claims 10-12, 15-17, and 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen in view of Petroni, and further in view of Blair et al. (US 10878505), hereinafter Blair.
Regarding Claim 10:
As discussed above, Nguyen in view of Petroni teach [the] system of claim 7, but do not explicitly disclose:
wherein the one or more processors are further to train the one or more neural networks to infer a causal relationship between events
However, in the same field, analogous art Blair teaches:
wherein the one or more processors are further to train the one or more neural networks
Blair, col. 16, lines 10-15, “commodity-specific neural networks 190…which inform and enable the development of comprehensive forecasts within the commodity forecasting algorithm 180.”
to infer a causal relationship between events
Blair, col. 15, lines 27-31, “The time-series modeling engine 170 converts information…that reflects the temporally-relevant explanation of the information therein to understand how events affect commodity values over time.”
[i.e., a causal relationship between events]
Col. 31, lines 25-43, “At step 350 the process the performs the commodity forecasting algorithm…with the sequence of discrete time data points 178 to create a set of classified, normalized content 186 for the selected commodity 102 that represents variables having an influence over the commodity state over the specified period of time, as well as”
Under the broadest reasonable interpretation (BRI), variables having influence over commodity states for a specified period of time suggest that the variables relate to supply-and-demand impacts on commodity prices or market conditions, which corresponds to a causal relationship between events. Additionally, under the BRI, a time-series modeling engine (described as a supervised instantiation of machine learning in Blair, col. 15, lines 5-6) that is used for understanding how events affect commodity prices would reasonably include a machine learning model such as a neural network, as neural networks are known to include supervised learning models capable of modeling time-series data.
Nguyen, Petroni, Blair, and the instant application are analogous art because they are all directed to neural networks that are used for prediction tasks (see, e.g., Nguyen, paragraph [0083], Petroni, paragraph [0176], and Blair, col. 1, lines 15-24).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Blair to train neural networks to infer a causal relationship between events. Doing so would have allowed Nguyen in view of Petroni to use Blair's method in order “to process large amounts of data from both historical pricing data and other, non-traditional data sources, which can be analyzed in differing temporal contexts (i.e., over time and in real time),” as suggested by Blair (see, e.g., Blair, col. 1, lines 55-57).
Regarding Claim 11:
As discussed above, Nguyen in view of Petroni teach [the] system of claim 7, but do not explicitly disclose:
wherein training the one or more neural networks is further based, at least in part, on pre-training with multiple different tasks
wherein performance of the different tasks is based on when one or more events occurred
However, in the same field, analogous art Blair teaches:
wherein training the one or more neural networks is further based, at least in part, on pre-training with multiple different tasks
Blair, col. 28, lines 12-33, “In unsupervised meta learning, machine learning algorithms themselves propose their own task distributions within this environment, given un-curated and unlabeled data [i.e., pre-training on unlabeled data], by automatically and continuously curating new commodity-specific data as it is ingested. In the present invention, proposing (or learning) a task distribution includes optimizing parameters for the reward function that maps variables (sentiment) to different reward functions (price prediction) for each commodity… In this manner, unsupervised meta learning causes the system to discover its own optimized task distributions based on reward functions that it learns based on environmental similarities. Unsupervised meta learning is therefore utilized to learn best approaches for improving the production neural network(s) 191 and the training neural network(s) 192”
wherein performance of the different tasks is based on when one or more events occurred
Blair, col. 27, lines 43-52, “at least to improve upon training data sets for training neural network(s) 192, and the outcomes of the production neural network(s) 191. The one or more deep learning meta networks 200 are therefore a deeper layer within the neural network modeling layer. Both of these functions are designed to produce outcomes in forecasts of future prices which are as close as possible to simulations of commodity prices using historical pricing data, and improve the overall accuracy [i.e., correct performance] of the multi-layer machine learning-based model.”
Col. 20, lines 22-46, “Neural networks having a recurrent architecture may also have stored, or controlled, internal states which permit storage under direct control of the neural network, making them more suitable for inputs having a temporal nature…In the present invention, where output data 220 is in the form of forecasted states, or prices, of commodities at some future, an understanding of the influence of various events and sentiment on a state over a period of time lead to more highly accurate and reliable forecasts.”
[i.e., the neural network performance is dependent on when events occurred]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Blair to pre-train neural networks with tasks whose correct performance is dependent on time-based events. Doing so would have allowed Nguyen in view of Petroni to use Blair’s method in order to “develop(ing) and apply(ing) a deep learning aspect to the model, (so that) the algorithm(s) can determine on its own if a prediction is accurate or not through its own neural network,” as suggested by Blair (see, e.g., Blair, col. 2, lines 12-15).
Regarding Claim 12:
As discussed above, Nguyen in view of Petroni teach [the] system of claim 7, but do not explicitly disclose:
wherein the one or more processors train the one or more neural networks to calculate a distance between two or more events based
at least in part, on whether the two or more events appeared in different publications
However, in the same field, analogous art Blair teaches:
wherein the one or more processors train the one or more neural networks to calculate a distance between two or more events based
Blair, col. 18, lines 57-64, “The neural network modeling layer therefore uses the commodity-specific indicators for each taxonomy (keywords, frequencies of their occurrence and distance relationships between them), to construct the nodes and connections of the production neural networks 191. Each textual vector corresponds to an assigned keyword, frequency, or distance relationship, and each node and connection is designed to model a particular aspect of the commodity 102.”
Col. 26, line 65 through col. 27, line 9, “These activities are utilized to infer relevance in new documents that are ingested into the data analytics platform 100 that are un-curated, or unlabeled. In this manner, as the new information about each commodity is ingested, the neural network modeling layer is able to train itself both to identify relevant documents, and build on existing taxonomies of indicators for each commodity being analyzed. Still further, identifying such clusters 202 also enables development of a taxonomy corpus that is entity-specific as to the entity generating the corporate communications documents 136, and allows for training on entity-specific neural networks.”
at least in part, on whether the two or more events appeared in different publications
Blair col. 9, lines 8-24, “News reports 131…Other textual information sources 133 include any other sources of unstructured data that must be processed to develop a sentiment 161 therefrom. These may include magazine articles, blog posts, forum comments (or comments to any news article, magazine article, or blog post) and any other digital sources of informational reports or comments.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Blair to train neural networks to calculate distances between multiple events based, at least in part, on the locality of said events in different publications. Doing so would have allowed Nguyen in view of Petroni to use Blair’s method in order “to integrate a knowledge-based approach into such neural networks that applies rules developed from fundamental economic and commodity-specific indicators in forecasting commodity states,” as suggested by Blair (see, e.g., Blair, col. 3, line 64 to col. 4, line 3).
Regarding Claim 15:
As discussed above, Nguyen in view of Petroni teach [the] method of claim 13, but do not explicitly disclose:
further comprising training the one or more neural networks to infer a causal relationship between events and entities
However, in the same field, analogous art Blair teaches:
further comprising training the one or more neural networks to infer a causal relationship between events and entities
Blair, col. 29, lines 27-43, “neural networks configured to learn how to improve the outcomes generated by the one or more neural networks 190, reinforced by a deeper understanding of how the input data 110 is influenced by, characterized by, or affected by its environment…in the present invention, a selected commodity 102—has a price that is necessarily influenced by characteristics which affect sentiment (its “environment”) through various interactions, such as economic policy and activity, weather, etc. [i.e., events] … building a further taxonomy of indicators 205 from corporate communications documents 136. ”
And with reference to the entities, see, e.g., Blair, col. 7, lines 20-24, “may also include corporate communications documents 136…filings that corporate entities are required to file with governmental regulatory agencies”
Under the broadest reasonable interpretation (BRI), learning how input data is influenced, or affected by its environment including interactions such as economic policy, can be interpreted as inferring a causal relationship between events and entities, because corporate entities are impacted by economic policy events, such as federal interest rate changes, and said relationships are learned or inferred by the neural network.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Blair to train neural networks to infer a causal relationship between events and entities. Doing so would have allowed Nguyen in view of Petroni to use Blair's method in order “to process large amounts of data from both historical pricing data and other, non-traditional data sources, which can be analyzed in differing temporal contexts (i.e., over time and in real time),” as suggested by Blair (see, e.g., Blair, col. 1, lines 55-57).
Regarding Claim 16:
As discussed above, Nguyen in view of Petroni teach [the] method of claim 13, but do not explicitly disclose:
further comprising training the one or more neural networks to calculate a distance between prior events based, at least in part, on where the one or more prior events appeared in a publication
However, in the same field, analogous art Blair teaches:
further comprising training the one or more neural networks to calculate a distance between prior events based, at least in part,
Blair, col. 18, lines 57-64, “The neural network modeling layer therefore uses the commodity-specific indicators for each taxonomy (keywords, frequencies of their occurrence and distance relationships between them), to construct the nodes and connections of the production neural networks 191. Each textual vector corresponds to an assigned keyword, frequency, or distance relationship, and each node and connection is designed to model a particular aspect of the commodity 102.”
Blair, col. 26, line 65 through col. 27, line 9, “These activities are utilized to infer relevance in new documents that are ingested into the data analytics platform 100 that are un-curated, or unlabeled. In this manner, as the new information about each commodity is ingested, the neural network modeling layer is able to train itself both to identify relevant documents, and build on existing taxonomies of indicators for each commodity being analyzed. Still further, identifying such clusters 202 also enables development of a taxonomy corpus that is entity-specific as to the entity generating the corporate communications documents 136, and allows for training on entity-specific neural networks.”
on where the one or more prior events appeared in a publication
Blair col. 9, lines 8-24, “News reports 131…Other textual information sources 133 include any other sources of unstructured data that must be processed to develop a sentiment 161 therefrom. These may include magazine articles, blog posts, forum comments (or comments to any news article, magazine article, or blog post) and any other digital sources of informational reports or comments”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Blair to train neural networks to calculate a distance between prior events based on their locality in a publication. Doing so would have allowed Nguyen in view of Petroni to use Blair’s method in order “to integrate a knowledge-based approach into such neural networks that applies rules developed from fundamental economic and commodity-specific indicators in forecasting commodity states,” as suggested by Blair (see, e.g., Blair, col. 3, line 64 to col. 4, line 3).
Regarding Claim 17:
As discussed above, Nguyen in view of Petroni teach [the] method of claim 13, but do not explicitly disclose:
wherein training the one or more neural networks is further based, at least in part, on a task of finding expressions in textual data that refer to an entity
However, in the same field, analogous art Blair teaches:
wherein training the one or more neural networks is further based, at least in part, on a task of finding expressions in textual data that refer to an entity
Blair, col. 27, lines 1-8, “In this manner, as the new information about each commodity is ingested, the neural network modeling layer is able to train itself both to identify relevant documents, and build on existing taxonomies of indicators for each commodity being analyzed. Still further, identifying such clusters 202 also enables development of a taxonomy corpus that is entity-specific as to the entity generating the corporate communications documents 136, and allows for training on entity-specific neural networks.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Blair to train neural networks on a task of finding expressions in textual data that refer to entities. Doing so would have allowed Nguyen in view of Petroni to use Blair’s method, which “enables development of a taxonomy corpus that is entity-specific as to the entity generating the corporate communications documents 136, and allows for training on entity-specific neural networks,” as suggested by Blair (see, e.g., Blair, col. 27, lines 7-9).
Regarding Claim 23:
As discussed above, Nguyen in view of Petroni teach [the] non-transitory machine-readable medium of claim 20, but do not explicitly disclose:
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to: train the one or more neural networks to infer causal relationships between entities and events
However, in the same field, analogous art Blair teaches:
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to: train the one or more neural networks
Blair, col. 16, lines 10-15, “These assessments are performed in conjunction with one or more commodity-specific neural networks 190 and one or more deep learning meta networks 200 as described further below, which inform and enable the development of comprehensive forecasts within the commodity forecasting algorithm 180”
to infer causal relationships between entities and events
Blair, col. 15, lines 27-31, “The time-series modeling engine 170 converts information…that reflects the temporally-relevant explanation of the information therein to understand how events affect commodity values over time.”
Col. 31, lines 25-43, “At step 350 the process the performs the commodity forecasting algorithm…with the sequence of discrete time data points 178 to create a set of classified, normalized content 186 for the selected commodity 102 that represents variables having an influence over the commodity state over the specified period of time, as well as”
With respect to the time-series modeling engine and the aspects of inferring causal relationships between events, the broadest reasonable interpretation (BRI) analysis set forth for claim 10 applies equally here.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Blair to train neural networks to infer causal relationships between entities and events. Doing so would have allowed Nguyen in view of Petroni to use Blair's method in order “to process large amounts of data from both historical pricing data and other, non-traditional data sources, which can be analyzed in differing temporal contexts (i.e., over time and in real time),” as suggested by Blair (see, e.g., Blair, col. 1, lines 55-57).
Regarding Claim 24:
As discussed above, Nguyen in view of Petroni teach [the] non-transitory machine-readable medium of claim 20, but do not explicitly disclose:
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to: calculate one or more distances between at least two of the prior events based, at least in part, on a length of time between the occurrences of the at least two prior events
However, in the same field, analogous art Blair teaches:
wherein the set of instructions further include instructions, which if performed by the one or more processors, cause the one or more processors to: calculate one or more distances between at least two of the prior events based, at least in part, on a length of time between the occurrences of the at least two prior events
Blair col. 18, lines 57-64, “The neural network modeling layer therefore uses the commodity-specific indicators for each taxonomy (keywords, frequencies of their occurrence and distance relationships between them), to construct the nodes and connections of the production neural networks 191. Each textual vector corresponds to an assigned keyword, frequency, or distance relationship, and each node and connection is designed to model a particular aspect of the commodity 102.”
Blair col. 26-27, lines 65-9, “These activities are utilized to infer relevance in new documents that are ingested into the data analytics platform 100 that are un-curated, or unlabeled. In this manner, as the new information about each commodity is ingested, the neural network modeling layer is able to train itself both to identify relevant documents, and build on existing taxonomies of indicators for each commodity being analyzed. Still further, identifying such clusters 202 also enables development of a taxonomy corpus that is entity-specific as to the entity generating the corporate communications documents 136, and allows for training on entity-specific neural networks.”
[i.e., distance relationships between textual vectors of multiple commodities (financial events) is calculated]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Blair to train neural networks to calculate distances between multiple past events based, at least in part, on the length of time between occurrences of said events. Doing so would have allowed Nguyen in view of Petroni to use Blair’s method in order “to integrate a knowledge-based approach into such neural networks that applies rules developed from fundamental economic and commodity-specific indicators in forecasting commodity states,” as suggested by Blair (see, e.g., Blair, col. 3, line 64 to col. 4, line 3).
Claims 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen in view of Petroni, and further in view of Gong et al. (WO2021146003), hereinafter Gong.
Regarding Claim 18:
As discussed above, Nguyen in view of Petroni teach [the] method of claim 13, but do not explicitly disclose:
wherein the training of one or more neural networks is further based, at least in part, on dwell times for one or more question-document pairs from a log
However, in the same field, analogous art Gong teaches:
wherein the training of one or more neural networks is further based, at least in part, on dwell times for one or more question-document pairs from a log
Gong, [0020], “The collection cost of implicit relevance feedbacks for web documents is relatively low, the quantity is large, and the burden on users is not increased. Various features for mining implicit relevance feedbacks for Web documents from user behaviors have been proposed, e.g., click information, average dwell time, number of page visits, etc.”
[0003], “Embodiments of the present disclosure provide methods and apparatuses for providing QA training data and training a QA model based on implicit relevance feedbacks. A question-passage pair and corresponding user behaviors may be obtained from a search log. Behavior features may be extracted from the user behaviors. A relevance score between the question and the passage may be determined, through an implicit relevance feedback model, based on the behavior features.”
[0068], “The QA model 610 shown in FIG.6 may have an architecture based on various technologies. For example, the QA model 610 may be based on a deep neural network, e.g., bidirectional long short term memory (BiLSTM), bidirectional encoder representation from transformers (BERT), etc.” [i.e., neural network]
Nguyen, Petroni, Gong, and the instant application are analogous art because they are all directed to using neural network models for inferencing tasks (see, e.g., Nguyen, paragraph [0085], Petroni, paragraph [0176] and Gong, paragraph [0018]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Gong to provide techniques for training neural networks based on dwell times for question-document pairs from a log. Doing so would have allowed Nguyen in view of Petroni to incorporate Gong’s method for obtaining “…good indicator(s) for the relevance between the passage and the question” [i.e., training neural networks on user behavior indicators for question-document pairs from a log] (see, e.g., Gong, [0052]).
Regarding Claim 19:
As discussed above, Nguyen in view of Petroni teach [the] method of claim 13, but do not explicitly disclose:
wherein the training of one or more neural networks is further based, at least in part, on rewriting a question of a passage-question pair
However, in the same field, analogous art Gong teaches:
wherein the training of one or more neural networks is further based, at least in part,
Gong, [0030], “The relevant question block 130 may include questions relevant to or similar to the user question in the search block 110. These relevant questions may include, e.g., questions frequently searched by other users. In FIG. 1, multiple questions relevant to the user question "summer flu treatment" are shown in the relevant question block 130, e.g., "What causes summer flu?", "Medicines for summer flu?", etc. When the user clicks on a relevant question…” [i.e., generating rewritten questions]
on rewriting a question of a passage-question pair
Gong, [0083], “In an implementation, the method 700 may further comprise: adding the question-passage pair and the relevance label into a QA training data set as a QA training data example.” [i.e., passage-question pairs used for the training data set]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nguyen in view of Petroni to incorporate the teachings of Gong to provide techniques for training neural networks to rewrite questions for question-passage pairs from a log. Doing so would have allowed Nguyen in view of Petroni to incorporate Gong’s method for inferring relevant questions for question-document pairs: “These relevant questions may include, e.g., questions frequently searched by other users” (see, e.g., Gong, [0030]).
Response to Arguments
Applicant's arguments filed 1-22-2026 (“Remarks”) have been fully considered but they are not persuasive.
35 U.S.C. § 103:
Remarks, pgs. 8-12: Applicant argues with respect to independent claim 1 that Petroni fails to teach amended claim 1. Examiner respectfully disagrees. First, as discussed under the § 103 rejections, para. 152 of Petroni teaches that event representations are extracted from data, para. 203 of Petroni teaches that the timestamps of social media posts (i.e., times of publication) are used as time attributes, and para. 212 of Petroni teaches using a similarity measure on the time attributes of the event representations. Putting it together, para. 220 of Petroni teaches using the similarity measure based on the time attributes of events to calculate a temporal similarity (disclosed to be the difference/distance) between a news article and a social media cluster. Under the broadest reasonable interpretation, the news article and the social media cluster disclosed by Petroni are construed as a first event representation and a second event representation. Therefore, Petroni is construed as teaching a distance between two different events that are extracted from different published texts (one a news article and the other a social media post), based on a length of time (temporal difference) between the publications.
Furthermore, regarding Applicant’s argument that Petroni fails to teach or suggest “generating an event prediction based, at least in part, on the above calculation,” Examiner respectfully disagrees. In response to Applicant’s arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). In particular, Nguyen is cited as disclosing using neural networks to generate event prediction data, and Petroni is cited as teaching the calculation. Thus, it is the combination of Nguyen and Petroni that teaches the claimed limitation.
35 U.S.C. § 101:
Remarks, pgs. 12-14: Applicant argues with respect to amended claim 1 and Step 2A that the claim recites elements that integrate the judicial exceptions into a practical application. In particular, Applicant points to the recited elements of using the neural networks to generate an event prediction and of providing output data that indicates the event prediction. Examiner respectfully disagrees. As discussed under the § 101 rejection, the use of the neural networks to generate an event prediction is analyzed as mere instructions to apply the exception. In particular, the neural network is recited at a high level of generality and is merely recited as being used to predict, wherein a prediction is a mental process (e.g., a human evaluation). Furthermore, providing output data that indicates the prediction is also recited at a high level of generality and has been analyzed as insignificant extra-solution activity (IESA). Additionally, Applicant cites the instant specification to argue that there is an improvement. Examiner respectfully disagrees. The cited portion of the specification merely asserts an improvement, but does not provide sufficient detail such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement [see MPEP 2106.05(a)]. For at least these reasons, the claim does not recite elements that integrate the judicial exceptions into a practical application, and remains rejected under 35 U.S.C. § 101.
Remarks, pgs. 14-15: Applicant argues with respect to amended claim 1 and Step 2B that claim 1 amounts to significantly more. In particular, Applicant argues that it is the combination of elements recited in claim 1 that amounts to more than the judicial exception. Examiner respectfully disagrees. The combination of claim limitations is directed to generating an event prediction (a mental process), which uses a distance calculation (a mental process). The prediction and the distance calculation are then, in combination, merely applied to the field of publications involving published texts. For at least these reasons, amended claim 1 does not amount to significantly more than the judicial exception, and remains rejected under 35 U.S.C. § 101.
Claims 7, 13, and 20, corresponding to claim 1, remain rejected under 35 U.S.C. § 101 for at least the same reasons.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN PHUNG whose telephone number is (703) 756-1499. The examiner can normally be reached Monday-Thursday: 9:00AM-4:00PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KAMRAN AFSHAR can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEVEN PHUNG/Examiner, Art Unit 2125
/KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125
1 Examiner interprets the phrase “market movement prediction tasks” in view of Paragraphs [0090-0091].