Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 5 and 8 are objected to because of the following informalities:
Claim 5, line 19, “escribing” should be changed to -- describing --.
Claim 8, line 1, “The medium of claim 8.” should be changed to -- The medium of claim 8, wherein the instructions, when executed, further comprise: --.
Appropriate correction is required.
DETAILED ACTION
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 – 18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step One
The claims are directed to a method (claims 1 - 6), a non-transitory computer readable medium (claims 7 - 12), and an apparatus with structural components (claims 13 - 18). Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
As to claim 7,
Step 2A, Prong One
The claim recites in part:
identify a plurality of first factors from the disentangled low-dimensional representation of the obtained data that affect an output of the artificial intelligence model;
determine a generative mapping from the disentangled low-dimensional representation between the identified plurality of first factors and the output of the artificial intelligence model, using causal reasoning;
generate explanation data using the determined generative mapping, wherein the generated explanation data provides a description of an operation leading to the output of the artificial intelligence model using the identified plurality of first factors;
As drafted and under their broadest reasonable interpretation, these limitations cover performance of the limitations in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example: (1) A human can easily identify a plurality of first factors by writing them down using pencil and paper or highlighting them on the screen of a generic computer. (2) A human can easily transform or summarize (generative mapping) the first factors to create some type of meaning. (3) A human can easily generate a diagram (generate explanation data) that clearly visualizes the relationships, descriptions, and insights of the first factors.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:
obtain a dataset as an input for an artificial intelligence model, wherein the obtained dataset is filtered to a disentangled low-dimensional representation;
which amounts to extra-solution activity of gathering data for use in the claimed process. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity to a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application.
The claim further recites:
provide the generated explanation data via a graphical user interface.
This element is recited at a high level of generality and amounts to no more than adding the words “apply it” to the judicial exception. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). This limitation also amounts to extra-solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data output (see MPEP 2106.05(g)).
The non-transitory machine readable medium, at least one machine, and user interface are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)).
Accordingly, at Step 2A, Prong Two, the additional elements, individually or in combination, do not integrate the judicial exception into a practical application.
Step 2B
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements of:
obtain a dataset as an input for an artificial intelligence model, wherein the obtained dataset is filtered to a disentangled low-dimensional representation;
are recited at a high level of generality and amount to extra-solution activity of receiving data, i.e., pre-solution activity of gathering data for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network,” “electronic record keeping,” and “storing and retrieving information in memory”).
The claim further recites:
provide the generated explanation data via a graphical user interface.
This limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” to the judicial exception. It also amounts to extra-solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data output (see MPEP 2106.05(g)). The courts have similarly found limitations directed to displaying a result, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “presenting offers and gathering statistics” and “determining an estimated outcome and setting a price”).
The non-transitory machine readable medium, at least one machine, and user interface are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)).
Accordingly, at Step 2B the additional elements individually or in combination do not amount to significantly more than the judicial exception.
As to claim 8,
Step 2A, Prong One
The claim recites in part:
learn the generated generative mapping data to generate the explanation data;
identify a plurality of second factors within the obtained data, wherein the identified plurality of second factors have lesser impact on the output of the artificial intelligence model when compared to the identified plurality of first factors.
As drafted and under their broadest reasonable interpretation, these limitations cover performance of the limitations in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example: (1) A human would naturally learn from transforming or summarizing (generative mapping) the first factors to create some type of meaning. (2) A human can easily identify a plurality of second factors by writing them down using pencil and paper or highlighting them on the screen of a generic computer. A human can also prioritize the first factors because they have a higher impact than the second factors.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.
Step 2B
The claim does not include additional elements that are sufficient to amount to “significantly more” than the judicial exception.
As to claim 9,
Step 2A, Prong One
The claim recites in part:
define a causal model representing a relationship between the identified first factors, second factors, and the output of the artificial intelligence model;
define a quantifying metric to quantify the causal influence of the identified first factors on the output of the artificial intelligence model; and
define a learning framework.
As drafted and under their broadest reasonable interpretation, these limitations cover performance of the limitations in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example: (1) A human can easily apply an independent first variable to the first factor and the second factor and compare the before and after outputs to see how the independent variable directly causes a change in the first factor and the second factor. (2) A human can easily determine a quantifying metric, which is simply the strength or impact the independent variable has on changing the output associated with the first factor and the second factor. (3) A human would naturally learn the cause and effect that an independent variable can have.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.
Step 2B
The claim does not include additional elements that are sufficient to amount to “significantly more” than the judicial exception.
As to claim 10,
Step 2A, Prong One
The claim recites in part:
describe a functional causal structure of the dataset;
derive an explanation from an indirect causal link from the identified plurality of first factors and the output of the artificial intelligence model.
As drafted and under their broadest reasonable interpretation, these limitations cover performance of the limitations in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example: (1) A human can describe anything, including a functional causal structure of the dataset. (2) A human can observe and analyze the indirect causal link and easily derive an explanation to understand why or how the indirect causal link has a causal effect on the first factors or the second factors. Humans were describing and deriving explanations before computers were even created.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.
Step 2B
The claim does not include additional elements that are sufficient to amount to “significantly more” than the judicial exception.
As to claim 11,
Step 2A, Prong One
The claim recites in part:
define the quantifying metric considering a factor to capture functional dependencies and quantify indirect causal relationship between the identified plurality of first factors and the output of the artificial intelligence model.
As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can easily determine a quantifying metric, which is simply the strength or impact the independent variable has on changing the output associated with the first factor and the second factor.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.
Step 2B
The claim does not include additional elements that are sufficient to amount to “significantly more” than the judicial exception.
As to claim 12,
Step 2A, Prong One
The claim recites in part:
the identified plurality of second factors does not affect the output of the artificial intelligence model.
As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can easily identify that a plurality of second factors does not affect the output by writing the factors down using pencil and paper or highlighting them on the screen of a generic computer.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.
Step 2B
The claim does not include additional elements that are sufficient to amount to “significantly more” than the judicial exception.
Claim 1 has similar limitations as claim 7. Therefore, the claim is rejected for the same reasons as above.
Claim 2 has similar limitations as claim 8. Therefore, the claim is rejected for the same reasons as above.
Claim 3 has similar limitations as claim 9. Therefore, the claim is rejected for the same reasons as above.
As to claim 4,
Step 2A, Prong One
The claim recites in part:
wherein the identifying, determining and generating are each by the causal explanation computing apparatus.
As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, humans have been identifying, determining, and generating since before computers were even invented.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:
obtaining, by a causal explanation computing apparatus, the dataset as an input for the artificial intelligence model, wherein the obtained dataset is filtered to the disentangled low-dimensional representation;
which amounts to extra-solution activity of gathering data for use in the claimed process. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity to a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application.
The claim further recites:
providing, by the causal explanation computing apparatus, the generated explanation data via a graphical user interface;
This element is recited at a high level of generality and amounts to no more than adding the words “apply it” to the judicial exception. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). This limitation also amounts to extra-solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data output (see MPEP 2106.05(g)).
The computing apparatus and graphical user interface are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)).
Accordingly, at Step 2A, Prong Two, the additional elements, individually or in combination, do not integrate the judicial exception into a practical application.
Step 2B
In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements of:
obtaining, by a causal explanation computing apparatus, the dataset as an input for the artificial intelligence model, wherein the obtained dataset is filtered to the disentangled low-dimensional representation;
are recited at a high level of generality and amount to extra-solution activity of receiving data, i.e., pre-solution activity of gathering data for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network,” “electronic record keeping,” and “storing and retrieving information in memory”).
The claim further recites:
providing, by the causal explanation computing apparatus, the generated explanation data via a graphical user interface;
This limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” to the judicial exception. It also amounts to extra-solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data output (see MPEP 2106.05(g)). The courts have similarly found limitations directed to displaying a result, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “presenting offers and gathering statistics” and “determining an estimated outcome and setting a price”).
The computing apparatus and graphical user interface are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)).
Accordingly, at Step 2B the additional elements individually or in combination do not amount to significantly more than the judicial exception.
As to claim 5,
Step 2A, Prong One
The claim recites in part:
learning, by the causal explanation computing apparatus, the generated generative mapping to generate the explanation data comprising:
defining, by the causal explanation computing apparatus, a causal model representing a relationship between the identified first factors, the second factors, and the output of the artificial intelligence model;
defining, by the causal explanation computing apparatus, a quantifying metric to quantify the causal influence of the identified first factors on the output of the artificial intelligence model; and
defining, by the causal explanation computing apparatus, a learning framework;
identifying, by the causal explanation computing apparatus, second factors within the obtained dataset, wherein the identified second factors have a lesser impact on the output of the artificial intelligence model when compared to the identified first factors;
wherein the quantifying metric is defined considering a factor to capture functional dependencies and quantify indirect causal relationship between the identified first factors and the output of the artificial intelligence model;
wherein the defining the causal model comprises:
escribing a functional causal structure of the dataset; and
deriving an explanation from an indirect causal link from the identified first factors and the output of the artificial intelligence model; and
wherein the quantifying metric is defined considering a factor to capture functional dependencies and quantify indirect causal relationship between the identified first factors and the output of the artificial intelligence model.
As drafted and under their broadest reasonable interpretation, these limitations cover performance of the limitations in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example: (1) A human would naturally learn the cause and effect that an independent variable can have. (2) A human can easily circle data with a pencil on a sheet of paper to “identify” data within a dataset. (3) A human can easily determine a quantifying metric, which is simply the strength or impact the independent variable has on changing the output associated with the first factor and the second factor. (4) A human can define a causal model as simply cause-and-effect relationships.
Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.
Step 2A, Prong Two
The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.
Step 2B
The claim does not include additional elements that are sufficient to amount to “significantly more” than the judicial exception.
Claim 6 has similar limitations as claim 12. Therefore, the claim is rejected for the same reasons as above.
Claim 13 has similar limitations as claim 7. Therefore, the claim is rejected for the same reasons as above.
The computing apparatus, memory, machine readable medium, storage system, processor, and graphical user interface are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)).
Claim 14 has similar limitations as claim 8. Therefore, the claim is rejected for the same reasons as above.
Claim 15 has similar limitations as claim 9. Therefore, the claim is rejected for the same reasons as above.
Claim 16 has similar limitations as claim 10. Therefore, the claim is rejected for the same reasons as above.
Claim 17 has similar limitations as claim 11. Therefore, the claim is rejected for the same reasons as above.
Claim 18 has similar limitations as claim 12. Therefore, the claim is rejected for the same reasons as above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 - 18 are rejected under 35 U.S.C. 103 as being unpatentable over Woo et al (US 2023/0105970) in view of Moore et al (US 10,402,726).
As to claim 7, Woo et al (figures 2 – 5) shows and teaches a non-transitory machine readable medium having stored thereon instructions comprising machine executable code which, when executed by at least one machine, causes the machine to (paragraph [0045]… memory 520 may include non-transitory, tangible, machine readable media that includes executable code that when run by one or more processors (e.g., processor 510) may cause the one or more processors to perform the methods described in further detail herein):
obtain a dataset as an input for an artificial intelligence model, wherein the obtained dataset is filtered to a disentangled low-dimensional representation (paragraph [0027]… the system 200, also referred to as a representation learning model 200; paragraph [0028]… In the system 200, a backbone encoder 202 maps observations to a latent space, e.g., projecting m-dimensional raw signals into a d-dimensional latent space for each timestep; paragraph [0045]… a trained season-trend representative learning module 530 may receive input that includes a time series data 540 via the data interface 515 and generate representations of time series data 550 as output) (Examiner’s Note: “receive input that includes a time series data via the data interface of the system 200” reads on “obtain a dataset as input for an artificial intelligence model”; “learning module” reads on “artificial intelligence model”; “backbone encoder 202 maps observations to a latent space, e.g., projecting m-dimensional raw signals into a d-dimensional latent space” reads on “filtered”; by definition, a latent space is much smaller (lower-dimensional) than the raw data, acting as a compressed, abstract representation that captures essential features while discarding noise);
identify a plurality of first factors from the disentangled low-dimensional representation of the obtained data that affect an output of the artificial intelligence model (paragraph [0028]… trend feature disentangler 204 (also referred to as a trend feature extractor 204) may extract the trend representations (e.g., via a mixture of auto-regressive experts), and may be learnt via a time domain contrastive loss 208 (denoted as L_time) using contrastive learning; paragraph [0041]… the method 400 may proceed to block 418, where a forecasting task is performed based on the learned feature representations) (Examiner’s Note: “trend feature disentangler” reads on “identify”; “trend representations” reads on “first factors”; “forecasting task is performed” reads on “output”; the Applicant does not define what the first factor is, so the examiner interprets the first factor as the trend feature and the second factor as the seasonal feature);
determine a generative mapping from the disentangled low-dimensional representation between the identified plurality of first factors and the output of the artificial intelligence model, using causal reasoning (paragraph [0028]… trend feature disentangler 204 (also referred to as a trend feature extractor 204) may extract the trend representations (e.g., via a mixture of auto-regressive experts), and may be learnt via a time domain contrastive loss 208 (denoted as L_time) using contrastive learning; paragraph [0031]… Extracting the underlying trend is crucial for modeling time series. Auto-regressive filtering may be used to capture time-lagged causal relationships from past observations…. As illustrated in FIG. 3A, in some embodiments, the trend feature disentangler 300 may include a mixture of L+1 autoregressive experts. In an example, L = ⌊log2(h/2)⌋. Each expert 306-i may be implemented as a 1d causal convolution with d input channels and d_T output channels, where the kernel size of the i-th expert is 2^i. Each expert outputs a matrix Ṽ^(T,i) = CausalConv(Ṽ, 2^i). An average-pooling operation may be performed over the outputs to obtain the final trend representations, by average pool unit 308 (equation [00003])) (Examiner’s Note: “causal convolution + contrastive learning” reads on “generative mapping”; “causal relationships” reads on “causal reasoning”);
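For technical orientation only, the following is a minimal, hypothetical PyTorch sketch of the mechanism the quoted paragraph [0031] describes: a mixture of causal 1d convolutions with kernel sizes 2^i whose outputs are average-pooled into a single trend representation. The class name, dimensions, and example values are illustrative assumptions, not Woo et al's actual implementation.

```python
# Minimal, hypothetical sketch (not Woo et al's code): a mixture of causal
# 1d convolutions with kernel sizes 2^i whose outputs are average-pooled,
# mirroring the trend disentangler described in the quoted paragraph [0031].
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConvMixture(nn.Module):
    def __init__(self, d_in: int, d_out: int, horizon: int):
        super().__init__()
        # L + 1 experts, L = floor(log2(h/2)); the i-th expert has kernel 2^i.
        num_experts = int(math.floor(math.log2(horizon / 2))) + 1
        self.kernel_sizes = [2 ** i for i in range(num_experts)]
        self.experts = nn.ModuleList(
            nn.Conv1d(d_in, d_out, kernel_size=k) for k in self.kernel_sizes
        )

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # v: (batch, d_in, time). Left-padding makes each convolution causal,
        # so the output at time t depends only on inputs at times <= t.
        outputs = [
            expert(F.pad(v, (k - 1, 0)))
            for expert, k in zip(self.experts, self.kernel_sizes)
        ]
        # Average-pool across experts to get the final trend representation.
        return torch.stack(outputs, dim=0).mean(dim=0)


# Example: batch of 8 series, 64 latent channels, 96 timesteps.
trend = CausalConvMixture(d_in=64, d_out=64, horizon=96)(torch.randn(8, 64, 96))
```

The sketch shows only the expert-and-pooling structure; the reference additionally trains these representations with the time domain contrastive loss discussed in the quoted paragraphs.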
generate explanation data using the determined generative mapping, wherein the generated explanation data provides a description of an operation leading to the output of the artificial intelligence model using the identified plurality of first factors (paragraph [0033]… A contrastive loss in the time domain (e.g., time domain contrastive loss) may be used to learn discriminative trend representations; paragraph [0040]… the representation learning method 402 may proceed to block 408, where a time domain contrastive loss is generated based on the trend feature representations; paragraph [0041]… after the representation learning model is trained at block 402, the method 400 may proceed to block 416, where learned feature representations including disentangled trend feature representations and seasonal feature representations are generated using the trained representation learning model. The method 400 may proceed to block 418, where a forecasting task is performed based on the learned feature representations) (Examiner’s Note: “generated time domain contrastive loss” reads on “generated explanation data”; “learn discriminative trend representations” reads on “description of an operation”; “forecasting task is performed based on the learned feature representations” reads on “an operation leading to the output of the artificial intelligence model”).
Woo et al fails to explicitly show or teach: provide the generated explanation data via a graphical user interface.
However, Moore et al teaches providing the generated explanation data via a graphical user interface (column 2, lines 35 – 55… the simulation output may include a distribution of possible values for each feature of the selected feature set that results in the target value being within a particular range (e.g., a tolerance) of the target value. Alternatively, the selected feature set may undergo principal component analysis (PCA) to determine a set of latent features, and the set of latent features may be sampled and perturbed to generate the output simulations. The simulation results may then be presented, e.g., via a graphical user interface, to a user to enable the user to make a decision based on the simulation results) (Examiner’s Note: “the simulation output may include a distribution of possible values for each feature of the selected feature set that results in the target value being within a particular range (e.g., a tolerance) of the target value” reads on “explanation data”).
Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, for Woo et al to provide the generated explanation data via a graphical user interface, as in Moore et al, for the purpose of presenting simulation results to a user to enable the user to make a decision based on the simulation results.
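As background on the Moore et al passage cited above, the following hedged sketch illustrates a PCA-based simulation of the kind described: project a selected feature set to latent features, sample and perturb in latent space, and reconstruct simulated feature sets for display. The data, component count, and noise scale are invented for illustration and do not come from Moore et al.

```python
# Illustrative sketch only (hypothetical data and parameters), loosely
# mirroring the PCA-based simulation Moore et al describe: project the
# selected feature set to latent features, perturb samples in latent
# space, and reconstruct simulated feature sets for presentation.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 12))        # stand-in "selected feature set"

pca = PCA(n_components=4).fit(features)      # determine latent features
latent = pca.transform(features)             # project to latent space
perturbed = latent + 0.1 * rng.normal(size=latent.shape)  # sample and perturb
simulations = pca.inverse_transform(perturbed)  # simulated feature sets

# A GUI layer would then display the distribution of simulated values,
# e.g., per-feature percentiles, for the user's decision.
print(np.percentile(simulations, [5, 50, 95], axis=0).shape)  # (3, 12)
```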
As to claim 8, Woo et al (figures 2 – 5) shows the medium, wherein the instructions, when executed, further cause the machine to:
learn the generated generative mapping data to generate the explanation data (paragraph [0033]… A contrastive loss in the time domain (e.g., time domain contrastive loss 208) may be used to learn discriminative trend representations); and
identify a plurality of second factors within the obtained data, wherein the identified plurality of second factors have lesser impact on the output of the artificial intelligence model when compared to the identified plurality of first factors (paragraph [0020]… observed time series data include a seasonal component and a trend component. Specifically, as shown in the example of FIG. 1A, the observed time series 102 includes a seasonal component 104 (e.g., generated by a seasonal module) and a trend component 106 (e.g., generated by a nonlinear trend module). The seasonal component 104 has seasonality representing a repeating short-term cycle in the series. The trend component 106 has a trend indicating the increasing or decreasing value in the series; the seasonal feature disentangler may extract the seasonal representations (e.g., via a learnable Fourier layer), and may be learned by a frequency domain contrastive loss 210 using contrastive learning) (Examiner’s Note: “seasonal feature disentangler” reads on “identify”; “seasonal representations” reads on “second factors”; Figure 1A shows that over longer periods of time the second factor (seasonal representation) has a lesser impact than the first factor (trend representation)).
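For background on the seasonal/trend split relied on in the FIG. 1A discussion, the sketch below shows a classical moving-average decomposition of a series into a trend component and a repeating seasonal component. This is only an analogy: Woo et al learn these components with disentanglers and contrastive losses rather than a moving average, and the period, series, and noise level here are invented.

```python
# Hypothetical illustration of splitting a series into trend and seasonal
# components (classical moving average, not Woo et al's learned
# disentanglers): the trend carries the long-run increase/decrease, while
# the seasonal part is a repeating short-term cycle, as in FIG. 1A.
import numpy as np

def decompose(x: np.ndarray, period: int):
    # Trend: centered moving average over one seasonal period.
    trend = np.convolve(x, np.ones(period) / period, mode="same")
    detrended = x - trend
    # Seasonality: average the detrended values at each phase of the cycle,
    # then tile that one-cycle pattern back out to the series length.
    cycle = np.array([detrended[i::period].mean() for i in range(period)])
    return trend, np.resize(cycle, len(x))

t = np.arange(240)
series = (0.05 * t                              # trend
          + np.sin(2 * np.pi * t / 24)          # daily-style seasonality
          + 0.1 * np.random.default_rng(1).normal(size=240))  # noise
trend, seasonal = decompose(series, period=24)
```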
As to claim 9, Woo et al (figures 2 – 5) shows the medium, wherein the instructions, when executed, further cause the machine to:
define a causal model representing a relationship between the identified plurality of first factors, the plurality of second factors, and the output of the artificial intelligence model (paragraph [0041]… after the representation learning model is trained at block 402, the method 400 may proceed to block 416, where learned feature representations including disentangled trend feature representations and seasonal feature representations are generated using the trained representation learning model. The method 400 may proceed to block 418, where a forecasting task is performed based on the learned feature representations) (Examiner’s Note: “trained representation learning model” reads on “causal model”);
define a quantifying metric to quantify the causal influence of the identified plurality of first factors on the output of the artificial intelligence model; and define a learning framework.
(paragraph [0027]… Referring to FIG. 2, illustrated is a simplified diagram illustrating a system 200 for learning disentangled seasonal-trend representations for time series forecasting, according to embodiments described herein. The system 200, also referred to as a representation learning model 200, learns representations, which include disentangled representations for seasonal and trend components for each time step, e.g., denoted as V = [V^(T); V^(S)], where the disentangled representations V include trend feature representations V^(T) (also referred to as trend representations) and seasonal feature representations V^(S) (also referred to as seasonal representations)) (Examiner’s Note: “V^(T)” reads on “quantifying metric”; “learns representations” reads on “define a learning framework”).
As to claim 10, Woo et al (figures 2 – 5) shows the medium, wherein the instructions, when executed, further cause the machine to:
describe a functional causal structure of the dataset; and
derive an explanation from an indirect causal link from the identified plurality of first factors and the output of the artificial intelligence model.
(paragraph [0023]… data may arise from the rich interaction of multiple sources. A goal of the representation is to disentangle the various explanatory sources, making it robust to complex and richly structured variations. Not doing so may otherwise lead to capturing spurious features that do not transfer well under non-independent and identically distributed (i.i.d.) data distribution settings. To achieve this goal, structural priors for time series are introduced. As illustrated in the causal graph 150 in FIG. 1B, it is assumed that the observed time series data X 152 is generated from the error variable E 154 and the error-free latent variable X* 156. X* 156, in turn, is generated from the trend variable T 160 and seasonal variable S 158. As E 154 is not predictable, the optimal prediction can be achieved by uncovering X* 156, which only depends on T 160 and S 158, and does not depend on E 154) (Examiner’s Note: “structural priors” reads on “functional causal structure of the dataset” and “error-free latent variable X*” reads on “indirect causal link”).
As to claim 11, Woo et al (figures 2 – 5) shows the medium, wherein the instructions, when executed, further cause the machine to:
define the quantifying metric considering a factor to capture functional dependencies and quantify indirect causal relationship between the identified plurality of first factors and the output of the artificial intelligence model (paragraph [0031]… Extracting the underlying trend is crucial for modeling time series. Auto-regressive filtering may be used to capture time-lagged causal relationships from past observations. One challenge is to select the appropriate look-back window: a smaller window leads to under-fitting, while a larger model leads to over-fitting and over-parameterization issues. In some examples, this hyper-parameter is optimized by grid search on the training or validation loss, but such an approach is too computationally expensive. In examples like those illustrated in FIG. 3A, a mixture of auto-regressive experts may be used to adaptively select the appropriate look-back window. As illustrated in FIG. 3A, in some embodiments, the trend feature disentangler 300 may include a mixture of L+1 autoregressive experts. In an example, L = ⌊log2(h/2)⌋. Each expert 306-i may be implemented as a 1d causal convolution with d input channels and d_T output channels, where the kernel size of the i-th expert is 2^i. Each expert outputs a matrix Ṽ^(T,i) = CausalConv(Ṽ, 2^i). An average-pooling operation may be performed over the outputs to obtain the final trend representations, by average pool unit 308, equation [00003]) (Examiner’s Note: “average pool unit” reads on “indirect causal relationship”).
As to claim 12, Woo et al (figures 2 – 5) shows the medium, wherein the identified plurality of second factors does not affect the output of the artificial intelligence model (paragraph [0025]… the seasonal and trend modules do not influence or inform each other. Therefore, even if one mechanism changes due to a distribution shift, the other remains unchanged. Accordingly, disentangling seasonality and trend leads to better transfer, or generalization, in nonstationary environments. Furthermore, independent seasonal and trend mechanisms can be learned independently and be flexibly re-used and re-purposed).
Claim 1 has similar limitations as claim 7. Therefore, the claim is rejected for the same reasons as above.
Claim 2 has similar limitations as claim 8. Therefore, the claim is rejected for the same reasons as above.
Claim 3 has similar limitations as claim 9. Therefore, the claim is rejected for the same reasons as above.
As to claim 4, Woo et al (figures 2 – 5) shows the method including obtaining, by a causal explanation computing apparatus, the dataset as an input for the artificial intelligence model, wherein the obtained dataset is filtered to the disentangled low-dimensional representation (paragraph [0028]… In the system 200, a backbone encoder 202 maps observations to a latent space, e.g., projecting m-dimensional raw signals into a d-dimensional latent space for each timestep; paragraph [0045]… a trained season-trend representative learning module 530 may receive input that includes a time series data 540 via the data interface 515 and generate representations of time series data 550 as output) (Examiner’s Note: “data interface” reads on “obtaining”; “learning module” reads on “artificial intelligence model”; “encoder” reads on “filtered”; by definition, a latent space is much smaller (lower-dimensional) than the raw data, acting as a compressed, abstract representation that captures essential features while discarding noise).
Woo et al fails to explicitly show or teach: providing, by the causal explanation computing apparatus, the generated explanation data via a graphical user interface; wherein the identifying, determining and generating are each by the causal explanation computing apparatus.
However, Moore et al teaches providing, by the causal explanation computing apparatus, the generated explanation data via a graphical user interface; wherein the identifying, determining and generating are each by the causal explanation computing apparatus (column 2, lines 35 – 55…the simulation output may include a distribution of possible values for each feature of the selected feature set that results in the target value being within a particular range (e.g., a tolerance) of the target value. Alternatively, the selected feature set may undergo principal component analysis (PCA) to determine a set of latent features, and the set of latent features may be sampled and perturbed to generate the output simulations. The simulation results may then be presented, e.g., via a graphical user interface, to a user to enable the user to make a decision based on the simulation results).
Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, for Woo et al to provide, by the causal explanation computing apparatus, the generated explanation data via a graphical user interface, wherein the identifying, determining and generating are each by the causal explanation computing apparatus, as in Moore et al, for the purpose of presenting simulation results to a user to enable the user to make a decision based on the simulation results.
Claim 5 has similar limitations as claim 11. Therefore, the claim is rejected for the same reasons as above.
Claim 6 has similar limitations as claim 12. Therefore, the claim is rejected for the same reasons as above.
Claim 13 has similar limitations as claim 7. Therefore, the claim is rejected for the same reasons as above.
Claim 14 has similar limitations as claim 8. Therefore, the claim is rejected for the same reasons as above.
Claim 15 has similar limitations as claim 9. Therefore, the claim is rejected for the same reasons as above.
Claim 16 has similar limitations as claim 10. Therefore, the claim is rejected for the same reasons as above.
Claim 17 has similar limitations as claim 11. Therefore, the claim is rejected for the same reasons as above.
Claim 18 has similar limitations as claim 12. Therefore, the claim is rejected for the same reasons as above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON S COLE whose telephone number is (571)270-5075. The examiner can normally be reached Mon - Fri, 7:30am - 5pm EST (alternate Fridays off).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRANDON S COLE/ Primary Examiner, Art Unit 2128