DETAILED ACTION
This action is in response to the application filed 04/27/2023. Claims 1-19 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement filed 05/19/2023 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein has not been considered.
Claim Objections
Claims 1, 8, and 14 are objected to because of the following informalities:
Regarding claim 1, in line 17, “propagate the initial point of latent dynamics of the device forward in time till a time index of interest” should read “propagate the initial point of latent dynamics of the device forward in time until a time index of interest”.
Regarding claim 8, in line 11, “propagating the initial point of latent dynamics of the device forward in time till a time index of interest” should read “propagating the initial point of latent dynamics of the device forward in time until a time index of interest”.
Regarding claim 14, in line 12, “propagate the initial point of latent dynamics of the device forward in time till a time index of interest” should read “propagate the initial point of latent dynamics of the device forward in time until a time index of interest”.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“An encoder” in claim 1 claiming “an encoder configured to encode each input data point of the time series input data…”
“A latent subnetwork” in claim 1 claiming “a latent subnetwork configured to propagate the initial point of latent dynamics of the device forward in time…”
“An extended decoder” in claim 1 claiming “an extended decoder configured to decode the state of latent dynamics of the device…”
“A decoder” in claim 2 claiming “a decoder configured to decode the state of latent dynamics of the device…”
“the extended decoder” in claim 3 claiming “the extended decoder is further configured to interpolate the trajectory of the device…”
“the extended decoder” in claim 4 claiming “the extended decoder is further configured to extrapolate the trajectory of the device…”
“each extended decoder” in claim 5 claiming “each extended decoder of the plurality of extended decoders is configured to decode the state of latent dynamics into a different state space…”
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, claim limitation “an encoder configured to encode each input data point of the time series input data…” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. An encoder configured to encode each input data point is a specialized computer function that would require an algorithm to be disclosed, in addition to the physical structure that would perform the algorithm. While Figures 1A-1D display an encoder, there is no physical structure shown in any of them. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Regarding claim 1, claim limitation “a latent subnetwork configured to propagate the initial point of latent dynamics of the device forward in time…” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. A latent subnetwork configured to propagate the initial point of latent dynamics forward in time is a specialized computer function that would require an algorithm to be disclosed, in addition to the physical structure that would perform the algorithm. While Figures 1C-1D display a latent subnetwork, there is no physical structure shown in any of them. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Regarding claim 1, claim limitation “an extended decoder configured to decode the state of latent dynamics of the device…” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. An extended decoder configured to decode the state of latent dynamics is a specialized computer function that would require an algorithm to be disclosed, in addition to the physical structure that would perform the algorithm. While Figures 1B-1D display an extended decoder, there is no physical structure shown in any of them. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Regarding claims 2-7, these claims are rejected for at least the same reasons as claim 1 because they depend from claim 1.
Regarding claim 2, claim limitation “a decoder configured to decode the state of latent dynamics of the device…” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. A decoder configured to decode the state of latent dynamics is a specialized computer function that would require an algorithm to be disclosed, in addition to the physical structure that would perform the algorithm. While Figures 1A-1C display a decoder, there is no physical structure shown in any of them. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Regarding claim 3, claim limitation “the extended decoder is further configured to interpolate the trajectory of the device…” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The extended decoder further configured to interpolate the trajectory of the device is a specialized computer function that would require an algorithm to be disclosed, in addition to the physical structure that would perform the algorithm. While Figures 1B-1D display an extended decoder, there is no physical structure shown in any of them. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Regarding claim 4, claim limitation “the extended decoder is further configured to extrapolate the trajectory of the device…” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The extended decoder further configured to extrapolate the trajectory of the device is a specialized computer function that would require an algorithm to be disclosed, in addition to the physical structure that would perform the algorithm. While Figures 1B-1D display an extended decoder, there is no physical structure shown in any of them. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim 5 recites the limitation "wherein the autoencoder includes a plurality of extended decoders, and wherein each extended decoder of the plurality of extended decoders is configured to decode the state of latent dynamics…" in lines 2-3. There is insufficient antecedent basis for this limitation in the claim, and it is unclear whether these extended decoders are the same as the extended decoder recited in claim 1. For purposes of examination, the Examiner has interpreted the plurality of extended decoders to be a group containing multiple extended decoders as recited in claim 1.
Regarding claim 5, claim limitation “each extended decoder of the plurality of extended decoders is configured to decode the state of latent dynamics into a different state space…” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Each extended decoder configured to decode the state of latent dynamics into a different state space is a specialized computer function that would require an algorithm to be disclosed, in addition to the physical structure that would perform the algorithm. While Figures 1B-1D display an extended decoder, there is no physical structure shown in any of them. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-7 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claim 1, “an encoder configured to encode each input data point of the time series input data…” as described above does not provide adequate structure to perform the claimed function (see 112(b) rejection above). Therefore, the specification does not appear to provide sufficient detail such that one of ordinary skill can reasonably conclude that the inventor had possession of the claimed invention.
Regarding claim 1, “a latent subnetwork configured to propagate the initial point of latent dynamics of the device forward in time…” as described above does not provide adequate structure to perform the claimed function (see 112(b) rejection above). Therefore, the specification does not appear to provide sufficient detail such that one of ordinary skill can reasonably conclude that the inventor had possession of the claimed invention.
Regarding claim 1, “an extended decoder configured to decode the state of latent dynamics of the device…” as described above does not provide adequate structure to perform the claimed function (see 112(b) rejection above). Therefore, the specification does not appear to provide sufficient detail such that one of ordinary skill can reasonably conclude that the inventor had possession of the claimed invention.
Regarding claims 2-7, these claims are rejected for at least the same reasons as claim 1 because they depend from claim 1.
Regarding claim 2, “a decoder configured to decode the state of latent dynamics of the device…” as described above does not provide adequate structure to perform the claimed function (see 112(b) rejection above). Therefore, the specification does not appear to provide sufficient detail such that one of ordinary skill can reasonably conclude that the inventor had possession of the claimed invention.
Regarding claim 3, “the extended decoder is further configured to interpolate the trajectory of the device…” as described above does not provide adequate structure to perform the claimed function (see 112(b) rejection above). Therefore, the specification does not appear to provide sufficient detail such that one of ordinary skill can reasonably conclude that the inventor had possession of the claimed invention.
Regarding claim 4, “the extended decoder is further configured to extrapolate the trajectory of the device…” as described above does not provide adequate structure to perform the claimed function (see 112(b) rejection above). Therefore, the specification does not appear to provide sufficient detail such that one of ordinary skill can reasonably conclude that the inventor had possession of the claimed invention.
Regarding claim 5, “each extended decoder of the plurality of extended decoders is configured to decode the state of latent dynamics into a different state space…” as described above does not provide adequate structure to perform the claimed function (see 112(b) rejection above). Therefore, the specification does not appear to provide sufficient detail such that one of ordinary skill can reasonably conclude that the inventor had possession of the claimed invention.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1:
Subject Matter Eligibility Analysis Step 1:
Claim 1 recites an artificial intelligence (AI) system and is thus a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 1 recites:
encode each input data point of the time series input data from the input state space into a latent space to produce latent data points indexed in time according to time indices of corresponding input data points (This limitation is a mental process based on mathematical concepts as it encompasses a human mentally encoding data.)
and propagate the latent data points backward in time with a neural Ordinary Differential Equation (ODE) approximating dynamics of the device in the latent space to estimate an initial point of latent dynamics of the device in the latent space; (This limitation is a mental process based on mathematical concepts as it encompasses a human mentally propagating data points backward in time with an equation.)
propagate the initial point of latent dynamics of the device forward in time till a time index of interest using the neural ODE to produce a state of latent dynamics of the device at the time index of interest; (This limitation is a mental process based on mathematical concepts as it encompasses a human mentally propagating a data point forward in time with an equation.)
decode the state of latent dynamics of the device into the output state space different from the input state space to produce output data including the state of the device at the time index of interest (This limitation is a mental process based on mathematical concepts as it encompasses a human mentally decoding the state of latent dynamics of the device.)
Therefore, claim 1 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 1 further recites additional elements of
An Artificial Intelligence (AI) system (This element does not integrate the abstract idea into a practical application because it recites generic computing components on which to perform the abstract idea (see MPEP 2106.05(f)).)
for sensing a state of a device with continuous-time dynamics, (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
the AI system including a neural network having an autoencoder architecture adapted for dynamic transformation of time series input data from an input state space indicative of the state of the device into an output state space indicative of the state of the device, comprising: (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
at least one processor; and a memory having instructions stored thereon that cause the at least one processor to execute the neural network, train the neural network, or both, the autoencoder architecture comprising: (This element does not integrate the abstract idea into a practical application because it recites generic computing components on which to perform the abstract idea (see MPEP 2106.05(f)).)
an encoder configured to encode (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
a latent subnetwork configured to propagate (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
an extended decoder configured to decode (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
Therefore, claim 1 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 1 do not provide significantly more than the abstract idea itself, taken alone and in combination because
An Artificial Intelligence (AI) system uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
for sensing a state of a device with continuous-time dynamics specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
the AI system including a neural network having an autoencoder architecture adapted for dynamic transformation of time series input data from an input state space indicative of the state of the device into an output state space indicative of the state of the device, comprising is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
at least one processor; and a memory having instructions stored thereon that cause the at least one processor to execute the neural network, train the neural network, or both, the autoencoder architecture comprising uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
an encoder configured to encode uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
a latent subnetwork configured to propagate uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
an extended decoder configured to decode uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 1 is subject-matter ineligible.
Regarding Claim 2:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 2 recites
decode the state of latent dynamics of the device into the output state space same as the input state space to reconstruct the time series input data. (This limitation is a mental process as it encompasses a human mentally decoding the state of latent dynamics.)
Therefore, claim 2 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 2 further recites additional elements of
wherein the autoencoder architecture further comprising a decoder configured to decode (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
Therefore, claim 2 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 2 do not provide significantly more than the abstract idea itself, taken alone and in combination because
wherein the autoencoder architecture further comprising a decoder configured to decode uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 2 is subject-matter ineligible.
Regarding Claim 3:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 3 recites
interpolate the trajectory of the device based on the state of latent dynamics of the device (This limitation is a mental process based on mathematical concepts as it encompasses a human mentally interpolating the trajectory.)
Therefore, claim 3 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 3 further recites additional elements of
wherein the state of the device corresponds to a trajectory of the device, (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
and wherein the extended decoder is further configured to interpolate (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
Therefore, claim 3 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 3 do not provide significantly more than the abstract idea itself, taken alone and in combination because
wherein the state of the device corresponds to a trajectory of the device, specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
and wherein the extended decoder is further configured to interpolate uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 3 is subject-matter ineligible.
Regarding Claim 4:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 4 recites
extrapolate the trajectory of the device based on the state of latent dynamics of the device (This limitation is a mental process as it encompasses a human mentally extrapolating the trajectory.)
Therefore, claim 4 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 4 further recites additional elements of
wherein the extended decoder is further configured to extrapolate (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
Therefore, claim 4 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 4 do not provide significantly more than the abstract idea itself, taken alone and in combination because
wherein the extended decoder is further configured to extrapolate uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 4 is subject-matter ineligible.
Regarding Claim 5:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 5 recites
decode the state of latent dynamics into a different state space different from the input state space. (This limitation is a mental process as it encompasses a human mentally decoding the state of latent dynamics.)
Therefore, claim 5 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 5 further recites additional elements of
wherein the autoencoder architecture includes a plurality of extended decoders, and wherein each extended decoder of the plurality of extended decoders is configured to decode (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
Therefore, claim 5 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 5 do not provide significantly more than the abstract idea itself, taken alone and in combination because
wherein the autoencoder architecture includes a plurality of extended decoders, and wherein each extended decoder of the plurality of extended decoders is configured to decode uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 5 is subject-matter ineligible.
Regarding Claim 6:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 6 recites the same abstract ideas as claim 1. Therefore, claim 6 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 6 further recites additional elements of
wherein the device is a mobile robot including a Wi-Fi receiver, wherein the input state space is a signal space parameterized on Wi-Fi measurements of the Wi-Fi receiver, and wherein the output state space is a location space parametrized on coordinates of the mobile robot. (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
Therefore, claim 6 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 6 do not provide significantly more than the abstract idea itself, taken alone and in combination because
wherein the device is a mobile robot including a Wi-Fi receiver, wherein the input state space is a signal space parameterized on Wi-Fi measurements of the Wi-Fi receiver, and wherein the output state space is a location space parametrized on coordinates of the mobile robot specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
Therefore, claim 6 is subject-matter ineligible.
Regarding Claim 7:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 7 recites the same abstract ideas as claim 1. Therefore, claim 7 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 7 further recites additional elements of
wherein the device is a vehicle, wherein the input state space is a signal space parametrized on acceleration measurements of the vehicle, and wherein the output state space is a location space parametrized on coordinates of the vehicle. (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
Therefore, claim 7 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 7 do not provide significantly more than the abstract idea itself, taken alone and in combination because
wherein the device is a vehicle, wherein the input state space is a signal space parametrized on acceleration measurements of the vehicle, and wherein the output state space is a location space parametrized on coordinates of the vehicle specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
Therefore, claim 7 is subject-matter ineligible.
Regarding Claim 8:
Subject Matter Eligibility Analysis Step 1:
Claim 8 recites a method and is thus a process, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 8 recites
encoding each input data point of time series input data from an input state space into a latent space to produce latent data points indexed in time according to time indices of corresponding input data points, (This limitation is a mental process based on mathematical concepts as it encompasses a human mentally encoding data.)
propagating the latent data points backward in time with a neural Ordinary Differential Equation (ODE) approximating dynamics of the device in the latent space to estimate an initial point of latent dynamics of the device in the latent space; (This limitation is a mental process based on mathematical concepts as it encompasses a human mentally propagating data points backward in time with an equation.)
propagating the initial point of latent dynamics of the device forward in time till a time index of interest using the neural ODE to produce a state of latent dynamics of the device at the time index of interest; (This limitation is a mental process based on mathematical concepts as it encompasses a human mentally propagating a data point forward in time with an equation.)
decoding the state of latent dynamics of the device into the output state space different from the input state space to produce output data including the state of the device at the time index of interest (This limitation is a mental process based on mathematical concepts as it encompasses a human mentally decoding the state of latent dynamics of the device.)
Therefore, claim 8 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 8 further recites additional elements of
for sensing a state of a device with continuous-time dynamics, (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
Therefore, claim 8 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 8 do not provide significantly more than the abstract idea itself, taken alone and in combination because
for sensing a state of a device with continuous-time dynamics, specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
Therefore, claim 8 is subject-matter ineligible.
Regarding claim 9, claim 9 recites substantially similar limitations to claim 2, and is therefore rejected under the same analysis.
Regarding claim 10, claim 10 recites substantially similar limitations to claim 3, and is therefore rejected under the same analysis.
Regarding claim 11, claim 11 recites substantially similar limitations to claim 4, and is therefore rejected under the same analysis.
Regarding claim 12, claim 12 recites substantially similar limitations to claim 6, and is therefore rejected under the same analysis.
Regarding claim 13, claim 13 recites substantially similar limitations to claim 7, and is therefore rejected under the same analysis.
Regarding Claim 14:
Subject Matter Eligibility Analysis Step 1:
Claim 14 recites a non-transitory computer readable storage medium and is thus an article of manufacture, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 14 recites
encoding each input data point of time series input data from an input state space into a latent space to produce latent data points indexed in time according to time indices of corresponding input data points, (This limitation is a mental process based on mathematical concepts as it encompasses a human mentally encoding data.)
propagating the latent data points backward in time with a neural Ordinary Differential Equation (ODE) approximating dynamics of the device in the latent space to estimate an initial point of latent dynamics of the device in the latent space; (This limitation is a mental process based on mathematical concepts as it encompasses a human mentally propagating data points backward in time with an equation.)
propagating the initial point of latent dynamics of the device forward in time till a time index of interest using the neural ODE to produce a state of latent dynamics of the device at the time index of interest; (This limitation is a mental process based on mathematical concepts as it encompasses a human mentally propagating a data point forward in time with an equation.)
decoding the state of latent dynamics of the device into the output state space different from the input state space to produce output data including the state of the device at the time index of interest (This limitation is a mental process based on mathematical concepts as it encompasses a human mentally decoding the state of latent dynamics of the device.)
Therefore, claim 14 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 14 further recites additional elements of
A non-transitory computer readable storage medium embodied thereon a program executable by a processor for performing a method (This element does not integrate the abstract idea into a practical application because it recites generic computing components on which to perform the abstract idea (see MPEP 2106.05(f)).)
for sensing a state of a device with continuous-time dynamics, (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
Therefore, claim 14 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 14 do not provide significantly more than the abstract idea itself, taken alone and in combination because
A non-transitory computer readable storage medium embodied thereon a program executable by a processor for performing a method uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
for sensing a state of a device with continuous-time dynamics, specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
Therefore, claim 14 is subject-matter ineligible.
Regarding claim 15, claim 15 recites substantially similar limitations to claim 2, and is therefore rejected under the same analysis.
Regarding claim 16, claim 16 recites substantially similar limitations to claim 3, and is therefore rejected under the same analysis.
Regarding claim 17, claim 17 recites substantially similar limitations to claim 4, and is therefore rejected under the same analysis.
Regarding claim 18, claim 18 recites substantially similar limitations to claim 6, and is therefore rejected under the same analysis.
Regarding claim 19, claim 19 recites substantially similar limitations to claim 7, and is therefore rejected under the same analysis.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 8-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Garsdal et al. (“Generative time series models using Neural ODE in Variational Autoencoders”) (hereafter referred to as Garsdal).
Regarding claim 8, Garsdal teaches
A method for sensing a state of a device with continuous-time dynamics, comprising (Garsdal, page 2, 2nd column, last paragraph, “We have trained the NODE model on three different data sets to test the robustness and performance of the method across multiple data sets and use cases” where “The solar power data set comes from real life solar power data where the power output of a solar cell is measured at a 30 minute interval throughout the day” (Garsdal, page 3, 1st column, last paragraph). Examiner notes that the solar cell is the device.):
encoding each input data point of time series input data from an input state space into a latent space to produce latent data points indexed in time according to time indices of corresponding input data points (Garsdal, page 3, 1st paragraph of Section C. Models, “The VAE NODE models follow the structure seen in [1] and Figure 2 where time series data is encoded into a latent space using a time variant neural net such as a Recurrent Neural Network (RNN), or in our case a Long-Short Term Memory (LSTM) network. The latent state is represented from zt0 to ensure that decoding happens from the beginning of the time series” and Garsdal, page 2, Figure 2,
[Garsdal, Figure 2, reproduced]
Examiner notes that the RNN encoder takes the observed time series and encodes it into the latent space according to the time indices of the data points.),
propagating the latent data points backward in time with a neural Ordinary Differential Equation (ODE) approximating dynamics of the device in the latent space to estimate an initial point of latent dynamics of the device in the latent space (Garsdal, page 3, 1st paragraph of Section C. Models, “The VAE NODE models follow the structure seen in [1] and Figure 2 where time series data is encoded into a latent space using a time variant neural net such as a Recurrent Neural Network (RNN), or in our case a Long-Short Term Memory (LSTM) network. The latent state is represented from zt0 to ensure that decoding happens from the beginning of the time series. In order to enable this, we encode the time series in reverse thus ending up in zt0” where “This augmented ODE is solved backwards in time, starting from the final time step t1 and going to the initial time step t0” (Garsdal, page 2, 1st column, 1st paragraph) and Garsdal, page 2, Figure 2,
[Garsdal, Figure 2, reproduced]
Examiner notes that solving the ODE backwards in time is propagating the latent data points backward in time. Examiner further notes the boxed section in Figure 2 is the neural Ordinary Differential Equation approximating dynamics of the device in the latent space to estimate an initial point of latent dynamics of the device in the latent space. Examiner notes that the estimated initial point of latent dynamics is the circle zt0 that points down to t0.);
propagating the initial point of latent dynamics of the device forward in time till a time index of interest using the neural ODE to produce a state of latent dynamics of the device at the time index of interest (Garsdal, page 2, Figure 2,
[Garsdal, Figure 2, reproduced]
Examiner notes that the ODE Solve is the latent subnetwork that propagates the initial point zt0 forward in time until the time index of interest, tM. Examiner further notes that the state of latent dynamics of the device at the time index of interest is ztM.);
and decoding the state of latent dynamics of the device into an output state space different from the input state space to produce output data including the state of the device at the time index of interest (Garsdal, page 2, Figure 2,
[Garsdal, Figure 2, reproduced]
and “Taking the NODE one step further, it can be utilized in continuous time extra- and interpolation for sequential data, by implementing the NODE as a decoder of a Variational Auto Encoder” (Garsdal, page 1, Introduction). Examiner notes that the boxed area in Figure 2 is the extended decoder and the output state space different from the input state space is the Extrapolation of tn+1 and tM. Examiner further notes that the states of the device at tn+1 and tM are the dots on the curve within the data space and the time index of interest is tM.).
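For illustration only, the sequence of steps recited in claim 8 and mapped above (encode each time-indexed point, solve the latent ODE backward to an initial state, solve it forward to a time index of interest, decode into a different output space) may be sketched as follows. Every function here (encode, f, ode_solve, decode) is a hypothetical toy stand-in, not Garsdal's trained networks, and averaging the backward-propagated results stands in for the encoder's estimation of the initial latent state.

```python
# Toy sketch of the recited pipeline; all maps below are hypothetical
# stand-ins for trained networks.

def encode(x):
    # Encoder: map a scalar observation into a 2-D latent point.
    return [0.5 * x, -0.25 * x]

def f(z):
    # Neural ODE dynamics dz/dt = f(z); a fixed linear vector field here.
    return [-0.1 * z[0] + z[1], -z[0] - 0.1 * z[1]]

def ode_solve(z, t_start, t_end, steps=100):
    # Fixed-step Euler integration; t_end < t_start integrates backward in time.
    h = (t_end - t_start) / steps
    for _ in range(steps):
        dz = f(z)
        z = [z[0] + h * dz[0], z[1] + h * dz[1]]
    return z

def decode(z):
    # Extended decoder: map the latent state into a different output space.
    return (2.0 * z[0] + z[1], z[0] - z[1])

# Time-indexed input data points (time index, observation).
data = [(0.0, 1.0), (0.5, 0.8), (1.0, 0.3)]

# Encode each point, propagate each latent point backward to t0, and average
# the results to estimate the initial point of the latent dynamics.
latent = [(t, encode(x)) for t, x in data]
t0 = latent[0][0]
estimates = [ode_solve(z, t, t0) for t, z in latent]
z0 = [sum(e[0] for e in estimates) / len(estimates),
      sum(e[1] for e in estimates) / len(estimates)]

# Propagate the initial point forward to the time index of interest and decode.
t_interest = 2.0
z_t = ode_solve(z0, t0, t_interest)
print(decode(z_t))
```

In a trained system the linear maps would be learned networks and the Euler loop an adaptive ODE solver, but the control flow (encode, solve backward, solve forward, decode) is the same.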
Regarding claim 9, Garsdal teaches
The method of claim 8, further comprising decoding the state of latent dynamics of the device into the output state space same as the input state space to reconstruct the time series input data (Garsdal, page 2, Figure 2,
[Garsdal, Figure 2, reproduced]
and “Taking the NODE one step further, it can be utilized in continuous time extra- and interpolation for sequential data, by implementing the NODE as a decoder of a Variational Auto Encoder” (Garsdal, page 1, Introduction). Examiner notes that the decoder is the boxed area in Figure 2. Examiner further notes that the output state space same as the input state space is the dots on the curve corresponding to t0, t1, and tN. Additionally, Examiner notes that the Prediction is the reconstruction of the time series input data.).
Regarding claim 10, Garsdal teaches
The method of claim 8, wherein the state of the device corresponds to trajectory of the device (Garsdal, page 1, 1st column, 1st paragraph, “To solve the derivatives given in Equation (2) the NODE uses a black box ordinary differential equation solver, which takes as input an initial hidden state h(0) to solve an initial value problem up to some given time T. This way the ODE solver yields a representation of a continuous hidden state trajectory, instead of a discrete amount of hidden states. This also means that any specific hidden state along the hidden trajectory can be evaluated, even with uneven step sizes, which is one of the advantages with this approach”),
and wherein the method further comprises interpolating the trajectory of the device based on the state of latent dynamics of the device (Garsdal, page 2, Section C. Using Neural ODE’s as VAE, “NODE’s are intrinsically well suited for temporal data and have many advantages due to the continuous aspect of the underlying ODE. It can be used as a generative model utilizing learned representations of a latent space. In this setting it can be used to predict or extrapolate data, as well as interpolate and impute missing data within a time series” where “Our work revolves around the NODE framework as a VAE, where it can learn the latent space distribution of a time series. In this setting the latent space corresponds to the initial state of the latent trajectory z0, which ODEsolver can take as input, in order to compute the entire latent trajectory z(t). The hidden trajectory is then evaluated at specific time steps, and these steps are then fed to a decoder that transforms it back into the temporal space” (Garsdal, page 2, 2nd column, 2nd paragraph).).
Regarding claim 11, Garsdal teaches
The method of claim 10, wherein the method further comprises extrapolating the trajectory of the device based on the state of latent dynamics of the device (Garsdal, page 2, Section C. Using Neural ODE’s as VAE, “NODE’s are intrinsically well suited for temporal data and have many advantages due to the continuous aspect of the underlying ODE. It can be used as a generative model utilizing learned representations of a latent space. In this setting it can be used to predict or extrapolate data, as well as interpolate and impute missing data within a time series” where “Our work revolves around the NODE framework as a VAE, where it can learn the latent space distribution of a time series. In this setting the latent space corresponds to the initial state of the latent trajectory z0, which ODEsolver can take as input, in order to compute the entire latent trajectory z(t). The hidden trajectory is then evaluated at specific time steps, and these steps are then fed to a decoder that transforms it back into the temporal space” (Garsdal, page 2, 2nd column, 2nd paragraph).).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5, 7, 13-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Garsdal in view of Kum et al. (US 2021/0316762 A1) (hereafter referred to as Kum).
Regarding claim 1, Garsdal teaches
An Artificial Intelligence (AI) system for sensing a state of a device with continuous-time dynamics (Garsdal, page 2, 2nd column, last paragraph, “We have trained the NODE model on three different data sets to test the robustness and performance of the method across multiple data sets and use cases” where “The solar power data set comes from real life solar power data where the power output of a solar cell is measured at a 30 minute interval throughout the day” (Garsdal, page 3, 1st column, last paragraph). Examiner notes that the solar cell is the device.),
the AI system including a neural network having an autoencoder architecture adapted for dynamic transformation of time series input data from an input state space indicative of the state of the device into an output state space indicative of the state of the device, comprising (Garsdal, page 3, 1st paragraph of Section C. Models, “The VAE [variational autoencoder] NODE [neural ordinary differential equation] models follow the structure seen in [1] and Figure 2 where time series data is encoded into a latent space using a time variant neural net such as a Recurrent Neural Network (RNN), or in our case a Long-Short Term Memory (LSTM) network. The latent state is represented from zt0 to ensure that decoding happens from the beginning of the time series” where “In this setting the latent space corresponds to the initial state of the latent trajectory z0, which an ODEsolver can take as input, in order to compute the entire latent trajectory z(t). The hidden trajectory is then evaluated at specific time steps, and these steps are then fed to a decoder that transforms it back into the temporal space” (Garsdal, page 2, 2nd column, 2nd paragraph) and “in general, the NODE VAE was able to capture the dynamics of the underlying data”. Examiner notes that the initial state is the input data from an input state space indicative of the state of the device. Examiner further notes that the transformed data from the decoder is the output state space indicative of the state of the device):
an encoder configured to encode each input data point of the time series input data from the input state space into a latent space to produce latent data points indexed in time according to time indices of corresponding input data points (Garsdal, page 3, 1st paragraph of Section C. Models, “The VAE NODE models follow the structure seen in [1] and Figure 2 where time series data is encoded into a latent space using a time variant neural net such as a Recurrent Neural Network (RNN), or in our case a Long-Short Term Memory (LSTM) network. The latent state is represented from zt0 to ensure that decoding happens from the beginning of the time series” and Garsdal, page 2, Figure 2,
[Garsdal, Figure 2 (media_image1.png), greyscale]
Examiner notes that the RNN encoder takes the observed time and encodes it into the latent space according to the indices of the data points.)
and propagate the latent data points backward in time with a neural Ordinary Differential Equation (ODE) approximating dynamics of the device in the latent space to estimate an initial point of latent dynamics of the device in the latent space (Garsdal, page 3, 1st paragraph of Section C. Models, “The VAE NODE models follow the structure seen in [1] and Figure 2 where time series data is encoded into a latent space using a time variant neural net such as a Recurrent Neural Network (RNN), or in our case a Long-Short Term Memory (LSTM) network. The latent state is represented from zt0 to ensure that decoding happens from the beginning of the time series. In order to enable this, we encode the time series in reverse thus ending up in zt0” where “This augmented ODE is solved backwards in time, starting from the final time step t1 and going to the initial time step t0” (Garsdal, page 2, 1st column, 1st paragraph) and Garsdal, page 2, Figure 2,
[Garsdal, Figure 2 (media_image2.png), greyscale]
Examiner notes that encoding in reverse is propagating the latent data points backward in time. Examiner further notes that the boxed section in Figure 2 is the neural Ordinary Differential Equation approximating dynamics of the device in the latent space to estimate an initial point of latent dynamics of the device in the latent space. Examiner notes that the estimated initial point of latent dynamics is the circle zt0 that points down to t0.);
a latent subnetwork configured to propagate the initial point of latent dynamics of the device forward in time till a time index of interest using the neural ODE to produce a state of latent dynamics of the device at the time index of interest (Garsdal, page 2, Figure 2,
[Garsdal, Figure 2 (media_image3.png), greyscale]
Examiner notes that the ODE Solve is the latent subnetwork that propagates the initial point zt0 forward in time until the time index of interest, tm. Examiner further notes that the state of latent dynamics of the device at the time index of interest is ztM.);
and an extended decoder configured to decode the state of latent dynamics of the device into the output state space different from the input state space to produce output data including the state of the device at the time index of interest (Garsdal, page 2, Figure 2,
[Garsdal, Figure 2 (media_image2.png), greyscale]
and “Taking the NODE one step further, it can be utilized in continuous time extra- and interpolation for sequential data, by implementing the NODE as a decoder of a Variational Auto Encoder” (Garsdal, page 1, Introduction). Examiner notes that the boxed area in Figure 2 is the extended decoder and the output state space different from the input state space is the Extrapolation of tn+1 and tm. Examiner further notes that the state of the device at tn+1 and tm are the dots on the curve within the data space and the time index of interest is tm.).
Garsdal does not teach, but Kum does teach
at least one processor; and a memory having instructions stored thereon that cause the at least one processor to execute the neural network, train the neural network, or both (Kum, page 14, paragraph 0007, “According to various embodiments, an electronic device includes a memory and a processor connected to the memory and configured to execute at least one instruction stored in the memory. The processor may be configured to detect input data having a first time interval, detect first prediction data having a second time interval based on the input data using a preset recursive network, and detect second prediction data having a third time interval based on the input data and the first prediction data using the recursive network” where “in this case, the recursive network 200 may include a plurality of encoders 410, a plurality of attention modules 420, and a plurality of decoders 430” (Kum, page 16, paragraph 0041) and “the first encoder 610 may include a plurality of recurrent neural networks (RNN)” (Kum, page 16, paragraph 0042). Examiner notes that the recursive network includes the recurrent neural networks.),
Garsdal and Kum are analogous to the claimed invention because they both use encoders and decoders with time series data. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to have implemented the autoencoder of Garsdal on the processor and memory of Kum. Thus, this would be applying a known technique (encoding and decoding time series) to a known device (processor and memory) ready for improvement to yield predictable results (encoded and decoded time series) (MPEP 2143, subsection I.(D), Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
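For illustration of the pipeline mapped above (reverse-order encoding to an initial latent point z0, forward ODE solve to a time index of interest, and decoding into an output space), the following minimal sketch may be helpful. All names, dimensions, and the randomly initialized weights are hypothetical placeholders, not taken from Garsdal or Kum; a fixed-step forward-Euler loop stands in for the black-box ODE solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the cited references).
OBS_DIM, LATENT_DIM = 1, 4

# Randomly initialized weights stand in for trained parameters.
W_enc = rng.normal(size=(LATENT_DIM, OBS_DIM + LATENT_DIM)) * 0.1
W_ode = rng.normal(size=(LATENT_DIM, LATENT_DIM)) * 0.1
W_dec = rng.normal(size=(OBS_DIM, LATENT_DIM)) * 0.1

def encode_reversed(xs):
    """RNN-style encoder run over the series in reverse, ending at z_t0
    (Garsdal: 'we encode the time series in reverse thus ending up in zt0')."""
    h = np.zeros(LATENT_DIM)
    for x in reversed(xs):
        h = np.tanh(W_enc @ np.concatenate([x, h]))
    return h  # estimate of the initial latent point z0

def ode_func(z):
    """Neural ODE right-hand side dz/dt = f(z) approximating latent dynamics."""
    return np.tanh(W_ode @ z)

def ode_solve(z0, t0, t1, steps=100):
    """Fixed-step Euler solver propagating z0 forward in time to t1."""
    z, dt = z0.copy(), (t1 - t0) / steps
    for _ in range(steps):
        z = z + dt * ode_func(z)
    return z

def decode(z):
    """Decoder mapping the latent state back into a data (output) space."""
    return W_dec @ z

# Observations at irregular times; the time index of interest t_m = 2.5
# lies beyond the observed window (extrapolation).
ts = [0.0, 0.3, 0.7, 1.0]
xs = [np.array([np.sin(t)]) for t in ts]

z0 = encode_reversed(xs)               # backward pass -> initial latent point
z_m = ode_solve(z0, t0=ts[0], t1=2.5)  # propagate forward until t_m
x_m = decode(z_m)                      # state of the device at t_m
```

In a trained model the encoder, ODE right-hand side, and decoder would be learned; here untrained random weights merely make the data flow of the claimed limitations concrete.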
Regarding claim 2, Garsdal in view of Kum teaches the AI system of claim 1. Garsdal further teaches
wherein the autoencoder architecture further comprising a decoder configured to decode the state of latent dynamics of the device into the output state space same as the input state space to reconstruct the time series input data (Garsdal, page 2, Figure 2,
[Garsdal, Figure 2 (media_image2.png), greyscale]
and “Taking the NODE one step further, it can be utilized in continuous time extra- and interpolation for sequential data, by implementing the NODE as a decoder of a Variational Auto Encoder” (Garsdal, page 1, Introduction). Examiner notes that the decoder is the boxed area in Figure 2. Examiner further notes that the output state space same as the input state space is the dots on the curve corresponding to t0, t1, and tN. Additionally, Examiner notes that the Prediction is the reconstruction of the time series input data.).
Regarding claim 3, Garsdal in view of Kum teaches the AI system of claim 1. Garsdal further teaches
wherein the state of the device corresponds to a trajectory of the device (Garsdal, page 1, 1st column, 1st paragraph, “To solve the derivatives given in Equation (2) the NODE uses a black box ordinary differential equation solver, which takes as input an initial hidden state h(0) to solve an initial value problem up to some given time T. This way the ODE solver yields a representation of a continuous hidden state trajectory, instead of a discrete amount of hidden states. This also means that any specific hidden state along the hidden trajectory can be evaluated, even with uneven step sizes, which is one of the advantages with this approach”),
and wherein the extended decoder is further configured to interpolate the trajectory of the device based on the state of latent dynamics of the device (Garsdal, page 2, Section C. Using Neural ODE’s as VAE, “NODE’s are intrinsically well suited for temporal data and have many advantages due to the continuous aspect of the underlying ODE. It can be used as a generative model utilizing learned representations of a latent space. In this setting it can be used to predict or extrapolate data, as well as interpolate and impute missing data within a time series” where “Our work revolves around the NODE framework as a VAE, where it can learn the latent space distribution of a time series. In this setting the latent space corresponds to the initial state of the latent trajectory z0, which ODEsolver can take as input, in order to compute the entire latent trajectory z(t). The hidden trajectory is then evaluated at specific time steps, and these steps are then fed to a decoder that transforms it back into the temporal space” (Garsdal, page 2, 2nd column, 2nd paragraph).).
Regarding claim 4, Garsdal in view of Kum teaches the AI system of claim 3. Garsdal further teaches
wherein the extended decoder is further configured to extrapolate the trajectory of the device based on the state of latent dynamics of the device (Garsdal, page 2, Section C. Using Neural ODE’s as VAE, “NODE’s are intrinsically well suited for temporal data and have many advantages due to the continuous aspect of the underlying ODE. It can be used as a generative model utilizing learned representations of a latent space. In this setting it can be used to predict or extrapolate data, as well as interpolate and impute missing data within a time series” where “Our work revolves around the NODE framework as a VAE, where it can learn the latent space distribution of a time series. In this setting the latent space corresponds to the initial state of the latent trajectory z0, which ODEsolver can take as input, in order to compute the entire latent trajectory z(t). The hidden trajectory is then evaluated at specific time steps, and these steps are then fed to a decoder that transforms it back into the temporal space” (Garsdal, page 2, 2nd column, 2nd paragraph).).
Regarding claim 5, Garsdal in view of Kum teaches the AI system of claim 1. Garsdal in view of Kum further teaches
wherein the autoencoder architecture includes a plurality of extended decoders (Kum, page 16, paragraph 0041, “In this case, the recursive network 200 may include a plurality of encoders 410, a plurality of attention modules 420, and a plurality of decoders 430.” Examiner notes that the decoders are the extended decoders.),
and wherein each extended decoder of the plurality of extended decoders is configured to decode the state of latent dynamics into a different state space different from the input state space (Kum, page 17, paragraph 0044, “The decoder 430 may output at least one of the first prediction data (Yinitial; Ŷi) or second prediction data (Yfinal; Ŷi) using the feature vectors (hi) based on the importance (ai) calculated by the attention module 420. In this case, the decoder 430 may output at least one of the first prediction data (Yinitial; Ŷi) or the second prediction data (Yfinal; Ŷi), based on the hidden state information and memory cell state information extracted by the encoder 410 and the multiplication result (si) calculated by the attention module 420. For example, as illustrated in FIG. 7, the decoder 430 may detect at least one of the first prediction data (Yinitial; Ŷi) or the second prediction data (Yfinal; Ŷi) based on the multiplication result (si) for all the surrounding vehicles 310” where “Furthermore, the prediction data (Yfinal) may include future trajectories of the surrounding vehicles 301” (Kum, page 16, paragraph 0034). Examiner notes that the state of latent dynamics is the hidden state information. Examiner further notes that the different state space from the input state space is the future predictions.).
Garsdal and Kum are analogous to the claimed invention because they both use encoders and decoders with time series data. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to have modified Garsdal to have a plurality of decoders. Doing so is advantageous because “the electronic device can improve the accuracy of the final prediction data” (Kum, page 18, paragraph 0056).
Regarding claim 7, Garsdal in view of Kum teaches the AI system of claim 1. Garsdal in view of Kum further teaches
wherein the device is a vehicle, wherein the input state space is a signal space parametrized on acceleration measurements of the vehicle (Kum, page 17, paragraph 0046, “Referring to FIG. 9, at operation 910, the electronic device 100 may detect input data X. In this case, the input data X may be a time-series data. The processor 180 may detect the input data X having a first time interval. For example, the electronic device 100 may be related to the vehicle 300. In such a case, the processor 180 may check moving trajectories of the surrounding vehicles 301. The processor 180 may collect information on a surrounding situation of the electronic device 100. In this case, the processor 180 may collect the information on a surrounding situation of the electronic device 100, based on at least one of image data obtained through the camera module 120 or sensing data obtained through the sensor module 130. Accordingly, the processor 180 may check moving the trajectories of the surrounding vehicles 301 based on the information on a surrounding situation of the electronic device 100. That is, the moving trajectories of the surrounding vehicles 301 may be detected as the input data X.” Examiner notes that the moving trajectories of the surrounding vehicles are the signal space parametrized on acceleration measurements.),
and wherein the output state space is a location space parametrized on coordinates of the vehicle (Kum, page 17, paragraph 0044, “The first decoder 710 may detect a lateral movement of each surrounding vehicle 310. The second decoder 720 may detect a longitudinal movement of each surrounding vehicle 310. Accordingly the decoder may generate the first prediction data (Yinitial; Ŷi) or the second prediction data (Yfinal; Ŷi) by combining the lateral movement and longitudinal movement of each surrounding vehicle 310” where “according to various embodiments, the processor 180 may be configured to update the future trajectory based on the moving trajectory and future trajectory of the surrounding vehicle 301 using the recursive network 200 and to output the updated future trajectory as the second prediction data (Yfinal)” (Kum, page 19, paragraph 0078). Examiner notes that the output state space is the future trajectory and the location space parametrized on coordinates of the vehicle is the future trajectory of the surrounding vehicles based on the lateral and longitudinal movements.).
Garsdal and Kum are analogous to the claimed invention because they both use encoders and decoders with time series data. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to have implemented the autoencoder of Garsdal on the vehicle of Kum. Thus, this would be applying a known technique (encoding and decoding time series) to a known device (a vehicle) ready for improvement to yield predictable results (encoded and decoded time series) (MPEP 2143, subsection I.(D), Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
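The two-decoder arrangement cited from Kum (a lateral head and a longitudinal head whose outputs are combined into trajectory coordinates) can be sketched as follows. All dimensions, weights, and function names below are invented for illustration and do not appear in Kum.

```python
import numpy as np

# Hypothetical sketch: one decoder head predicts per-step lateral movement,
# the other per-step longitudinal movement, and the two are combined into
# (lateral, longitudinal) coordinates of a predicted trajectory.
rng = np.random.default_rng(1)
HIDDEN_DIM, HORIZON = 8, 5  # invented toy sizes

W_lat = rng.normal(size=(HORIZON, HIDDEN_DIM)) * 0.1  # lateral head
W_lon = rng.normal(size=(HORIZON, HIDDEN_DIM)) * 0.1  # longitudinal head

def decode_trajectory(hidden_state):
    """Map one hidden state to a HORIZON-step trajectory in a location space."""
    lateral = W_lat @ hidden_state        # per-step lateral movement
    longitudinal = W_lon @ hidden_state   # per-step longitudinal movement
    # Combine the two movements into coordinate pairs, shape (HORIZON, 2).
    return np.stack([lateral, longitudinal], axis=1)

traj = decode_trajectory(rng.normal(size=HIDDEN_DIM))
```

This only illustrates the mapping from a hidden state to a coordinate-valued output space; Kum's actual decoders operate within an attention-equipped recursive network.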
Regarding claim 13, Garsdal teaches the method of claim 8. Garsdal does not teach, but Kum does teach
wherein the device is a vehicle, wherein the input state space is a signal space parametrized on acceleration measurements of the vehicle (Kum, page 17, paragraph 0046, “Referring to FIG. 9, at operation 910, the electronic device 100 may detect input data X. In this case, the input data X may be a time-series data. The processor 180 may detect the input data X having a first time interval. For example, the electronic device 100 may be related to the vehicle 300. In such a case, the processor 180 may check moving trajectories of the surrounding vehicles 301. The processor 180 may collect information on a surrounding situation of the electronic device 100. In this case, the processor 180 may collect the information on a surrounding situation of the electronic device 100, based on at least one of image data obtained through the camera module 120 or sensing data obtained through the sensor module 130. Accordingly, the processor 180 may check moving the trajectories of the surrounding vehicles 301 based on the information on a surrounding situation of the electronic device 100. That is, the moving trajectories of the surrounding vehicles 301 may be detected as the input data X.” Examiner notes that the moving trajectories of the surrounding vehicles are the signal space parametrized on acceleration measurements.),
and wherein the output state space is a location space parametrized on coordinates of the vehicle (Kum, page 17, paragraph 0044, “The first decoder 710 may detect a lateral movement of each surrounding vehicle 310. The second decoder 720 may detect a longitudinal movement of each surrounding vehicle 310. Accordingly the decoder may generate the first prediction data (Yinitial; Ŷi) or the second prediction data (Yfinal; Ŷi) by combining the lateral movement and longitudinal movement of each surrounding vehicle 310” where “according to various embodiments, the processor 180 may be configured to update the future trajectory based on the moving trajectory and future trajectory of the surrounding vehicle 301 using the recursive network 200 and to output the updated future trajectory as the second prediction data (Yfinal)” (Kum, page 19, paragraph 0078). Examiner notes that the output state space is the future trajectory and the location space parametrized on coordinates of the vehicle is the future trajectory of the surrounding vehicles based on the lateral and longitudinal movements.).
Garsdal and Kum are analogous to the claimed invention because they both use encoders and decoders with time series data. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to have implemented the autoencoder of Garsdal on the vehicle of Kum. Thus, this would be applying a known technique (encoding and decoding time series) to a known device (a vehicle) ready for improvement to yield predictable results (encoded and decoded time series) (MPEP 2143, subsection I.(D), Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
Regarding claim 14, Garsdal teaches
A method for sensing a state of a device with continuous-time dynamics (Garsdal, page 2, 2nd column, last paragraph, “We have trained the NODE model on three different data sets to test the robustness and performance of the method across multiple data sets and use cases” where “The solar power data set comes from real life solar power data where the power output of a solar cell is measured at a 30 minute interval throughout the day” (Garsdal, page 3, 1st column, last paragraph). Examiner notes that the solar cell is the device.),
encoding each input data point of the time series input data from the input state space into a latent space to produce latent data points indexed in time according to time indices of corresponding input data points (Garsdal, page 3, 1st paragraph of Section C. Models, “The VAE NODE models follow the structure seen in [1] and Figure 2 where time series data is encoded into a latent space using a time variant neural net such as a Recurrent Neural Network (RNN), or in our case a Long-Short Term Memory (LSTM) network. The latent state is represented from zt0 to ensure that decoding happens from the beginning of the time series” and Garsdal, page 2, Figure 2,
[Garsdal, Figure 2 (media_image1.png), greyscale]
Examiner notes that the RNN encoder takes the observed time and encodes it into the latent space according to the indices of the data points.)
propagating the latent data points backward in time with a neural Ordinary Differential Equation (ODE) approximating dynamics of the device in the latent space to estimate an initial point of latent dynamics of the device in the latent space (Garsdal, page 3, 1st paragraph of Section C. Models, “The VAE NODE models follow the structure seen in [1] and Figure 2 where time series data is encoded into a latent space using a time variant neural net such as a Recurrent Neural Network (RNN), or in our case a Long-Short Term Memory (LSTM) network. The latent state is represented from zt0 to ensure that decoding happens from the beginning of the time series. In order to enable this, we encode the time series in reverse thus ending up in zt0” where “This augmented ODE is solved backwards in time, starting from the final time step t1 and going to the initial time step t0” (Garsdal, page 2, 1st column, 1st paragraph) and Garsdal, page 2, Figure 2,
[Garsdal, Figure 2 (media_image2.png), greyscale]
Examiner notes that encoding in reverse is propagating the latent data points backward in time. Examiner further notes that the boxed section in Figure 2 is the neural Ordinary Differential Equation approximating dynamics of the device in the latent space to estimate an initial point of latent dynamics of the device in the latent space. Examiner notes that the estimated initial point of latent dynamics is the circle zt0 that points down to t0.);
propagating the initial point of latent dynamics of the device forward in time till a time index of interest using the neural ODE to produce a state of latent dynamics of the device at the time index of interest (Garsdal, page 2, Figure 2,
[Garsdal, Figure 2 (media_image3.png), greyscale]
Examiner notes that the ODE Solve is the latent subnetwork that propagates the initial point zt0 forward in time until the time index of interest, tm. Examiner further notes that the state of latent dynamics of the device at the time index of interest is ztM.);
decoding the state of latent dynamics of the device into the output state space different from the input state space to produce output data including the state of the device at the time index of interest (Garsdal, page 2, Figure 2,
[Garsdal, Figure 2 (media_image2.png), greyscale]
and “Taking the NODE one step further, it can be utilized in continuous time extra- and interpolation for sequential data, by implementing the NODE as a decoder of a Variational Auto Encoder” (Garsdal, page 1, Introduction). Examiner notes that the boxed area in Figure 2 is the extended decoder and the output state space different from the input state space is the Extrapolation of tn+1 and tm. Examiner further notes that the state of the device at tn+1 and tm are the dots on the curve within the data space and the time index of interest is tm.).
Garsdal does not teach, but Kum does teach
A non-transitory computer readable storage medium embodied thereon a program executable by a processor for performing a method (Kum, page 19, paragraph 0079, “For example, a processor (e.g., the processor 180) of the computer device may invoke at least one of the one or more instructions stored in the storage medium, and may execute the instruction. This enables the computer device to operate to perform at least one function based on the invoked at least one instruction. The one or more instructions may include a code generated by a compiler or a code executable by an interpreter. The storage medium readable by the computer device may be provided in the form of a non-transitory storage medium. In this case, the term ‘non-transitory’ merely means that the storage medium is a tangible device and does not include a signal (e.g., electromagnetic wave).”):
Garsdal and Kum are analogous to the claimed invention because they both use encoders and decoders with time series data. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to have implemented the autoencoder of Garsdal on the non-transitory computer readable storage medium of Kum. Thus, this would be applying a known technique (encoding and decoding time series) to a known device (a non-transitory computer readable storage medium) ready for improvement to yield predictable results (encoded and decoded time series) (MPEP 2143, subsection I.(D), Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
Regarding claim 15, claim 15 recites substantially similar limitations to claim 2, and is therefore rejected under the same analysis.
Regarding claim 16, claim 16 recites substantially similar limitations to claim 3, and is therefore rejected under the same analysis.
Regarding claim 17, claim 17 recites substantially similar limitations to claim 4, and is therefore rejected under the same analysis.
Regarding claim 19, claim 19 recites substantially similar limitations to claim 7, and is therefore rejected under the same analysis.
Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Garsdal in view of Kum, and further in view of Blaiotta et al. (US 12,367,377 B2) (hereafter referred to as Blaiotta).
Regarding claim 6, Garsdal in view of Kum teaches the AI system of claim 1. Garsdal in view of Kum does not teach, but Blaiotta does teach
wherein the device is a mobile robot including a Wi-Fi receiver (Blaiotta, page 10, column 2, lines 61-65, “Various features of the present invention relate to making time-series predictions relating to a computer-controlled system, such as a robot, a (semi-) autonomous vehicle, a manufacturing machine, a personal assistant, or an access control system” where “as also illustrated in FIG. 1, the data interface may be constituted by a data storage interface 120 which may access the data 030, 041, 042 from a data storage 021. For example, the data storage interface 120 may be a memory interface or a persistent storage interface, e.g., a hard disk or an SSD interface, but also a personal, local or wide area network interface such as Bluetooth, Zigbee or Wi-Fi interface or an ethernet or fiberoptic interface. The data storage 021 may be an internal data storage of the system 100, such as a hard drive or SSD, but also an external data storage, e.g., a network-accessible data storage. In some embodiments, the data 030, 041, 042 may each be accessed from a different data storage, e.g., via a different subsystem of the data storage interface 120. Each subsystem may be of a type as is described above for the data storage interface 120” (Blaiotta, page 14, column 10, lines 1-16). Examiner notes that the device is a robot that has a Wi-Fi interface or Wi-Fi receiver.),
wherein the input state space is a signal space parameterized on Wi-Fi measurements of the Wi-Fi receiver (Blaiotta, page 11, column 4, lines 45-50, “The function can take as input previous values of the one or more measurable quantities for the object and the further object, and in some cases (but not necessarily) also the values of the time-invariant latent features for the object and the further object” where “Thus, the trainable function that determines the pairwise contributions, in such cases takes as input values of the one or more measurable quantities only for the previous state” (Blaiotta, page 11, column 4, lines 62-65) where “FIG. 1 shows a system for training a decoder model for making time-series predictions of multiple interacting objects” (Blaiotta, page 14, column 9, lines 13-15) and “as also illustrated in FIG. 1, the data interface may be constituted by a data storage interface 120 which may access the data 030, 041, 042 from a data storage 021. For example, the data storage interface 120 may be a memory interface or a persistent storage interface, e.g., a hard disk or an SSD interface, but also a personal, local or wide area network interface such as Bluetooth, Zigbee or Wi-Fi interface or an ethernet or fiberoptic interface” (Blaiotta, page 14, column 10, lines 1-8). Examiner notes that the measurable quantities taken as input are the Wi-Fi measurements since the Wi-Fi receiver or Wi-Fi interface accessed them. Examiner further notes that the signal space is the Wi-Fi interface passing the input.),
and wherein the output state space is a location space parametrized on coordinates of the mobile robot (Blaiotta, page 18, column 18, lines 35-55, “As shown in the figure, the decoder model may comprise a trained graph model GM, 602, and a trained local function LF, 603. The graph model GM may be applied to obtain first prediction contributions FPCi, 670, for respective objects, while the local function LF may be used to obtain second prediction contributions SPCi, 671, for the respective objects….The first and second contributions FPCi, SPCi, may then be combined, in a combination operation CMB, 680, to obtain predicted values xi,t+1, 611, of the one or more measurable quantities” where “The measurable quantities may comprise positional information about an object. The measurable quantities may for example comprise a position of the object. For example, the measurable quantities at a point in time may be represented as 2D coordinates or 3D coordinates” (Blaiotta, page 17, column 16, lines 54-58) and where “for a robot, the objects can for example be components of the robot itself, e.g., different components of a robot arm connected to each other by joints, and/or objects in the environment of the robot” (Blaiotta, page 11, column 3, lines 6-9). Examiner notes that the predicted values of the measurable quantities are the output state space. Examiner further notes that the 2D coordinates of the object are the coordinates of the mobile robot.).
Garsdal, Kum, and Blaiotta are analogous to the claimed invention because they all use encoders and decoders with time series data. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to have implemented the autoencoder of Garsdal in view of Kum on the robot of Blaiotta. Thus, this would be applying a known technique (encoding and decoding time series) to a known device (a robot) ready for improvement to yield predictable results (encoded and decoded time series) (MPEP 2143, subsection I.(D), Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
Regarding claim 18, claim 18 recites substantially similar limitations to claim 6, and is therefore rejected under the same analysis.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Garsdal in view of Blaiotta et al. (US 12,367,377 B2) (hereafter referred to as Blaiotta).
Regarding claim 12, Garsdal teaches the method of claim 8. Garsdal does not teach, but Blaiotta does teach
wherein the device is a mobile robot including a Wi-Fi receiver (Blaiotta, page 10, column 2, lines 61-65, “Various features of the present invention relate to making time-series predictions relating to a computer-controlled system, such as a robot, a (semi-) autonomous vehicle, a manufacturing machine, a personal assistant, or an access control system” where “as also illustrated in FIG. 1, the data interface may be constituted by a data storage interface 120 which may access the data 030, 041, 042 from a data storage 021. For example, the data storage interface 120 may be a memory interface or a persistent storage interface, e.g., a hard disk or an SSD interface, but also a personal, local or wide area network interface such as Bluetooth, Zigbee or Wi-Fi interface or an ethernet or fiberoptic interface. The data storage 021 may be an internal data storage of the system 100, such as a hard drive or SSD, but also an external data storage, e.g., a network-accessible data storage. In some embodiments, the data 030, 041, 042 may each be accessed from a different data storage, e.g., via a different subsystem of the data storage interface 120. Each subsystem may be of a type as is described above for the data storage interface 120” (Blaiotta, page 14, column 10, lines 1-16). Examiner notes that the device is a robot that has a Wi-Fi interface or Wi-Fi receiver.),
wherein the input state space is a signal space parameterized on Wi-Fi measurements of the Wi-Fi receiver (Blaiotta, page 11, column 4, lines 45-50, “The function can take as input previous values of the one or more measurable quantities for the object and the further object, and in some cases (but not necessarily) also the values of the time-invariant latent features for the object and the further object” where “Thus, the trainable function that determines the pairwise contributions, in such cases takes as input values of the one or more measurable quantities only for the previous state” (Blaiotta, page 11, column 4, lines 62-65) where “FIG. 1 shows a system for training a decoder model for making time-series predictions of multiple interacting objects” (Blaiotta, page 14, column 9, lines 13-15) and “as also illustrated in FIG. 1, the data interface may be constituted by a data storage interface 120 which may access the data 030, 041, 042 from a data storage 021. For example, the data storage interface 120 may be a memory interface or a persistent storage interface, e.g., a hard disk or an SSD interface, but also a personal, local or wide area network interface such as Bluetooth, Zigbee or Wi-Fi interface or an ethernet or fiberoptic interface” (Blaiotta, page 14, column 10, lines 1-8). Examiner notes that the measurable quantities taken as input are the Wi-Fi measurements since the Wi-Fi receiver or Wi-Fi interface accessed them. Examiner further notes that the signal space is the Wi-Fi interface passing the input.),
and wherein the output state space is a location space parametrized on coordinates of the mobile robot (Blaiotta, page 18, column 18, lines 35-55, “As shown in the figure, the decoder model may comprise a trained graph model GM, 602, and a trained local function LF, 603. The graph model GM may be applied to obtain first prediction contributions FPCi, 670, for respective objects, while the local function LF may be used to obtain second prediction contributions SPCi, 671, for the respective objects…. The first and second contributions FPCi, SPCi, may then be combined, in a combination operation CMB, 680, to obtain predicted values xi,t+1, 611, of the one or more measurable quantities” where “The measurable quantities may comprise positional information about an object. The measurable quantities may for example comprise a position of the object. For example, the measurable quantities at a point in time may be represented as 2D coordinates or 3D coordinates” (Blaiotta, page 17, column 16, lines 54-58) and where “for a robot, the objects can for example be components of the robot itself, e.g., different components of a robot arm connected to each other by joints, and/or objects in the environment of the robot” (Blaiotta, page 11, column 3, lines 6-9). Examiner notes that the predicted values of the measurable quantities are the output state space. Examiner further notes that the 2D coordinates of the object are the coordinates of the mobile robot.).
Garsdal and Blaiotta are analogous to the claimed invention because they use encoders and decoders with time series data. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to have implemented the autoencoder of Garsdal on the robot of Blaiotta. Thus, this would be applying a known technique (encoding and decoding time series) to a known device (a robot) ready for improvement to yield predictable results (encoded and decoded time series) (MPEP 2143 I. (C) Use of known technique to improve similar devices (methods, or products) in the same way).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Chen et al. (“Neural Ordinary Differential Equations”) also discusses ODEs used in neural networks and decoders.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN R HAEFNER whose telephone number is (571)272-1429. The examiner can normally be reached Monday - Thursday: 7:15 am - 5:15 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.R.H./Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148