Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 15-28 are pending.
This action is in response to the application filed on April 10, 2023.
CROSS-REFERENCE TO RELATED APPLICATIONS
The present application claims the benefit under 35 U.S.C. § 119 of European Patent Application No. EP 22 17 1094.0 filed on May 2, 2022, which is expressly incorporated herein by reference in its entirety.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 15, 24, and 26-28 are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention. Claims 15, 24, and 26-28 recite "… first output signal characterizing a likelihood of the first training input signal, and determining, by the machine learning system, a second output signal characterizing a likelihood of the second training input signal, determining a loss value, wherein the loss value characterizes a difference between the first output signal and the second output signal, and training the machine learning system based on the loss value; determining an output signal based on the input signal using the machine learning system, wherein the output signal characterizes a likelihood of the input signal; when the likelihood characterized by the output signal is equal to or below a predefined threshold …", which renders the scope of the claimed invention indefinite.
Examiner's Note: The conditional limitation "when the likelihood characterized by the output signal is equal to or below a predefined threshold" may render the claim indefinite by failing to particularly point out what step is being performed when the condition is met. Applicant is advised to amend the claims to resolve the 112 rejection set forth above.
2173.05(b) Relative Terminology [R-07.2022]
The use of relative terminology in claim language, including terms of degree, does not automatically render the claim indefinite under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Seattle Box Co., Inc. v. Industrial Crating & Packing, Inc., 731 F.2d 818, 221 USPQ 568 (Fed. Cir. 1984). Acceptability of the claim language depends on whether one of ordinary skill in the art would understand what is claimed, in light of the specification.
I. TERMS OF DEGREE
Terms of degree are not necessarily indefinite. "Claim language employing terms of degree has long been found definite where it provided enough certainty to one of skill in the art when read in the context of the invention." Interval Licensing LLC v. AOL, Inc., 766 F.3d 1364, 1370, 112 USPQ2d 1188, 1192-93 (Fed. Cir. 2014) (citing Eibel Process Co. v. Minnesota & Ontario Paper Co., 261 U.S. 45, 65-66 (1923) (finding ‘substantial pitch’ sufficiently definite because one skilled in the art ‘had no difficulty … in determining what was the substantial pitch needed’ to practice the invention)). Thus, when a term of degree is used in the claim, the examiner should determine whether the specification provides some standard for measuring that degree. Hearing Components, Inc. v. Shure Inc., 600 F.3d 1357, 1367, 94 USPQ2d 1385, 1391 (Fed. Cir. 2010); Enzo Biochem, Inc., v. Applera Corp., 599 F.3d 1325, 1332, 94 USPQ2d 1321, 1326 (Fed. Cir. 2010); Seattle Box Co., Inc. v. Indus. Crating & Packing, Inc., 731 F.2d 818, 826, 221 USPQ 568, 574 (Fed. Cir. 1984). If the specification does not provide some standard for measuring that degree, a determination must be made as to whether one of ordinary skill in the art could nevertheless ascertain the scope of the claim (e.g., a standard that is recognized in the art for measuring the meaning of the term of degree). For example, in Ex parte Oetiker, 23 USPQ2d 1641 (Bd. Pat. App. & Inter. 1992), the phrases "relatively shallow," "of the order of," "the order of about 5mm," and "substantial portion" were held to be indefinite because the specification lacked some standard for measuring the degrees intended.
The claim is not indefinite if the specification provides examples or teachings that can be used to measure a degree even without a precise numerical measurement (e.g., a figure that provides a standard for measuring the meaning of the term of degree). See, e.g., Interval Licensing LLC v. AOL, Inc., 766 F.3d 1364, 1371-72, 112 USPQ2d 1188, 1193 (Fed. Cir. 2014) (observing that although there is no absolute or mathematical precision required, "[t]he claims, when read in light of the specification and the prosecution history, must provide objective boundaries for those of skill in the art").
During prosecution, an applicant may also overcome an indefiniteness rejection by providing evidence that the meaning of the term of degree can be ascertained by one of ordinary skill in the art when reading the disclosure. For example, in Enzo Biochem, the applicant submitted a declaration under 37 CFR 1.132 showing examples that met the claim limitation and examples that did not. Enzo Biochem, 599 F.3d at 1335, 94 USPQ2d at 1328 (noting that applicant overcame an indefiniteness rejection over "not interfering substantially" claim language by submitting a declaration under 37 CFR 1.132 listing eight specific linkage groups that applicant declared did not substantially interfere with hybridization or detection).
Even if the specification uses the same term of degree as in the claim, a rejection is proper if the scope of the term is not understood when read in light of the specification. While, as a general proposition, broadening modifiers are standard tools in claim drafting in order to avoid reliance on the doctrine of equivalents in infringement actions, when the scope of the claim is unclear a rejection under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph, is proper. See In re Wiggins, 488 F. 2d 538, 541, 179 USPQ 421, 423 (CCPA 1973).
When relative terms are used in claims wherein the improvement over the prior art rests entirely upon size or weight of an element in a combination of elements, the adequacy of the disclosure of a standard is of greater criticality.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 15-28 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kviatkovsky et al. (US 11,537,813 B1).
With respect to claims 15, 24, 26-28, Kviatkovsky et al teaches
obtaining a first training input signal and a second training input signal, wherein the first training input signal characterizes an in-distribution signal and the second training input signal characterizes a contrastive signal (Abstract, During a training phase, a first machine learning system. During training, the first system determines latent vector spaces associated with identity, appearance, and so forth. During a generation phase, latent vectors from the latent vector spaces are generated and used as input to the first machine learning system to generate candidate synthetic image data. The candidate image data is assessed to determine suitability for inclusion into a set of synthetic image data that may be used for subsequent use in training a second machine learning system to recognize an identity of a hand presented by a user. the candidate synthetic image data is compared to previously generated synthetic image data to avoid duplicative synthetic identities. The second machine learning system is then trained using the approved candidate synthetic image data (col. 2, lines 48-67, Training of a recognition system that uses machine learning involves the use of input data. deep learning neural network may be trained using a set of input data. Performance of the recognition system improves as a size of samples in the set of input data increases. samples within the input data should not be too similar, should be consistent with the data that will be provided as input to the recognition system, and so forth. Inadequate input data can result in the recognition system being trained incorrectly or insufficiently. Once trained, the recognition system may be evaluated using additional input data that is different from that used during training. the recognition system is trained using a first set of input data while the recognition system is evaluated using a second set of input data);
determining, by the machine learning system, a first output signal characterizing a likelihood of the first training input signal, and determining, by the machine learning system, a second output signal characterizing a likelihood of the second training input signal (col. 2, lines 48-67, the recognition system is trained using a first set of input data while the recognition system is evaluated using a second set of input data, with the data in the first set being disjoint or different from the data in the second set);
determining a loss value, wherein the loss value characterizes a difference between the first output signal and the second output signal (col. 4, lines 1-15, indicative of loss values associated with the loss functions associated with processing the first output image and the second output image. The backpropagation data may include loss data such as the discriminator loss data, as well as loss data associated with the embedding vector determination. For example, appearance loss data of latent vectors associated with appearance should be minimal between a first output image and a second output image that use the same synthetic appearance vector as input. In another example, identification loss data of latent vectors associated with identification should be minimal between the first output image and the second output image that use different synthetic identification vectors as input. The loss data may be provided as backpropagation data to facilitate training); and
training the machine learning system based on the loss value (FIG. 1, FIG2A-B, col. 13, lines 1-25, to train the neural network of the generative module 230. The backpropagation data 256 may comprise one or more of the discriminator loss data 254, the appearance loss data 250, the identification loss data 252, or other loss data. In some implementations, the backpropagation data 256 may include information indicative of any known relationship between the first training input 280(1) and the second training input 280(2). For example, the backpropagation data 256 may include information indicative of, or may be based on, known input vectors 360. The known input vectors 360 may comprise information about the training input 280. For example, the backpropagation data 256 may include data that the first training input 280(1) and the second training input 280(2) contained the same identification vector). FIG. 2B, a synthetic training input generator module 270 may be used to generate the training input 280. The synthetic training input generator module 270 may use information, such as latent vector space data 272 to determine one or more synthetic vectors. While training the GAN module 220, the synthetic training input generator module 270 may randomly generate first training input 280(1), second training input 280(2), and so forth. During training of the GAN module 220, the synthetic training input generator module 270 may generate pairs of training synthetic vectors. For example, the training input 280 may comprise one or more of a training synthetic demographic vector 282, a training synthetic appearance vector 284, or a training synthetic identification vector 286. The training input 280 may be provided to the generative module 230 during training).
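For context, the claimed training procedure recited in independent claims 15, 24, and 26-28 can be sketched as follows. All names here (`output_signal`, `loss_value`, the linear scorer `w`) are hypothetical illustrations for the record, not the applicant's or the Kviatkovsky reference's actual implementation: a model assigning a likelihood score to an input signal is trained so that an in-distribution (first) training input scores higher than a contrastive (second) training input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "machine learning system": a linear scorer w @ x whose output is
# treated as a likelihood score for the input signal.
w = np.zeros(4)

def output_signal(w, x):
    # Output signal characterizing a likelihood of the input signal.
    return float(w @ x)

def loss_value(w, x_in, x_contrast):
    # The loss characterizes a difference between the two output signals:
    # it decreases as the in-distribution score exceeds the contrastive one.
    return output_signal(w, x_contrast) - output_signal(w, x_in)

lr = 0.1
for _ in range(100):
    x_in = rng.normal(1.0, 0.1, size=4)         # first training input (in-distribution)
    x_contrast = rng.normal(-1.0, 0.1, size=4)  # second training input (contrastive)
    # Train based on the loss value; its gradient w.r.t. w is (x_contrast - x_in).
    w -= lr * (x_contrast - x_in)

# After training, an in-distribution signal scores above a contrastive one,
# so a predefined threshold between the two scores can separate them.
print(output_signal(w, np.ones(4)) > output_signal(w, -np.ones(4)))
```

This sketch only illustrates the sequence of claimed steps (two output signals, a difference-based loss, training on that loss); it makes no representation about how the reference realizes them.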
With respect to claim 16, Kviatkovsky et al teaches determining a plurality of first output signals for a corresponding plurality of first training input signals; determining a plurality of second output signals for a corresponding plurality of second training input signals; determining the loss value based on a difference of a mean of the plurality of first output signals and a mean of the plurality of second output signals (FIG. 1, FIG2A-B, col. 13, lines 1-25, to train the neural network of the generative module 230. The backpropagation data 256 may comprise one or more of the discriminator loss data 254, the appearance loss data 250, the identification loss data 252, or other loss data. In some implementations, the backpropagation data 256 may include information indicative of any known relationship between the first training input 280(1) and the second training input 280(2)).
With respect to claim 17, Kviatkovsky et al teaches the loss value is determined according to a loss function, wherein the loss function is characterized by the following formula, wherein n is the number of signals in the plurality of first training input signals, m is the number of signals in the plurality of second training input signals, pθ(·) indicates performing inference on the machine learning system parametrized by parameters θ for an input signal, x1_i is the i-th signal from the plurality of first training input signals, and x2_j is the j-th signal from the plurality of second training input signals (FIG. 1, FIG2A-B, col. 13, lines 1-25, to train the neural network of the generative module 230. The backpropagation data 256 may comprise one or more of the discriminator loss data 254, the appearance loss data 250, the identification loss data 252, or other loss data. In some implementations, the backpropagation data 256 may include information indicative of any known relationship between the first training input 280(1) and the second training input 280(2)).
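Claims 16 and 17 recite a loss based on the difference of the mean of the first output signals and the mean of the second output signals. The formula image itself is not reproduced in the text of this record, so the exact form L(θ) = (1/n)·Σ pθ(x1_i) − (1/m)·Σ pθ(x2_j) used below is an assumption consistent with the recited difference-of-means language; the function name is hypothetical.

```python
import numpy as np

def mean_difference_loss(p_first, p_second):
    """Loss based on the difference of the mean of the first output
    signals and the mean of the second output signals (claims 16-17).

    p_first:  likelihoods p_theta(x1_i) for the n first training inputs
    p_second: likelihoods p_theta(x2_j) for the m second training inputs
    """
    p_first = np.asarray(p_first, dtype=float)
    p_second = np.asarray(p_second, dtype=float)
    return float(p_first.mean() - p_second.mean())

# Example: n = 3 first-signal likelihoods, m = 2 second-signal likelihoods.
print(mean_difference_loss([0.9, 0.8, 0.7], [0.2, 0.4]))  # approximately 0.5
```

Note that n and m need not be equal; each mean is taken over its own plurality of training input signals before the difference is formed.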
With respect to claim 18, Kviatkovsky et al teaches signal obtained based on a sensor signal (Abstract, During a training phase, a first machine learning system. During training, the first system determines latent vector spaces associated with identity, appearance, and so forth. During a generation phase, latent vectors from the latent vector spaces are generated and used as input to the first machine learning system to generate candidate synthetic image data. The candidate image data is assessed to determine suitability for inclusion into a set of synthetic image data that may be used for subsequent use in training a second machine learning system to recognize an identity of a hand presented by a user).
With respect to claim 19, Kviatkovsky et al teaches second training input signal is obtained by augmenting the first training input signal (Abstract, During a training phase, a first machine learning system. During training, the first system determines latent vector spaces associated with identity, appearance, and so forth. During a generation phase, latent vectors from the latent vector spaces are generated and used as input to the first machine learning system to generate candidate synthetic image data. The candidate image data is assessed to determine suitability for inclusion into a set of synthetic image data that may be used for subsequent use in training a second machine learning system to recognize an identity of a hand presented by a user).
With respect to claim 20, Kviatkovsky et al teaches to determine a feature representation from a respective input signal provided to the machine learning system and a corresponding output signal is determined based on the feature representation extracted for the respective input signal (Abstract, During a training phase, a first machine learning system. During training, the first system determines latent vector spaces associated with identity).
With respect to claim 21, Kviatkovsky et al teaches feature extraction module is trained to map similar input signals to similar feature representations (col. 1, lines 10-15, The use of the same reference numbers in different figures indicates similar or identical items or features).
With respect to claim 22, Kviatkovsky et al teaches (i) the machine learning system is a neural network including a normalizing flow or a variational autoencoder or a diffusion model, or (ii) the machine learning system includes a neural network configured to determine an output signal based on a feature representation (FIG. 1 illustrates a system to provide data to a first machine learning system that is trained to recognize a user based on images acquired using a plurality of modalities).
With respect to claim 23, Kviatkovsky et al teaches loss value characterizes either the first output signal or a negative of the second output signal, and the training includes at least one iteration in which the loss value characterizes the first output signal, and at least one iteration in which the loss value characterizes the second output signal (FIG. 1 illustrates a system to provide data to a first machine learning system that is trained to recognize a user based on images acquired using a plurality of modalities).
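The alternating loss formulation recited in claim 23 (some iterations use a loss characterizing the first output signal, others a loss characterizing the negative of the second output signal) can be sketched as follows. The even/odd scheduling and the function name are illustrative assumptions, not the applicant's claimed schedule.

```python
def iteration_loss(step, out_first, out_second):
    # In even iterations the loss characterizes the first output signal;
    # in odd iterations it characterizes the negative of the second
    # output signal. The even/odd schedule is an illustrative assumption.
    return out_first if step % 2 == 0 else -out_second

# Four iterations alternating between the two loss forms, satisfying the
# claim's requirement of at least one iteration of each kind.
losses = [iteration_loss(t, 1.5, 2.0) for t in range(4)]
print(losses)  # [1.5, -2.0, 1.5, -2.0]
```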
With respect to claim 25, Kviatkovsky et al teaches input signal characterizes an internal state of a technical system and/or a state of an environment of a technical system (FIG. 1 illustrates a system to provide data to a first machine learning system that is trained to recognize a user based on images acquired using a plurality of modalities).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISAAC M WOO whose telephone number is (571)272-4043. The examiner can normally be reached 9:00 to 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tony Mahmoudi can be reached on 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ISAAC M WOO/Primary Examiner, Art Unit 2163