Prosecution Insights
Last updated: April 19, 2026
Application No. 18/295,692

COMPUTER IMPLEMENTED METHOD AND APPARATUS FOR UNSUPERVISED REPRESENTATION LEARNING

Non-Final OA: §101, §103, §112
Filed: Apr 04, 2023
Examiner: BREENE, PAUL J
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: Robert Bosch GmbH
OA Round: 1 (Non-Final)
Grant Probability: 56% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 6m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 56% (29 granted / 52 resolved; +0.8% vs TC avg)
Interview Lift: +34.6% for resolved cases with interview (strong)
Typical Timeline: 4y 6m average prosecution (29 currently pending)
Career History: 81 total applications across all art units

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      31.2%    -8.8%
§103      44.9%    +4.9%
§102       8.1%    -31.9%
§112      14.4%    -25.6%

Tech Center averages are estimates; based on career data from 52 resolved cases.

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 is indefinite because it employs internally inconsistent terminology regarding the quantity computed by the recited "similarity kernel," such that the scope of the "similarities" used throughout the claim is unclear. Specifically, claim 1 recites "providing a similarity kernel for determining a similarity between embeddings, the kernel being for determining a Euclidean distance between the embeddings," and further recites "determining with the similarity kernel similarities for pairs" of embeddings. A Euclidean distance is a dissimilarity metric (lower values indicate greater closeness), whereas a similarity measure is typically defined such that larger values indicate greater closeness. The claim does not specify whether the "similarity" is the Euclidean distance itself, the negative of the Euclidean distance, a normalized version of the distance, or a function of the distance (e.g., a radial basis function). This ambiguity propagates through the remainder of the claim, including the determination of "similarities" for assigned pairs and the computation of costs based on those "similarities," such that a person of ordinary skill in the art cannot determine with reasonable certainty the scope of the claim. Accordingly, claim 1 fails to particularly point out and distinctly claim the subject matter regarded as the invention.
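To make the examiner's indefiniteness point concrete, here is a minimal Python sketch (illustrative only, not from the application) of three non-equivalent readings of a "similarity" derived from a Euclidean distance; the function names and the `gamma` parameter are hypothetical.

```python
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dissimilarity: smaller means closer."""
    return float(np.linalg.norm(a - b))

# Three non-equivalent readings of "similarity" from that distance.
def similarity_as_distance(a, b):
    return euclidean_distance(a, b)           # reading 1: the distance itself

def similarity_as_negative_distance(a, b):
    return -euclidean_distance(a, b)          # reading 2: negated, larger = closer

def similarity_as_rbf(a, b, gamma=1.0):
    return np.exp(-gamma * euclidean_distance(a, b) ** 2)  # reading 3: RBF kernel

a, b = np.array([0.0, 0.0]), np.array([3.0, 4.0])
print(similarity_as_distance(a, b),           # 5.0
      similarity_as_negative_distance(a, b),  # -5.0
      similarity_as_rbf(a, b))                # ~1.4e-11
```

Because downstream cost terms change sign and monotonicity depending on which reading is used, the claimed loss behaves differently under each, which is the crux of the §112(b) rejection.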
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Regarding claim 1 and analogous claims 10 and 11:

Step 1: Is the claim directed to one of the four statutory categories? Yes, the claim is directed to a method.

Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? Yes. The limitations "determining with the encoder embeddings of the samples from the first domain and embeddings of the samples from the second domain; determining with the similarity kernel similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain; and determining at least one parameter of the encoder depending on a loss, wherein the loss depends on a first cost for the similarities of the pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain that are assigned to each other according to the reference assignment and an estimate for a second cost for the similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain that are assigned to each other according to a possible assignment of a plurality of possible assignments between pairs of one sample from the first domain and one sample from the second domain" are all directed to mathematical concepts under MPEP 2106.04(a)(2)(I).

Step 2A, prong 2: Do the additional elements integrate the judicial exception into a practical application? No. The limitations "providing an input data set including samples of a first domain and samples of a second domain; providing a reference assignment between pairs of one sample from the first domain and one sample from the second domain; providing an encoder that is configured to map a sample of the input data set depending on at least one parameter of the encoder to an embedding; providing a similarity kernel for determining a similarity between embeddings, the kernel being for determining a Euclidean distance between the embeddings" are directed to receiving and transmitting data under MPEP 2106.05(g).

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. The same "providing" limitations are directed to the well-understood, routine, and conventional activity of receiving and transmitting data under MPEP 2106.05(d). The claim as a whole does not amount to significantly more than the judicial exception.

Regarding claim 2: Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? Yes. The limitation "wherein the first cost depends on a sum of the similarities between the embeddings that are assigned to each other according to the reference assignment" is directed to a mathematical concept under MPEP 2106.04(a)(2)(I).

Regarding claim 3: Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? Yes. The limitation "wherein the loss includes a difference between the first cost and the estimate for the second cost" is directed to a mathematical concept under MPEP 2106.04(a)(2)(I).
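For orientation, here is a hedged NumPy sketch of the loss structure the examiner characterizes as mathematical: a first cost summed over reference-assigned pairs and an estimated second cost over candidate assignments, with the claim-3 loss being their difference. This is one plausible reading only; the negative-distance similarity and the permutation sampling used to estimate the second cost are assumptions, since the claim does not fix the estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarities(emb_a: np.ndarray, emb_b: np.ndarray) -> np.ndarray:
    """Pairwise 'similarity' as negative Euclidean distance (one possible reading)."""
    diff = emb_a[:, None, :] - emb_b[None, :, :]
    return -np.linalg.norm(diff, axis=-1)

def loss(emb_a, emb_b, reference, n_candidates=100):
    """first cost minus an estimate of the second cost, per the claim-3 structure."""
    S = similarities(emb_a, emb_b)                 # n x n similarity matrix
    n = len(reference)
    first_cost = S[np.arange(n), reference].sum()  # reference-assigned pairs
    # Estimate the second cost over sampled candidate assignments (permutations).
    candidate_costs = [S[np.arange(n), rng.permutation(n)].sum()
                       for _ in range(n_candidates)]
    second_cost_estimate = max(candidate_costs)    # e.g., best competing assignment
    return first_cost - second_cost_estimate

emb_a, emb_b = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
print(loss(emb_a, emb_b, reference=np.arange(8)))
```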
Regarding claim 4: Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? Yes. The limitations "providing a function that is configured to map a plurality of sums of the similarities between embeddings that are assigned to each other according to different possible assignments to a possible cost for the plurality of possible assignments, wherein the possible cost is weighted by a weight, wherein the function is configured to map a plurality of sums of negatives of the similarities between the embeddings that are assigned to each other according to the different possible assignments to a virtual cost, wherein the weight depends on a projection, including a minimum-distance Euclidean projection, of the virtual cost to a simplex that has one dimension less than the plurality of possible assignments, and wherein the second cost depends on the possible cost that is weighted by the weight" are directed to a mathematical concept under MPEP 2106.04(a)(2)(I).

Regarding claim 5: Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? Yes. The limitations "determining with the similarity kernel a first matrix including as its elements similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the first domain, and a second matrix including as its elements similarities for pairs of one embedding of a sample from the second domain and one embedding of a sample from the second domain, wherein the second cost depends on a sum of the similarities between the embeddings within the first domain that are assigned according to the reference assignment, and the similarities between the embeddings within the second domain that are assigned according to the reference assignment, and on a maximum scalar product between the eigenvalues of the first matrix and the eigenvalues of the second matrix" are directed to a mathematical concept under MPEP 2106.04(a)(2)(I).

Regarding claim 6: Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? Yes. The limitations "providing a matrix including as its elements the reference assignment; providing a matrix including as its elements the possible assignment; and providing a matrix including as its elements the similarities between the pairs of embeddings" are directed to a mathematical concept under MPEP 2106.04(a)(2)(I).

Regarding claim 7: Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? Yes. The limitation "determining the at least one parameter of the encoder depending on a solution to an optimization problem that is defined depending on the loss" is directed to a mathematical concept under MPEP 2106.04(a)(2)(I).

Regarding claim 8: Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? Yes. The limitation "includes determining a first number of first samples that is a subset of the samples including samples of the first domain and determining a second number of second samples that is a subset of the samples including samples of the second domain" is directed to a mental process of evaluation under MPEP 2106.04(a)(2)(III). Step 2A, prong 2: Do the additional elements integrate the judicial exception into a practical application? No. The limitation "providing samples of the first domain and of the second domain, wherein the providing of the input data set" is directed to mere data gathering under MPEP 2106.05(g). Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. The same limitation is directed to the well-understood, routine, and conventional activity of receiving and transmitting data over a network under MPEP 2106.05(d).
Regarding claim 9: Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? Yes. The limitations "determining with a capturing device an input; determining with the encoder a representation of the input; and determining an output for operating the technical system depending on the representation of the input" are directed to a mental process of judgment under MPEP 2106.04(a)(2)(III). Step 2A, prong 2: Do the additional elements integrate the judicial exception into a practical application? No. The limitation "operating a technical system, wherein the operating of the technical system includes:" is directed to a field of use under MPEP 2106.05(h). Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. The same limitation is directed to a field of use under MPEP 2106.05(h).

Allowable Subject Matter

Claims 4-5 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the §101 and §112 rejections were overcome.

Claim 4 is allowable because the prior art of record fails to teach or suggest providing a function that (i) maps, for a plurality of possible assignments, (a) sums of similarities between assigned embeddings to a possible cost and (b) sums of negatives of similarities to a virtual cost, and (ii) weights the possible cost by a weight, where the weight depends on a projection that includes a minimum-distance Euclidean projection of the virtual cost to a simplex having one less dimension than the plurality of possible assignments, with the second cost depending on the weighted possible cost. This specific cost construction and simplex-projection-based weighting scheme constitutes a particular optimization structure for assignment selection that is not disclosed or rendered obvious by the cited references.

Claim 5 is allowable because the prior art of record fails to teach or suggest determining, using a similarity kernel, (i) a first matrix including similarities for embedding pairs from the first domain and (ii) a second matrix including similarities for embedding pairs from the second domain, and further defining the second cost to depend on (a) sums of similarities between embeddings within each domain assigned according to a reference assignment and (b) a maximum scalar product between eigenvalues of the first matrix and eigenvalues of the second matrix. The claimed eigenvalue-based coupling term (maximum scalar product of eigenvalue sets) is a specific spectral constraint on the cost function that is not shown or suggested by the applied art and provides a patentable distinction.
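For readers unfamiliar with the two mathematical features singled out as allowable, here is a hedged NumPy sketch of (1) the standard minimum-distance Euclidean projection onto the probability simplex (the well-known sort-and-threshold algorithm; whether the application uses this exact variant is an assumption) and (2) a maximum scalar product of two eigenvalue sets, which by the rearrangement inequality is attained by sorting both sets in the same order.

```python
import numpy as np

def project_to_simplex(v: np.ndarray) -> np.ndarray:
    """Minimum-distance Euclidean projection of v onto the probability simplex
    {w : w >= 0, sum(w) = 1}, via the classic sort-and-threshold algorithm."""
    u = np.sort(v)[::-1]                          # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)        # shift so the result sums to 1
    return np.maximum(v + theta, 0.0)

def max_eigenvalue_scalar_product(K1: np.ndarray, K2: np.ndarray) -> float:
    """Max over orderings of the dot product of two eigenvalue sets: by the
    rearrangement inequality, sort both ascending and take the dot product."""
    e1 = np.linalg.eigvalsh(K1)                   # eigenvalues in ascending order
    e2 = np.linalg.eigvalsh(K2)
    return float(e1 @ e2)

w = project_to_simplex(np.array([0.4, 1.1, -0.3]))
print(w, w.sum())                                 # [0.15, 0.85, 0.0], sums to 1
```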
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 6-11 are rejected under 35 U.S.C. 103 as being unpatentable over US Pre-Grant Publication 2016/0078359 (Csurka et al.; "Csurka") in view of US Pre-Grant Publication 2023/0106141 (Kalantidis et al.; "Kalantidis").

Regarding claim 1 and analogous claims 10 and 11, Csurka teaches:

1. A computer implemented method of unsupervised representation learning, the method comprising the following steps: providing an input data set including samples of a first domain and samples of a second domain. (Csurka, ¶0080) "The domain adaptation (DA) method can use unsupervised or semi-supervised learning [i.e., a computer implemented method of unsupervised representation learning]." (Csurka, ¶0032) "With reference to FIG. 2, an exemplary image classification system 10 is illustrated in an operating environment. The system takes as input a new sample 12 to be classified. The system 10 assigns a class label 14 or labels probabilistically to the sample 12, based on labels of a training set 16 of training samples stored in a database 18, which for each of a set of domains, contains a collection 20, 22, 24 of training samples [i.e., providing an input data set including samples of a first domain and samples of a second domain]. While three domains are illustrated, it will be appreciated that any number of domains may be considered."

2. Providing a reference assignment between pairs of one sample from the first domain and one sample from the second domain. (Csurka, ¶0017) "In accordance with one aspect of the exemplary embodiment, a classification system includes memory which stores, for each of a set of classes, a classifier model for assigning a class probability to a test sample from a target domain. The classifier model has been learned with training samples from the target domain and training samples from at least one source domain different from the target domain [i.e., providing a reference assignment between pairs of one sample from the first domain and one sample from the second domain]."

3. Providing a similarity kernel for determining a similarity between embeddings, the kernel being for determining a Euclidean distance between the embeddings. (Csurka, ¶0048) "In one embodiment, the distance between the sample x_i and one of the class means μ^c in a projected feature space, denoted d_W(x_i, μ^c) = ‖W(x_i − μ^c)‖², i.e., [i.e., providing a similarity kernel for determining a similarity between embeddings] the distance is computed as the squared Euclidean distance between instance x_i and the class mean μ^c in some projected feature space given by the transformation matrix W [i.e., the kernel being for determining a Euclidean distance between the embeddings]."
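A minimal sketch of the Csurka mechanism quoted above and in the next limitation, as I read ¶0048: a squared Euclidean distance in a W-projected space, converted to a softmax class probability. Variable names and the uniform mixture weights are assumptions drawn from the quoted text, not a definitive implementation.

```python
import numpy as np

def projected_sq_distance(W: np.ndarray, x: np.ndarray, mu: np.ndarray) -> float:
    """d_W(x, mu) = ||W(x - mu)||^2: squared Euclidean distance in the feature
    space projected by W (identity W gives the original space)."""
    d = W @ (x - mu)
    return float(d @ d)

def class_probabilities(W, x, class_means):
    """Softmax over negative projected distances: the probability of class c is
    an exponential function of the distance, per the quoted reformulation."""
    dists = np.array([projected_sq_distance(W, x, mu) for mu in class_means])
    logits = -dists
    p = np.exp(logits - logits.max())             # subtract max for stability
    return p / p.sum()

W = np.eye(2)
means = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
print(class_probabilities(W, np.array([0.5, 0.5]), means))  # favors class 0
```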
4. Determining with the similarity kernel similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain. (Csurka, ¶0048) "If W is the identity (I), this corresponds to the squared Euclidean distance in the original feature space. Eq. (1) can be reformulated as a multi-class softmax assignment using a mixture model (with equal weights), where the probability that an image x_i belongs to the class c is an exponential function of the distance in the projected feature space [i.e., determining with the similarity kernel similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain]."

5. And determining at least one parameter of the encoder depending on a loss. (Csurka, ¶0143, Eq. 10) "Machine Learning Res. (JMLR) 10, pp. 207-244 (2009) was used, where the ranking loss is optimized on triplets: L_qpn = max(0, [1 + d_W(x_q, x_p) − d_W(x_q, x_n)]) (10) [i.e., and determining at least one parameter of the encoder depending on a loss]."

6. And one embedding of a sample from the second domain that are assigned to each other according to a possible assignment of a plurality of possible assignments between pairs of one sample from the first domain and one sample from the second domain. (Csurka, ¶0089, Eq. 2) "Where an initial set of labeled samples in the target domain is available, this step may include using an NCM classifier as exemplified in Eq. (2) for each domain to predict the labels for the labeled samples… [i.e., according to a possible assignment of a plurality of possible assignments between pairs of one sample from the first domain and one sample from the second domain.] In computing the class predictions, the current transformation matrix W_r may be used for embedding the domain samples and class means in the projected space [i.e., and one embedding of a sample from the second domain that are assigned to each other]. The average classification accuracy of this classifier can be used directly as the respective new domain-specific weight w_d^r or otherwise used to compute an update for the weight."

Csurka does not explicitly teach, but Kalantidis teaches:

1. Providing an encoder that is configured to map a sample of the input data set depending on at least one parameter of the encoder to an embedding. (Kalantidis, ¶0034) "Turning now to the drawings, FIG. 1 shows an example system 100 for training a dimensionality reduction model. The dimensionality reduction model in the system 100 is embodied in or a component of an encoder 102, which is configured to receive data such as can be represented by an input vector, in an input space that is a higher dimensional representation space, e.g., a D-dimensional space, and generate an output vector in an output space that is a lower-dimensional representation space, e.g., a d-dimensional space, where D is greater than d [i.e., providing an encoder that is configured to map a sample of the input data set]." (Kalantidis, ¶0035) "The datapoints 104a-104d can represent inputs that are used for a variety of processing tasks. For instance, each datapoint 104a-104d can represent a token (e.g., a word, phrase, sentence, paragraph, symbol, etc.), a document, an image, an image patch (arbitrary part of an image), a video, a waveform, a 3D model, a 3D point cloud, embeddings of tabular data, etc. [i.e., depending on at least one parameter of the encoder to an embedding]."
2. Determining with the encoder embeddings of the samples from the first domain and embeddings of the samples from the second domain. (Kalantidis, ¶0087) "Exploiting the fact that batch normalization (BN) is linear during inference (as it reduces to a linear scaling applied to the features that can be embedded in the weights of an adjacent linear layer), f_θ is alternatively formulated as a multi-layer linear model, where f_θ is a sequence of l layers, each composed of a linear layer followed by a BN layer [i.e., determining with the encoder embeddings of the samples from the first domain and embeddings of the samples from the second domain]."

3. Wherein the loss depends on a first cost for the similarities of the pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain that are assigned to each other according to the reference assignment, and an estimate for a second cost for the similarities for pairs of one embedding of a sample from the first domain. (Kalantidis, ¶0056) "The similarity preservation loss has the objective of maintaining invariance to changes (e.g., distortions, augmentations, transformations, alterations, etc.) during dimensionality reduction. It can be computed, for instance, by computing a cross-correlation between the first 124a and 124b [i.e., wherein the loss depends on a first cost for the similarities of the pairs of one embedding of a sample from the first domain] and second 124c and 124d augmented dimensional vectors [i.e., and one embedding of a sample from the second domain that are assigned to each other according to the reference assignment] over the batch of b augmented dimension vector pairs 122a and 122b for common dimensions, such as dimension 0 ≤ i ≤ d′." (Kalantidis, ¶0058) "The cross-correlation matrix 130 can be normalized such that the optimal sum at common dimensions (C_ii) across the batch b is equal to one (identity) to compute similarity preservation loss, while the optimal sum at dimensions other than common dimensions C_ij across the batch b is equal to zero to compute redundancy reduction loss, as shown in the identity matrix 132 [i.e., and an estimate for a second cost for the similarities for pairs of one embedding of a sample from the first domain]."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Csurka with Kalantidis. The motivation is to incorporate the dimensionality reduction encoder and losses in order to improve the system of Csurka by "effectively and in an unsupervised manner learn[ing] low-dimensional spaces where local neighborhoods of the input space are preserved" (Kalantidis, ¶0032).

Regarding claim 2: The combination of Csurka and Kalantidis teaches the method of claim 1. Kalantidis teaches:

1. Wherein the first cost depends on a sum of the similarities between the embeddings that are assigned to each other according to the reference assignment. "For instance, FIG. 1 shows a cross-correlation matrix 130 of size d′×d′ computed between the first 124a and 124b and second 124c and 124d augmented dimensional vectors averaged over the batch of b augmented dimension vector pairs 122a and 122b [i.e., wherein the first cost depends on a sum of the similarities between the embeddings that are assigned to each other according to the reference assignment]." The examiner notes that to take an average, the sum of the set must be taken.

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Csurka with Kalantidis. The motivation is the same as for claim 1.
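A compact sketch of the two-term cross-correlation objective the Kalantidis quotes describe, in the Barlow-Twins style (diagonal entries pushed toward 1 for similarity preservation, off-diagonal entries toward 0 for redundancy reduction); this same two-term structure reappears in the claim 3 mapping below. The per-dimension standardization and the `lambda_offdiag` weight are assumptions, not taken from the cited publication.

```python
import numpy as np

def cross_correlation_loss(z1: np.ndarray, z2: np.ndarray,
                           lambda_offdiag: float = 5e-3) -> float:
    """Two-term loss over a batch of embedding pairs (b x d each):
    an invariance term drives the diagonal of the cross-correlation toward 1,
    a redundancy-reduction term drives off-diagonal entries toward 0."""
    z1 = (z1 - z1.mean(0)) / z1.std(0)            # standardize each dimension
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    C = (z1.T @ z2) / len(z1)                     # d x d cross-correlation matrix
    on_diag = ((np.diag(C) - 1.0) ** 2).sum()     # similarity preservation
    off_diag = (C ** 2).sum() - (np.diag(C) ** 2).sum()  # redundancy reduction
    return float(on_diag + lambda_offdiag * off_diag)

rng = np.random.default_rng(1)
z = rng.normal(size=(32, 8))
print(cross_correlation_loss(z, z + 0.1 * rng.normal(size=z.shape)))
```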
Regarding claim 3: The combination of Csurka and Kalantidis teaches the method of claim 1. Kalantidis teaches:

1. Wherein the loss includes a difference between the first cost and the estimate for the second cost. (Kalantidis, ¶0084, Eq. 1) "The loss is composed of two terms. The first term encourages the diagonal elements to be equal to 1. This makes the learned representations invariant to applied distortions, that is, the datapoints moving along the input manifold in the neighborhood of a training vector are encouraged to share similar representations in the output space. The second term pushes off-diagonal elements towards zero, reducing the redundancy between output dimensions, which is highly desirable for dimensionality reduction. The loss can be used to learn the parameters θ of the encoder f_θ and the parameters ϕ of the projector g_ϕ [i.e., wherein the loss includes a difference between the first cost and the estimate for the second cost]."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Csurka with Kalantidis. The motivation is the same as for claim 1.

Regarding claim 6: The combination of Csurka and Kalantidis teaches the method of claim 1. Csurka teaches:

1. Providing a matrix including as its elements the reference assignment; providing a matrix including as its elements the possible assignment; and providing a matrix including as its elements the similarities between the pairs of embeddings. (Csurka, ¶0038) "The classifier models can each be in the form of a classification function, which may include the domain-specific weights which are each used to weight a decreasing function of the computed distance between the representation 50 of the test image [i.e., providing a matrix including as its elements the reference assignment] and respective domain-specific class representation (in the feature space projected by matrix W) [i.e., and providing a matrix including as its elements the similarities between the pairs of embeddings], in order to compute a probability that the image 12 should be labeled with a given class [i.e., providing a matrix including as its elements the possible assignment]."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Csurka with Kalantidis. The motivation is the same as for claim 1.

Regarding claim 7: The combination of Csurka and Kalantidis teaches the method of claim 1. Kalantidis teaches:

1. Determining the at least one parameter of the encoder depending on a solution to an optimization problem that is defined depending on the loss. (Kalantidis, ¶0010) "A similarity preservation loss and a redundancy reduction loss between the first and second augmented dimension vectors are computed over the batch of b augmented dimension vector pairs, and the parameters of the dimensionality reduction model are optimized to minimize a total loss based on the computed similarity preservation loss and the computed redundancy reduction loss."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Csurka with Kalantidis. The motivation is the same as for claim 1.
Regarding claim 8: The combination of Csurka and Kalantidis teaches the method of claim 1. Csurka teaches:

1. Providing samples of the first domain and of the second domain, wherein the providing of the input data set includes determining a first number of first samples that is a subset of the samples including samples of the first domain and determining a second number of second samples that is a subset of the samples including samples of the second domain. (Csurka, ¶0017) "The classifier model has been learned with training samples from the target domain and training samples from at least one source domain different from the target domain [i.e., providing samples of the first domain and of the second domain]. Each classifier model models the respective class as a mixture of components. The mixture of components includes a component for each of the at least one source domain and a component for the target domain [i.e., wherein the providing of the input data set includes determining a first number of first samples that is a subset of the samples including samples of the first domain and determining a second number of second samples that is a subset of the samples including samples of the second domain]."

Regarding claim 9: The combination of Csurka and Kalantidis teaches the method of claim 1. Csurka teaches:

1. Operating a technical system, wherein the operating of the technical system includes: determining with a capturing device an input; and determining an output for operating the technical system depending on the representation of the input. (Csurka, ¶0032) "With reference to FIG. 2, an exemplary image classification system 10 is illustrated in an operating environment [i.e., operating a technical system, wherein the operating of the technical system includes: determining with a capturing device an input]. The system takes as input a new sample 12 to be classified. The system 10 assigns a class label 14 or labels probabilistically to the sample 12, based on labels of a training set 16 of training samples stored in a database 18, which for each of a set of domains, contains a collection 20, 22, 24 of training samples [i.e., and determining an output for operating the technical system depending on the representation of the input]."

Csurka does not explicitly teach, but Kalantidis teaches:

1. Determining with the encoder a representation of the input. (Kalantidis, ¶0034) "Turning now to the drawings, FIG. 1 shows an example system 100 for training a dimensionality reduction model. The dimensionality reduction model in the system 100 is embodied in or a component of an encoder 102, which is configured to receive data such as can be represented by an input vector, in an input space that is a higher dimensional representation space, e.g., a D-dimensional space, and generate an output vector in an output space that is a lower-dimensional representation space, e.g., a d-dimensional space, where D is greater than d [i.e., determining with the encoder a representation of the input]."

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL JUSTIN BREENE, whose telephone number is (571) 272-6320. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael J Huntley, can be reached at 303-297-4307.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/P.J.B./
Examiner, Art Unit 2129

/MICHAEL J HUNTLEY/
Supervisory Patent Examiner, Art Unit 2129

Prosecution Timeline

Apr 04, 2023
Application Filed
Jan 24, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585959
Framework for Learning to Transfer Learn
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579427
EMBEDDING OPTIMIZATION FOR MACHINE LEARNING MODELS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12578718
MODEL CONSTRUCTION SUPPORT SYSTEM AND MODEL CONSTRUCTION SUPPORT METHOD
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572792
GOAL-SEEK ANALYSIS WITH SPATIAL-TEMPORAL DATA
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12505356
DATA ENRICHMENT ON INSULATED APPLIANCES
Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 56%
With Interview: 90% (+34.6%)
Median Time to Grant: 4y 6m
PTA Risk: Low
Based on 52 resolved cases by this examiner. Grant probability derived from career allow rate.
