Prosecution Insights
Last updated: April 19, 2026
Application No. 17/556,642

DISCOVERING DISTRIBUTION SHIFTS IN EMBEDDINGS

Non-Final OA: §103, §112
Filed: Dec 20, 2021
Examiner: WONG, WILLIAM
Art Unit: 2144
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 3 (Non-Final)
Grant Probability: 30% (At Risk)
OA Rounds: 3-4
To Grant: 4y 11m
With Interview: 57%

Examiner Intelligence

Grants only 30% of cases.
Career Allow Rate: 30% (120 granted / 397 resolved; -24.8% vs TC avg)
Interview Lift: +26.9% (strong), among resolved cases with an interview
Typical Timeline: 4y 11m avg prosecution; 33 applications currently pending
Career History: 430 total applications across all art units

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§112: 23.5% (-16.5% vs TC avg)
Tech Center average values are estimates. Based on career data from 397 resolved cases.

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to communications filed on 01/28/2026. Claim 19 has been canceled. Claims 1-18 and 20 are pending and have been examined.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/28/2026 has been entered.

Claim Objections

Claims 1-2, 6, 8-9, 13, and 20 are objected to because of the following informalities:

As per claim 1, it appears that “a level of the fitness of the embedding model” in line 25 should be, e.g., “a level of fitness of the embedding model” (note that “fitness of the embedding model” is not previously recited). This similarly applies to claims 8 and 20.

As per claim 2, it appears that the word “and” should be inserted after “space,” in line 3. This similarly applies to claims 6, 9, and 13.

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-18 and 20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 1 is amended to recite “for each of the plurality of views, which are generated by subsampling the reference embedding space, determine a distance value representing a distance between the evaluation embedding space and said each view… in response to determining that the level of fitness is below a threshold based on comparisons between the evaluation embedding space and subsampled portions of the reference embedding space, select a new embedding model for evaluation relative to the evaluation dataset, wherein the selected new embedding model operates as a replacement for the embedding model, and wherein the selected new embedding model is a candidate that is potentially more suitable for the evaluation dataset relative to the embedding model and that is potentially able to facilitate early intervention to prevent degraded performance of downstream tasks”. However, the specification does not support the above features.
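For illustration only (this sketch is not part of the Office Action or the application), the amended claim language at issue can be read as computing one distance value per subsampled view against the evaluation embedding space. The function names, the subsampling scheme, and the use of a centroid distance are all assumptions standing in for whatever metric the claim would cover:

```python
import numpy as np

def centroid_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between the centroids of two embedding sets.
    A stand-in for any set-level distance metric the claim might read on."""
    return float(np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))

def per_view_distances(reference: np.ndarray, evaluation: np.ndarray,
                       n_views: int = 5, view_frac: float = 0.5,
                       seed: int = 0) -> list[float]:
    """One reading of the amended claim: generate views by subsampling the
    reference embedding space, then determine a distance value between the
    EVALUATION embedding space and each view."""
    rng = np.random.default_rng(seed)
    n = len(reference)
    distances = []
    for _ in range(n_views):
        idx = rng.choice(n, size=max(1, int(view_frac * n)), replace=False)
        view = reference[idx]  # a subsampled view of the reference space
        distances.append(centroid_distance(evaluation, view))
    return distances
```

Under this hypothetical reading, a shifted evaluation set yields uniformly larger per-view distances than an unshifted one, which is the signal the claim ties to model fitness.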
With respect to “for each of the plurality of views, which are generated by subsampling the reference embedding space, determine a distance value representing a distance between the evaluation embedding space and said each view”, applicant points to the abstract and paragraphs 40, 42, and 59 for alleged support. However, it appears that these paragraphs are being mischaracterized. For example, paragraph 42 states that “If we are using multiple views, the distances from each of them to the reference dataset are aggregated using a suitable statistic (for example, a median). The distance value resulting is called a distance star or a distance threshold. Referring to Figure 4, the distance value between the evaluation dataset and the reference dataset is then compared with the distance threshold (act 406)”. To rephrase this, a distance value “between the evaluation dataset and the reference dataset” (not each view) is compared with a distance threshold calculated from, e.g., a median of “the distances from each of [the views] to the reference dataset”. As can be seen, the distances from each of the views are to the reference dataset, not to “the evaluation embedding space” as claimed. The specification is silent as to determining a distance between the evaluation embedding space and each view of the plurality of views. As such, the claim lacks written description.
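For contrast, the method the examiner attributes to paragraph 42 of the specification can be sketched as follows. This is illustrative only, based solely on the paragraph as quoted in the rejection; the function name and the centroid-based distance are invented stand-ins, and view-to-reference distances (not view-to-evaluation distances) set the threshold:

```python
import numpy as np

def fitness_check(reference: np.ndarray, evaluation: np.ndarray,
                  n_views: int = 5, view_frac: float = 0.5,
                  seed: int = 0) -> bool:
    """Paragraph-42-style check, as characterized in the rejection:
    distances from each view TO THE REFERENCE dataset are aggregated
    (median) into a threshold; a single evaluation-to-reference distance
    is then compared against that threshold."""
    rng = np.random.default_rng(seed)
    dist = lambda a, b: float(np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))
    n = len(reference)
    view_dists = []
    for _ in range(n_views):
        idx = rng.choice(n, size=max(1, int(view_frac * n)), replace=False)
        view_dists.append(dist(reference[idx], reference))  # view -> reference
    threshold = float(np.median(view_dists))  # the "distance star"
    # One comparison: evaluation dataset vs. reference dataset.
    return dist(evaluation, reference) <= threshold  # True = model still fit
```

The difference from the claim language is visible in the loop: each per-view distance is taken against the reference dataset itself, and the evaluation embedding space enters only once, in the final comparison.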
With respect to “in response to determining that the level of fitness is below a threshold based on comparisons between the evaluation embedding space and subsampled portions of the reference embedding space, select a new embedding model for evaluation relative to the evaluation dataset, wherein the selected new embedding model operates as a replacement for the embedding model, and wherein the selected new embedding model is a candidate that is potentially more suitable for the evaluation dataset relative to the embedding model and that is potentially able to facilitate early intervention to prevent degraded performance of downstream tasks”, applicant cites paragraphs 29, 42, 44, and 59 for alleged support. However, it appears that these paragraphs are being mischaracterized. Similar to above, the distances (or comparisons) from each of the subsampled portions are to the reference dataset (e.g., in paragraph 42), not to “the evaluation embedding space” as claimed. Paragraph 42 also merely describes determining a level of fitness of the embedding model, but it is not associated with selection of a new embedding model. While paragraph 29 describes “Such action could include obtaining and evaluating new embedding models”, it is in the context of an embedding model “still being fit for the time being”, i.e., the embedding model is not replaced by a new selected embedding model. The specification is also silent as to any “candidate”. As such, the claim lacks written description.

Independent claims 8 and 20 also recite the same limitations and therefore have the same problem. Due at least to their dependency upon claims 1 or 8, dependent claims 2-7 and 9-18 also lack written description.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-18 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

The term “early” in claim 1 is a relative term which renders the claim indefinite. The term “early” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. What is considered “early” varies depending on person, context, etc. As such, the claim is indefinite. Independent claims 8 and 20 also recite the same limitations and therefore have the same problem. Due at least to their dependency upon claims 1 or 8, dependent claims 2-7 and 9-18 are also indefinite.

The term “potentially” in claim 1 raises a question as to whether the features following the term are required by the claim. As such, the claim is indefinite. Independent claims 8 and 20 also recite the same limitations and therefore have the same problem. Due at least to their dependency upon claims 1 or 8, dependent claims 2-7 and 9-18 are also indefinite.

Independent claim 1 appears to recite a contingent limitation, e.g., based on determining that the level of fitness is below a threshold. As the Examiner construes the claim, the contingency conditions themselves are not actively recited. For example, it is not explicitly clear whether the level of fitness is determined to be below a threshold in the claim.
Accordingly, it is not clear if the contingencies are satisfied, and therefore if the claim's consequential actions to a satisfied contingency are required. See, e.g., MPEP 2111.04(II). The effect is that the claim is rendered vague and indefinite under 35 U.S.C. 112(b) and therefore rejected accordingly. Independent claims 8 and 20 also recite the same limitations and therefore have the same problem. The dependent claims 2-7 and 9-18 include the same or similar limitations as claim 1 discussed here, without curing its deficiencies, and are therefore rejected under the same rationale. Applicant can overcome the rejection by making the condition affirmative, thereby necessitating the performance of the consequential action.

Response to Arguments

Previous rejections under 35 USC 101 have been withdrawn in view of amendments.

Applicant's arguments with respect to the prior art rejections have been considered but are moot in view of new grounds of rejection. See Lin et al. (US 20120284212 A1) below. However, applicant argues in substance that the references allegedly do not teach subsampling of a reference embedding space or making an early intervention determination before model deployment. However, examiner respectfully disagrees. For example, Jin teaches “training data set (also referred to herein as the reference data)… generate a first set of model-based features based on the training data… a subset of the first set of model-based features can be extracted as second training data features” (e.g., in paragraphs 23 and 28), i.e., subsampling of a reference embedding space. Ratnesh Kumar also teaches subsampling (e.g.
in paragraphs 7, 29, and 49-50, “determine the meaningful sample(s)… determine a subset of the samples from each batch that are to be used by an optimizer to increase the ability of the DNN to learn effectively and converge more quickly to an acceptable or optimal accuracy… embeddings 110 computed by the DNN 108 for a batch of the image data 102 may be sampled using a batch sampling variant during batch sampling… sample from different views… various batch sampling variants may be used”).

In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., making an early intervention determination before model deployment) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The claims do not recite any context for what “early” means, particularly not that it is “before model deployment”. See also the issues with respect to 35 USC 112 above. Since models can be automatically replaced (e.g., see newly cited Lin et al. (US 20120284212 A1) below), the combination teaches “early intervention”.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 6-8, 13-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jin et al. (US 20200311557 A1) in view of Luo (US 20060110046 A1), Ratnesh Kumar et al. (US 20220392234 A1), and Lin et al. (US 20120284212 A1).

As per independent claim 1, Jin teaches a computing system that evaluates a fit of an embedding model for an evaluation dataset (e.g.
in paragraph 3, “a target data acceptability component that determines whether application of the target neural network model to the target data set will generate results with an acceptable level of accuracy based on the degree of correspondence”), said computing system comprising: one or more processors and hardware storage devices that store instructions that are executable by the one or more processors to cause the computing system to (e.g. in paragraph 3, “a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory”): access a reference embedding space generated by applying an embedding model to a reference dataset (e.g. in paragraphs 23, 28, 42, 50, and 56, “training data set (also referred to herein as the reference data)… subset of layers of the DNN model (e.g., one or more layers of the DNN model [i.e. embedding model] excluding the final output layer) can be applied to the training data set to generate a first set of model-based features based on the training data set……extracted as second training data features… first training data features 304 (e.g., feature vectors) can be extracted from the training data set 124”, i.e. reference embedding space); obtain a plurality of views of the reference embedding space (e.g. in paragraphs 23, 28, 42, 50, and 56, “generate a first set of model-based features based on the training data… a subset of the first set of model-based features can be extracted as second training data features… Although the visualization exemplified in FIG. 2 depicts…two nodes Z are used to represent the feature vectors from eight input images, it should be appreciated that the dimensionality of the input data set and the resulting extracted feature vectors can vary… first training data features 304 (e.g., feature vectors) can be extracted from the training data set”, i.e. 
views, and figure 2); obtain an evaluation embedding space generated by applying the embedding model to an evaluation dataset (e.g. in paragraphs 28, 42, 50, and 57, “The same subset of layers of the DNN model can also be applied to the target data set to generate a second set of model-based features. This second set of model-based features or a subset of the second set of model-based features can be extracted as second target data feature… first target data features 308 (e.g., feature vectors) can also be extracted from the target data set 126”, i.e. evaluation embedding space, used to determine acceptability as seen below); for each of the plurality of views, determine a distance value representing a distance between the evaluation embedding space and said each view (e.g. in paragraphs 28, 38, 51 and 58-59, “generate a first set of model-based features based on the training data… a subset of the first set of model-based features can be extracted as second training data features [i.e. view]… set of features or feature vectors extracted from the training data set 124 are referred to herein as first training data features or first training data feature vectors [i.e. 
view]… first degree of correspondence can then be determined between the first training data features 304 and the first target data features 308…employ one or more statistical and/or machine learning based approaches (e.g.,…Mahalanobis distance analysis on multi-dimensions, t-SNE analysis on lower dimensions to get similarity distances…) to…determine a degree of correspondence between the first training data features 304 and the first target data features… determine a degree of correspondence between the second training data features/feature vectors and the second target data features/feature vectors… distance measurement”); compare each distance value with a distance threshold to enable early detection of model fitness and based on the comparison, determine a level of the fitness of the embedding model for the evaluation dataset (e.g. in paragraphs 28-29 52-53, 58, and 62, “within the scope of the DNN model… outside the scope of DNN model… determine whether a measurement value representative of the degree of correspondence meets an acceptability criterion (e.g., a minimum threshold… determined to be acceptable… determined to be unacceptable… in some implementations, the confidence score can be a binary value, representative of acceptable (e.g., within the scope of the training data set 124) or unacceptable (e.g., outside the scope of the training data set 124). 
In another embodiment, the confidence score can correspond to the degree of correspondence, such that the higher the degree of correspondence, the higher the confidence score… results generated based on application of the DNN model to the target data set can be associated with a high degree of accuracy (e.g., in accordance with a predefined accuracy scale) that reflects the first and/or second degree of correspondence”); and in response to determining that the level of fitness is below a threshold based on comparisons between the evaluation embedding space and subsampled portions of the reference embedding space (e.g. in paragraphs 28, 52, 59, and 61-62, “generate a first set of model-based features based on the training data… a subset of the first set of model-based features can be extracted as second training data features [i.e. subsampled portions]… determine whether a measurement value representative of the degree of correspondence meets an acceptability criterion (e.g., a minimum threshold… determined to be acceptable… determine a degree of correspondence between the second training data features/feature vectors and the second target data features/feature vectors… based a determination that the target data set 126 is outside the scope of the target neural network model 128 and/or association of the target data set 126 with an unacceptable confidence score (e.g., relative to a minimum confidence score), the model acceptability component 118 can prevent application of the target neural network model 128 to the target data set 126”), select a new embedding model for evaluation relative to the evaluation dataset, wherein the selected new embedding model is a candidate that is potentially more suitable for the evaluation dataset relative to the embedding model and that is potentially able to facilitate early intervention to prevent degraded performance of downstream tasks (e.g. 
in paragraphs 21, 55, and 62, “evaluating and defining the scope of data-driven deep learning models… intermediary layers generate output parameters/features that are fed as inputs to subsequent downstream layers… the target data acceptability component 108 can authorize the target data set 126 for application to the DNN model. In addition, results generated based on application of the DNN model to the target data set can be associated with a high degree of accuracy (e.g., in accordance with a predefined accuracy scale) that reflects the first and/or second degree of correspondence”, i.e. model among models that is potentially more suitable and has higher accuracy is used, i.e. early intervention to prevent degraded performance), but does not specifically teach determine a distance threshold for a distance metric using the plurality of views of the reference embedding space, the plurality of views which are generated by subsampling the reference embedding space and wherein the embedding model is configured to structure the reference embedding space to have a first number of dimensions, wherein the embedding model is further configured to structure the evaluation embedding space to have a second number of dimensions, the second number of dimensions being the same as the first number of dimensions such that the reference embedding space and the evaluation embedding space are structured, by the embedding model, to have a same dimension size, and wherein dimensions of the evaluation embedding space are structured, by the embedding model, to correspond to dimensions of the reference embedding space and wherein the selected new embedding model operates as a replacement for the embedding model. However, Jin teaches reference information including the plurality of views of the reference embedding space (e.g. 
in paragraphs 23, 28, 42, 50, and 56, “generate a first set of model-based features based on the training data… extracted as second training data features… Although the visualization exemplified in FIG. 2 depicts…two nodes Z are used to represent the feature vectors from eight input images, it should be appreciated that the dimensionality of the input data set and the resulting extracted feature vectors can vary… first training data features 304 (e.g., feature vectors) can be extracted from the training data set” and figure 2) and Luo teaches determine a distance threshold for a distance metric using reference information (e.g. in paragraphs 23 and 38-39, “a distance distribution of the training [i.e. reference] dataset for the study shape and illustrate the way to find a threshold… the classification problem can be simplified as determining an appropriate threshold to obtain a "good" discrimination of distances”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jin to include the teachings of Luo because one of ordinary skill in the art would have recognized the benefit of minimizing error, but does not specifically teach the plurality of views which are generated by subsampling the reference embedding space, wherein the embedding model is configured to structure the reference embedding space to have a first number of dimensions, wherein the embedding model is further configured to structure the evaluation embedding space to have a second number of dimensions, the second number of dimensions being the same as the first number of dimensions such that the reference embedding space and the evaluation embedding space are structured, by the embedding model, to have a same dimension size, and wherein dimensions of the evaluation embedding space are structured, by the embedding model, to correspond to dimensions of the reference embedding space and wherein the selected new 
embedding model operates as a replacement for the embedding model. However, Ratnesh Kumar teaches a plurality of views which are generated by subsampling a reference embedding space (e.g. in paragraphs 7, 29, and 49-50, “determine the meaningful sample(s)… determine a subset of the samples from each batch that are to be used by an optimizer to increase the ability of the DNN to learn effectively and converge more quickly to an acceptable or optimal accuracy… embeddings 110 computed by the DNN 108 for a batch of the image data 102 may be sampled using a batch sampling variant during batch sampling… sample from different views… various batch sampling variants may be used”) and an embedding model being configured to structure an embedding space to have a first number of dimensions, wherein the embedding model is further configured to structure another embedding space to have a second number of dimensions, the second number of dimensions being the same as the first number of dimensions such that the embedding space and the another embedding space are structured, by the embedding model, to have a same dimension size, and wherein dimensions of the another embedding space are structured, by the embedding model, to correspond to dimensions of the embedding space (e.g. in paragraphs 24, 38-39, 46-47, and 61, “an instantiation of the DNN… a width, W, a height, H, and color channels, C… and… a batch size, B… DNN 108 may be trained to compute the embeddings 110 with an embedding dimension… the DNN 108 may be trained to compute the embeddings 110 with an embedding dimension of 128 units while producing accurate and efficient results… generate the embedding 110A… generate the embedding 110B”, i.e. embedding model comprising DNN 108 generates multiple embedding spaces with corresponding dimensions at size of 128). 
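For illustration only (not part of the Office Action), the “same dimension size” limitation mapped to Ratnesh Kumar above amounts to a single embedding model structuring both the reference and evaluation embedding spaces with an identical number of corresponding dimensions (e.g., 128 units). The random-projection model below is an invented stand-in for the DNN-based embedding models discussed in the references:

```python
import numpy as np

EMBED_DIM = 128  # assumed embedding dimension, per the cited "128 units"
rng = np.random.default_rng(0)

def make_embedding_model(input_dim: int, embed_dim: int = EMBED_DIM):
    """Hypothetical embedding model: a fixed linear projection.
    Because one model is reused, every dataset it embeds lands in a
    space with the same, corresponding dimensions."""
    w = rng.standard_normal((input_dim, embed_dim))
    return lambda data: data @ w

model = make_embedding_model(input_dim=16)
reference_space = model(rng.standard_normal((100, 16)))   # reference dataset
evaluation_space = model(rng.standard_normal((40, 16)))   # evaluation dataset

# Both spaces are structured by the same model to have the same
# number of dimensions, dimension-for-dimension.
assert reference_space.shape[1] == evaluation_space.shape[1] == EMBED_DIM
```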
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Ratnesh Kumar because one of ordinary skill in the art would have recognized the benefit of producing accurate and efficient results, but the combination does not specifically teach wherein the selected new embedding model operates as a replacement for the embedding model.

However, Lin teaches a selected new model operating as a replacement for a model (e.g. in paragraphs 108-109, “a trained predictive model can be selected to provide to the client computing system 202. For example, the new accuracy scores associated with the available trained predictive models can be compared, and the most accurate trained predictive model selected… a different trained predictive model is selected as being the most accurate… Changing the trained predictive model that is accessible by the client computing system 202 can be invisible to the client computing system”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Lin because one of ordinary skill in the art would have recognized the benefit of seamlessly improving accuracy.

As per claim 6, the rejection of claim 1 is incorporated and the combination further teaches a first view of the plurality of views of the reference embedding space being a first subsample of the reference embedding space, a second view of the plurality of views of the reference embedding space representing a second subsample of the reference embedding space (e.g.
Jin, in paragraph 28, “generate a first set of model-based features based on the training data… a subset of the first set of model-based features can be extracted as second training data features”; Ratnesh Kumar, in paragraphs 7, 29, and 49-50, “determine the meaningful sample(s)… determine a subset of the samples from each batch that are to be used by an optimizer to increase the ability of the DNN to learn effectively and converge more quickly to an acceptable or optimal accuracy… embeddings 110 computed by the DNN 108 for a batch of the image data 102 may be sampled using a batch sampling variant during batch sampling… sample from different views… various batch sampling variants may be used”).

As per claim 7, the rejection of claim 1 is incorporated and the combination further teaches the reference dataset comprising a training dataset (e.g. Jin, in paragraph 23, “training data set (also referred to herein as the reference data)”).

Claims 8 and 13-14 are the method claims corresponding to system claims 1 and 6-7, and are rejected for the same reasons set forth.

As per claim 15, the rejection of claim 8 is incorporated and the combination further teaches the level of fitness comprising whether or not the embedding model is acceptable for use with the evaluation dataset (e.g. Jin, in paragraphs 4, 28-29, and 52-53, “determine whether the application of the target neural network model to the target data set will generate the results with the acceptable level of accuracy… within the scope of the DNN model… outside the scope of DNN model”).

As per claim 16, the rejection of claim 8 is incorporated and the combination further teaches wherein obtaining the evaluation embedding space is performed by the computing system applying the embedding model to the evaluation dataset (e.g. Jin, in paragraphs 28, 42, 50, and 57, “The same subset of layers of the DNN model can also be applied to the target data set to generate a second set of model-based features.
This second set of model-based features or a subset of the second set of model-based features can be extracted as second target data feature… first target data features 308 (e.g., feature vectors) can also be extracted from the target data set 126”).

As per claim 17, the rejection of claim 8 is incorporated and the combination further teaches wherein obtaining the reference embedding space is performed by the computing system applying the embedding model to the reference dataset (e.g. Jin, in paragraphs 23, 28, 42, 50, and 56, “training data set (also referred to herein as the reference data)… subset of layers of the DNN model (e.g., one or more layers of the DNN model [i.e. embedding model] excluding the final output layer) can be applied to the training data set to generate a first set of model-based features based on the training data set”).

As per claim 18, the rejection of claim 8 is incorporated and the combination further teaches the reference embedding space and the evaluation embedding space each having greater than three dimensions (e.g. Jin, in paragraphs 23, 28 and 42, “a subset of layers of the DNN model (e.g., one or more layers of the DNN model excluding the final output layer) can be applied to the training data set to generate a first set of model-based features based on the training data set. This first set of model-based features or a subset of the first set of model-based features can be extracted as second training data features. The same subset of layers of the DNN model can also be applied to the target data set to generate a second set of model-based features. This second set of model-based features or a subset of the second set of model-based features can be extracted as second target data features… Although the visualization exemplified in FIG.
2 depicts…two nodes Z are used to represent the feature vectors from eight input images, it should be appreciated that the dimensionality of the input data set and the resulting extracted feature vectors can vary” and figure 2 showing layers with more than 3 nodes; Ratnesh Kumar, in paragraphs 38 and 47, “a width, W, a height, H, and color channels, C… and… a batch size, B… embedding dimension of 128 units”). Claim 20 is the product claim corresponding to system claim 1, and is rejected under the same reasons set forth and the combination further teaches one or more hardware storage devices that store instructions that are executable by the one or more processors (e.g. Jin, in paragraph 3, “a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory”). Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Jin et al. (US 20200311557 A1) in view of Luo (US 20060110046 A1), Ratnesh Kumar et al. (US 20220392234 A1), and Lin et al. (US 20120284212 A1) and further in view of Tan et al. (US 20200227030 A1). As per claim 2, the rejection of claim 1 is incorporated, but the combination does not specifically teach, as a whole, a first view of the plurality of views of the reference embedding space being a sub sample of or an entirety of the reference embedding space, a second view of the plurality of views of the reference embedding space representing a perturbation of the first view of the reference embedding space. However, Tan teaches a first view of a plurality of views of a reference embedding space being a sub sample of or an entirety of the reference embedding space and a second view of the plurality of views of the reference embedding space representing a perturbation of the first view of the reference embedding space (e.g. 
in paragraphs 25 and 38, “training data… perturbation… the original intent model (158) is augmented with synthetic data and subject to adversarial training [i.e. perturbation]… One or more thresholds may be applied to narrow the set of synthetic data. For example, in one embodiment, a first threshold is applied with respect to sampling synthetic data, and a second threshold is applied to a second subset within the sampling of the applied first threshold… optimizing the worst synthetic data within the sample”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Tan because one of ordinary skill in the art would have recognized the benefit of improving the functionality of a model.

Claim 9 is the method claim corresponding to system claim 2, and is rejected for the same reasons set forth.

Claims 3 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Jin et al. (US 20200311557 A1) in view of Luo (US 20060110046 A1), Ratnesh Kumar et al. (US 20220392234 A1), and Lin et al. (US 20120284212 A1), and further in view of Gan et al. (US 20210089872 A1).

As per claim 3, the rejection of claim 1 is incorporated, but the combination does not specifically teach the distance value being a distribution shift value between the reference embedding space and the evaluation embedding space. However, Gan teaches a distance value being a distribution shift value between a reference embedding space and an evaluation embedding space (e.g. in paragraphs 53 and 126-127, “assumption that the data distributions of training datasets and deployment (e.g. testing) datasets are the same is not always valid… variability between training and operational (or test) data distributions [i.e. embedding spaces]… This shift in data distribution between training domains and testing/deployment domains is sometimes referred to as “domain shift”… Wasserstein Distance is used as a metric to compute the distances between two distributions”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Gan because one of ordinary skill in the art would have recognized the benefit of determining variability between spaces.

Claim 10 is the method claim corresponding to system claim 3, and is rejected for the same reasons set forth.

Claims 4-5 and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Jin et al. (US 20200311557 A1) in view of Luo (US 20060110046 A1), Ratnesh Kumar et al. (US 20220392234 A1), and Lin et al. (US 20120284212 A1), and further in view of Banville et al. (US 20100014741 A1).

As per claim 4, the rejection of claim 1 is incorporated and the combination further teaches wherein determining the distance threshold is based on computing a value of an aggregate statistic of the distance metric for the plurality of views of the reference embedding space (e.g. Jin, in paragraphs 23, 28, 42, 50, and 56, “generate a first set of model-based features based on the training data…extracted as second training data features… Although the visualization exemplified in FIG. 2 depicts…two nodes Z are used to represent the feature vectors from eight input images, it should be appreciated that the dimensionality of the input data set and the resulting extracted feature vectors can vary… first training data features 304 (e.g., feature vectors) can be extracted from the training data set” and figure 2; Luo, in paragraphs 38-39, “Combining the distances together forms a distribution of the similarity distances of the training dataset… determining an appropriate threshold”), but does not specifically teach generated using a highest perturbation level that satisfies a user-specified performance criteria. However, Banville teaches generating a space using a highest perturbation level that satisfies a user-specified performance criteria (e.g. in paragraphs 23 and 61, “a gate boundary (and/or perturbations thereof) can be defined based on one or more limits… limits can be referred to as a boundary. In some embodiments, processing at the gating module can be performed, for example, based on one or more conditions (e.g., threshold values within a condition) and…based on one or more user preferences (e.g., a customizable user preference)”, i.e. highest). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Banville because one of ordinary skill in the art would have recognized the benefit of facilitating user preferences.

As per claim 5, the rejection of claim 4 is incorporated and the combination further teaches wherein the user-specified performance criteria is a value of a function that decreases as the distance metric increases (e.g.
Jin, in paragraphs 51-52, “Mahalanobis distance analysis on multi-dimensions, t-SNE analysis on lower dimensions to get similarity distances… determine whether a measurement value representative of the degree of correspondence meets an acceptability criterion (e.g., a minimum threshold)”; Luo, in paragraphs 38-39, “similarity… in general, are small and close to each other, which would be anticipated since they represent the same type object and appear relatively similar to the average shape. In contrast, the distances of shapes in the dissimilar shape group (84) present a large variation”, i.e. the value of the similarity function decreases as the distance increases; Banville, in paragraphs 23 and 61, “based on one or more conditions (e.g., threshold values within a condition) and…based on one or more user preferences”).

Claims 11-12 are the method claims corresponding to system claims 4-5, and are rejected for the same reasons set forth.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For example, Makadia et al. (US 20120109858 A1) teaches “acceptability of the training of the model may be evaluated by seeing how close the trained model comes to providing the correct ranking of the resources for the annotation pair… approach in the Weston paper involves training on an "embedding space" representation of arbitrary dimension, where distance between two items in the space denotes their similarity” (e.g. in paragraphs 42 and 51).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM WONG, whose telephone number is (571) 270-1399. The examiner can normally be reached Monday-Friday, 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TAMARA KYLE, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/W.W/
Examiner, Art Unit 2144
02/21/2026

/TAMARA T KYLE/
Supervisory Patent Examiner, Art Unit 2144
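The claim limitations at issue above describe a recognizable pipeline: apply the embedding model to reference and evaluation data, build a plurality of views of the reference embedding space (subsamples and perturbations of those subsamples, per claim 2), set a distance threshold from an aggregate statistic of a distance metric over those views (claim 4), and treat the reference-to-evaluation distance as a distribution shift value (claim 3). The sketch below is not the applicant's or any cited reference's implementation; it is a minimal, generic Python illustration of that kind of pipeline, with every function name and parameter invented for the example, and a per-dimension quantile comparison standing in as a rough 1-D Wasserstein-style metric.

```python
import numpy as np

rng = np.random.default_rng(0)

def space_distance(a, b, grid=np.linspace(0.005, 0.995, 100)):
    """Distance metric between two embedding spaces: mean absolute difference
    of per-dimension quantile functions (a rough 1-D Wasserstein stand-in)."""
    return float(np.mean([
        np.mean(np.abs(np.quantile(a[:, d], grid) - np.quantile(b[:, d], grid)))
        for d in range(a.shape[1])
    ]))

def shift_detected(reference, evaluation, n_views=20, noise=0.1):
    """Distance threshold = aggregate statistic (mean + 2 * std) of the distance
    metric over perturbed, sub-sampled views of the reference embedding space."""
    n, dim = reference.shape
    view_dists = []
    for _ in range(n_views):
        idx = rng.choice(n, size=n // 8, replace=False)                # sub-sample view
        view = reference[idx] + rng.normal(0.0, noise, (n // 8, dim))  # perturbation of that view
        view_dists.append(space_distance(reference, view))
    threshold = np.mean(view_dists) + 2 * np.std(view_dists)
    return bool(space_distance(reference, evaluation) > threshold)

# Toy embedding spaces: 500 points in 8 dimensions (more than three, as in claim 18).
reference = rng.normal(0.0, 1.0, (500, 8))   # reference embedding space
in_dist   = rng.normal(0.0, 1.0, (500, 8))   # evaluation set from the same distribution
shifted   = rng.normal(1.5, 1.0, (500, 8))   # mean-shifted evaluation embeddings

print(shift_detected(reference, in_dist))    # same distribution: no shift flagged
print(shift_detected(reference, shifted))    # shifted distribution: shift flagged
```

The quantile-based metric and the mean-plus-two-standard-deviations threshold are arbitrary choices for illustration; the claims as characterized in this action are agnostic to the particular distance metric and aggregate statistic used.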

Prosecution Timeline

Dec 20, 2021
Application Filed
Mar 22, 2025
Non-Final Rejection — §103, §112
Jul 01, 2025
Response Filed
Oct 17, 2025
Final Rejection — §103, §112
Nov 17, 2025
Interview Requested
Nov 26, 2025
Applicant Interview (Telephonic)
Nov 29, 2025
Examiner Interview Summary
Jan 28, 2026
Request for Continued Examination
Feb 06, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572252
CONTROLLING A 2D SCREEN INTERFACE APPLICATION IN A MIXED REALITY APPLICATION
2y 5m to grant Granted Mar 10, 2026
Patent 12530707
CUSTOMER EFFORT EVALUATION IN A CONTACT CENTER SYSTEM
2y 5m to grant Granted Jan 20, 2026
Patent 12511846
XR DEVICE-BASED TOOL FOR CROSS-PLATFORM CONTENT CREATION AND DISPLAY
2y 5m to grant Granted Dec 30, 2025
Patent 12504944
METHODS AND USER INTERFACES FOR SHARING AUDIO
2y 5m to grant Granted Dec 23, 2025
Patent 12423561
METHOD AND APPARATUS FOR KEEPING STATISTICAL INFERENCE ACCURACY WITH 8-BIT WINOGRAD CONVOLUTION
2y 5m to grant Granted Sep 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
30%
Grant Probability
57%
With Interview (+26.9%)
4y 11m
Median Time to Grant
High
PTA Risk
Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
