Prosecution Insights
Last updated: April 19, 2026
Application No. 17/935,747

TRANSFER KNOWLEDGE FROM AUXILIARY DATA FOR MORE INCLUSIVE MACHINE LEARNING MODELS

Non-Final OA: §101, §102, §103
Filed: Sep 27, 2022
Examiner: WONG, WILLIAM
Art Unit: 2144
Tech Center: 2100 (Computer Architecture & Software)
Assignee: AT&T Intellectual Property I, L.P.
OA Round: 1 (Non-Final)
Grant Probability: 30% (At Risk)
OA Rounds: 1-2
To Grant: 4y 11m
With Interview: 57%

Examiner Intelligence

Career Allow Rate: 30% (120 granted / 397 resolved; -24.8% vs TC avg)
Interview Lift: +26.9% on resolved cases with interview
Avg Prosecution: 4y 11m; 33 applications currently pending
Career History: 430 total applications across all art units

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§112: 23.5% (-16.5% vs TC avg)
Comparisons are against the Tech Center average estimate • Based on career data from 397 resolved cases

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to communications filed on 09/27/2022. Claims 1-20 are pending and have been examined.

Information Disclosure Statement

The information disclosure statement (IDS) submitted was filed on 09/27/2022. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claims 10 and 16 are objected to because of the following informalities: As per claim 10, it appears that “comprising” in line 4 should be replaced with “the operations comprising” for clarity. This similarly applies to claim 16. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a method, system, and medium comprising generating, generating, and training. The limitations “generating… generating…” as recited in claim 1 are each a process, under the broadest reasonable interpretation, covering performance of the limitations in the mind or by pen and paper (see Berkheimer v. HP, Inc., 881 F.3d 1360, 1366, 125 USPQ2d 1649 (Fed. Cir. 2018)) but for the recitation of generic computer components.
That is, other than reciting “by a device comprising a processor” and “used to train a first machine learning model”, the limitation “generating…a common feature space comprising first data features, wherein the first data features are present in training data” in the context of the claim encompasses the user making observations and evaluations. Other than reciting “by the device”, the limitation “generating, by the device, a combined learned feature representation, the combined learned feature representation being representative of the first data features of the common feature space and second data features that are unique to the training data” in the context of the claim encompasses the user making evaluations. If a claimed limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “mental processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites additional elements. The claim recites “by a device comprising a processor” and “used to train a first machine learning model”. The elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer component (e.g. see MPEP 2106.05(f)) and/or amount to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)). The limitation “training, by the device, a second machine learning model based on the combined learned feature representation” also amounts to no more than mere instructions to apply the exception using a generic computer component (e.g.
See MPEP 2106.05(f)) and/or amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are no more than a generic computer component and/or field of use. Therefore, the claims are not patent eligible.

Claims 10 and 16 also recite similar claim language as claim 1, and thus have the same issues. It is noted, with respect to claim 10, that the claim recites “a processor; and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations” to perform the method. The elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer component (e.g. see MPEP 2106.05(f)). It is noted, with respect to claim 16, that the claim further recites “non-transitory machine-readable medium, comprising executable instructions that, when executed by a processor of first network equipment, facilitate performance of operations”. The elements are also recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer component (e.g. see MPEP 2106.05(f)). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea and are not sufficient to amount to significantly more than the judicial exception.
Regarding claim 2, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim further describes the combined learned feature representation, which is part of the mental steps (encompassing a user making evaluations), and recites “training, by the device, a third machine learning model…generating the combined learned feature representation using the third machine learning model”, which amounts to no more than mere instructions to apply the exception using a generic computer component (e.g. See MPEP 2106.05(f)) and/or amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)). This similarly applies to claims 11 and 17.

Regarding claim 3, the claim does not include any additional elements that are sufficient to amount to significantly more than the judicial exception. For example, the claim further describes the generating, which is part of the mental steps (encompassing a user making evaluations), and recites “applying the combined data features to the third machine learning model…”, which amounts to no more than mere instructions to apply the exception using a generic computer component (e.g. See MPEP 2106.05(f)) and/or amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)). This similarly applies to claims 12 and 18.

Regarding claim 4, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception.
For example, the claim further recites “classifying…the auxiliary data…resulting in first labels being applied to the auxiliary data”, which is a mental step (encompassing a user making evaluations and/or by pen and paper) and “applying…second labels to the auxiliary data by altering at least one of the first labels”, which is a mental step (encompassing a user making evaluations and/or by pen and paper). While “via the third machine learning model” and “by the device” are noted, each amounts to no more than mere instructions to apply the exception using a generic computer component (e.g. See MPEP 2106.05(f)) and/or amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)). This similarly applies to claims 13 and 19.

Regarding claim 5, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim further recites “removing,…from the combined data features, a transformed subset of the training data features and the auxiliary data features”, which is a mental step (encompassing a user making evaluations and/or by pen and paper). While “the training of the second machine learning model” and “by the device” are noted, each amounts to no more than mere instructions to apply the exception using a generic computer component (e.g. See MPEP 2106.05(f)) and/or amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)).

Regarding claim 6, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception.
For example, the claim merely further describes the generating comprising adding and removing, which are mental steps (encompassing a user making evaluations and/or by pen and paper). This similarly applies to claims 14 and 20.

Regarding claim 7, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim describes the threshold amount and the removing, which is part of the mental steps (encompassing a user making evaluations and/or by pen and paper) and further includes “until a change in accuracy of the first machine learning model…”, which amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)). This similarly applies to claim 15.

Regarding claim 8, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim further recites “classifying, …based on input medical data, a medical condition associated with the input medical data”, which is a mental step (encompassing a user making evaluations and/or by pen and paper). While “the training of the second machine learning model” and “by the second machine learning model” are noted, this amounts to no more than mere instructions to apply the exception using a generic computer component (e.g. See MPEP 2106.05(f)) and/or amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)).

Regarding claim 9, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception.
For example, the claim further recites “determining…locations for respective virtual objects”, which is a mental step (encompassing a user making evaluations and/or by pen and paper). While “the training of the second machine learning model”, “via the second machine learning model”, and “with an augmented reality application” are noted, these are no more than mere instructions to apply the exception using a generic computer component (e.g. See MPEP 2106.05(f)) and/or amount to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 10-13, and 16-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Narlikar et al. (US 20210406601 A1).

As per independent claim 1, Narlikar teaches a method, comprising: generating, by a device comprising a processor (e.g. in paragraph 111, “at least one processor”), a common feature space comprising first data features (e.g.
in paragraphs 40, 105, and 107, “a feature space that is common to all media formats… Features shared by all data modalities are merged… An embedding using existing data modalities can be learned”), wherein the first data features are present in training data used to train a first machine learning model (e.g. in paragraphs 50, 73, 75, 104, and 107, “classifier resources (e.g., training data… fully supervised data of existing modalities… models that perform these tasks for existing data modalities may be stored in one or more data repositories (e.g., as the feature classifiers 122 in the database 115… An embedding using existing data modalities can be learned… model A can be trained over existing data modalities”), and wherein the first data features are present in auxiliary data that are independent of the training data (e.g. in paragraph 50, “any new [auxiliary] modality that may have resources or features common to data for which classifiers already exist”); generating, by the device, a combined learned feature representation, the combined learned feature representation being representative of the first data features of the common feature space and second data features that are unique to the training data (e.g. in paragraphs 50, 71-73, and 106, “any…modality that may have resources or features common to data for which classifiers already exist… embedding for each data modality can be learned”, i.e. each modality includes features representing a combination of common features and its unique features); and training, by the device, a second machine learning model based on the combined learned feature representation (e.g. in paragraphs 50, 73, and 104-107, “classifiers… can be trained over existing data modalities”, “independent models can be created for each data modality”, “model outputs can be concatenated to create a new feature embedding.
This embedding can be used as input to a final model for training”, and/or “compute the model outputs prior to the final prediction (e.g., softmax) layer of both A and B, which we denote X and Y, respectively. We then train a “projection layer” P to match Y with X. At inference time, we pass incoming data points through B and the projection layer P”).

As per claim 2, the rejection of claim 1 is incorporated and Narlikar further teaches wherein the combined learned feature representation is further representative of third data features that are unique to the auxiliary data, and wherein the method further comprises: training, by the device, a third machine learning model with at least a portion of the first data features of the common feature space, wherein the generating of the combined learned feature representation comprises generating the combined learned feature representation using the third machine learning model (e.g. in paragraphs 73, 98, and 107, “both the weakly supervised data in the new [auxiliary] modality and fully supervised data of existing modalities… the features generated in the first step can be combined in one or more ways to construct a vector feature representation, such as: by concatenating the generated common features directly, by concatenating learned embeddings for each modality, or by projecting the new data modality to an embedding space learned from existing labeled modalities… we pass points of the new modality to B and simultaneously pass the shared features between the existing and new modalities as input”).
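The independent-claim flow at issue (build a common feature space of shared features, form a combined learned feature representation from the common features plus training-unique features, then train a second model on that representation) can be sketched in miniature. This is a hedged illustration only: the feature names and data are invented, and per-feature means stand in for the learned models and embeddings that the application and Narlikar actually describe.

```python
def common_feature_space(training_features, auxiliary_features):
    """Step 1: keep only features present in BOTH the training data
    and the independent auxiliary data (order-stable)."""
    aux = set(auxiliary_features)
    return [f for f in training_features if f in aux]

def combined_representation(common, training_features):
    """Step 2: combined feature representation = common features
    plus the features unique to the training data."""
    shared = set(common)
    return common + [f for f in training_features if f not in shared]

def train_second_model(rows, representation):
    """Step 3: 'train' a second model on the combined representation.
    Per-feature means are a deliberately trivial stand-in for a real learner."""
    return {f: sum(r[f] for r in rows) / len(rows) for f in representation}

# Hypothetical features: "age"/"heart_rate" are shared; "notes_len" is training-only.
training_features = ["age", "heart_rate", "notes_len"]
auxiliary_features = ["age", "heart_rate", "image_width"]

common = common_feature_space(training_features, auxiliary_features)
rep = combined_representation(common, training_features)
rows = [{"age": 30, "heart_rate": 60, "notes_len": 10},
        {"age": 50, "heart_rate": 80, "notes_len": 30}]
model = train_second_model(rows, rep)
```

In Narlikar's terms, the concatenation in step 2 plays the role of merging learned per-modality embeddings before the final model is trained.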
As per claim 3, the rejection of claim 2 is incorporated and Narlikar further teaches wherein the generating of the combined learned feature representation comprises: combining/augmenting training data features, represented in the training data, with auxiliary data features, represented in the auxiliary data and not represented in the training data, resulting in combined data features/an augmented feature set and applying the combined data features/augmented feature set to the third machine learning model, resulting in the combined learned feature representation (e.g. in paragraphs 73, 98, and 107, “both the weakly supervised data in the new [auxiliary] modality and fully supervised data of existing modalities… the features generated in the first step can be combined in one or more ways to construct a vector feature representation, such as: by concatenating the generated common features directly, by concatenating learned embeddings for each modality, or by projecting the new data modality to an embedding space learned from existing labeled modalities… we pass points of the new modality to B and simultaneously pass the shared features between the existing and new modalities as input”).

As per claim 4, the rejection of claim 3 is incorporated and Narlikar further teaches classifying, by the device, the auxiliary data via the third machine learning model, resulting in first labels being applied to the auxiliary data via the third machine learning model (e.g.
in paragraphs 39 and 107, “automatically correlate and label previously unlabeled content across complex formats, which can be used to train a classification model for content of a new or un-modeled content modality or format… unlabeled data point that shares edges with labeled data points can be assigned a weighted combination of its neighbor's labels… project data points from the new modality to the embedding space for classification”) and applying, by the device, second/pseudo labels to the auxiliary data by altering at least one of the first labels (e.g. in paragraph 101, “unlabeled data point that shares edges with labeled data points can be assigned a weighted combination of its neighbor's labels. The algorithm can iteratively update [i.e. alter] this assignment until convergence”).

Claims 10-13 are the system claims corresponding to method claims 1-4 and are rejected under the same reasons set forth, and Narlikar further teaches a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations (e.g. in paragraph 111, “at least one processor and a memory, e.g., a processing circuit. The memory can store processor-executable instructions that, when executed by processor, cause the processor to perform one or more of the operations”).

Claims 16-19 are the medium claims corresponding to method claims 1-4 and are rejected under the same reasons set forth, and Narlikar further teaches a non-transitory machine-readable medium, comprising executable instructions that, when executed by a processor of first network equipment, facilitate performance of operations (e.g. in paragraphs 111-112, “at least one processor and a memory, e.g., a processing circuit. The memory can store processor-executable instructions that, when executed by processor, cause the processor to perform one or more of the operations… can include computer networks”).
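The pseudo-labeling passage relied on for claim 4 (an unlabeled point assigned a weighted combination of its neighbors' labels, iteratively updated until convergence) is essentially label propagation on a graph. A minimal sketch under assumed data: the four-node graph, seed labels, and uniform edge weights are invented for illustration, and a fixed iteration count replaces a real convergence test.

```python
def propagate_labels(edges, seeds, iterations=50):
    """Iteratively assign each unlabeled node the uniform-weighted
    average of its neighbors' scores; seed labels stay fixed.
    Repeated updates play the role of 'altering' earlier pseudo-labels."""
    scores = {n: seeds.get(n, 0.5) for n in edges}  # 0.5 = unknown
    for _ in range(iterations):
        for node, nbrs in edges.items():
            if node in seeds or not nbrs:  # keep ground-truth seeds fixed
                continue
            scores[node] = sum(scores[m] for m in nbrs) / len(nbrs)
    return scores

# Hypothetical chain graph a--b--c--d with a labeled 1.0 and d labeled 0.0.
edges = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
seeds = {"a": 1.0, "d": 0.0}
scores = propagate_labels(edges, seeds)
pseudo = {n: (1 if s >= 0.5 else 0) for n, s in scores.items()}
```

On this chain, b converges toward 2/3 and c toward 1/3, so b is pseudo-labeled with a's class and c with d's, which mirrors the quoted "weighted combination of its neighbor's labels" behavior.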
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 5-6, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Narlikar et al. (US 20210406601 A1) in view of Takata et al. (US 20190304603 A1).
As per claim 5, the rejection of claim 3 is incorporated, but Narlikar does not specifically teach prior to the training of the second machine learning model, removing, by the device and from the combined data features, a transformed subset of the training data features and the auxiliary data features.

However, Narlikar teaches an entity comprising combined data features (e.g. in paragraphs 41 and 105, “construct a common feature space that can connect data and resources across new and existing data modalities”) and Takata teaches prior to training of a machine learning model, removing, by a device and from an entity, a transformed subset of training data features and auxiliary data features (e.g. in paragraphs 23, 46, 52, 59, and 64-65, “assess similarity indicating that feature preparation of [an entity] is compatible with the patient feature data… A machine learning model is generated [i.e. training] using results [i.e. prior] of the feature preparation and patient feature data; and a prediction is provided using the machine learning model… determines if this system can reuse [the entity] to prepare an appropriate training model for the test data… tuning the [entity] to remove reusable features that fail to satisfy…a sample data distribution criteria… feature extraction and transformation… determines if the difference of the sample data distribution between the test and the reusable [entity] is less than a distribution criteria or threshold”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Narlikar to include the teachings of Takata because one of ordinary skill in the art would have recognized the benefit of model training with more suitable data.
As per claim 6, the rejection of claim 1 is incorporated and Narlikar further teaches wherein the generating of the common feature space comprises: adding the second data features to an entity comprising the common feature space (e.g. in paragraphs 40, 105, and 107, “a feature space that is common to all media formats… Features shared by all data modalities are merged… An embedding using existing data modalities can be learned”), but does not specifically teach removing selected ones of the first data features from the common feature space in response to the selected ones of the first data features having a first distribution in the training data that differs from a second distribution of the selected ones of the first data features in the auxiliary data by at least a threshold amount, resulting in a remaining feature space.

However, Takata teaches removing selected ones of first data features from an entity in response to the selected ones of the first data features having a first distribution in training data that differs from a second distribution of the selected ones of the first data features in auxiliary data by at least a threshold amount, resulting in a remaining feature space (e.g. in paragraphs 23, 46, 52, 59, and 64-65, “assess similarity indicating that feature preparation of [an entity] is compatible with the patient feature data… determines if this system can reuse [the entity] to prepare an appropriate training model for the test data… tuning the [entity] to remove reusable features that fail to satisfy…a sample data distribution criteria… determines if the difference of the sample data distribution between the test [i.e. auxiliary] and the reusable [entity comprising training data] is less than a distribution criteria or threshold… not less…removes”).
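The distribution test relied on here (remove a feature when its sample distribution in one data set diverges from the other by at least a threshold, leaving a remaining feature space) can be sketched roughly. This is an illustrative assumption, not Takata's implementation: mean difference stands in for a real distribution-distance measure, and the feature names, values, and 5.0 threshold are invented.

```python
def remove_divergent_features(features, train_rows, aux_rows, threshold):
    """Drop any feature whose mean in the training data differs from its
    mean in the auxiliary data by at least `threshold`; the survivors
    form the remaining feature space."""
    def mean(rows, f):
        return sum(r[f] for r in rows) / len(rows)
    return [f for f in features
            if abs(mean(train_rows, f) - mean(aux_rows, f)) < threshold]

# Hypothetical data: "age" is similarly distributed (means 40 vs 41),
# "bmi" diverges sharply (means 21 vs 36) and is removed.
train_rows = [{"age": 30, "bmi": 20}, {"age": 50, "bmi": 22}]
aux_rows   = [{"age": 38, "bmi": 35}, {"age": 44, "bmi": 37}]
remaining = remove_divergent_features(["age", "bmi"], train_rows, aux_rows,
                                      threshold=5.0)
```

Claim 7's further limitation would then iterate this removal over the remaining feature space until the change in model accuracy reaches a second threshold, per the Herman-Saffar and Sutherland passages quoted below in the record.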
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Narlikar to include the teachings of Takata because one of ordinary skill in the art would have recognized the benefit of model training with more suitable data.

Claims 14 and 20 correspond to method claim 6 and are rejected under the same reasons set forth.

Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Narlikar et al. (US 20210406601 A1) in view of Takata et al. (US 20190304603 A1) and further in view of Herman-Saffar et al. (US 20230004991 A1) and Sutherland (US 20210241035 A1).

As per claim 7, the rejection of claim 6 is incorporated and the combination further teaches wherein the threshold amount is a first threshold amount (e.g. Takata, in paragraphs 52 and 64-65, “remove reusable features that fail to satisfy…a sample data distribution criteria… determines if the difference of the sample data distribution between the test [i.e. auxiliary] and the reusable [entity comprising training data] is less than a distribution criteria or threshold… not less…removes”), but does not specifically teach wherein the removing of the selected ones of the first data features comprises iteratively removing the selected ones of the first data features from the remaining feature space until a change in accuracy of the first machine learning model, resulting from the iteratively removing of the selected ones of the first data features, is at least a second threshold amount. However, Herman-Saffar teaches iteratively removing selected ones of first data features from a remaining feature space according to a change in accuracy of a machine learning model, resulting from the iteratively removing of the selected ones of the first data features (e.g.
in paragraph 47, “Such unimportant features may be features that have a lower impact on the accuracy of the result of the ML model. In one or more embodiments, such features are iteratively removed while assessing the impact on the accuracy of the model”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Herman-Saffar because one of ordinary skill in the art would have recognized the benefit of assessing accuracy impact, but the combination does not specifically teach iteratively removing until the change in accuracy is at least a second threshold amount.

However, Sutherland teaches iterating until a change in accuracy is at least a threshold amount (e.g. in paragraph 55, “machine learning system 101 performs the filtering process iteratively… until a target detection model quality (e.g., indicated by model accuracy or other equivalent metric) is achieved [i.e. threshold amount]… until the rate of improvements in accuracy decreases below a threshold (e.g.,…not resulting in further improvements or where the improvements are marginal) [i.e. threshold amount]”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Sutherland because one of ordinary skill in the art would have recognized the benefit of facilitating model accuracy (this also amounts to a simple substitution that yields predictable results [e.g. see KSR Int'l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP 2143(B)]).

Claim 15 is the system claim corresponding to method claim 7, and is rejected under the same reasons set forth.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Narlikar et al. (US 20210406601 A1) in view of Tabibiazar et al. (US 20080300797 A1).
As per claim 8, the rejection of claim 1 is incorporated, but Narlikar does not specifically teach in response to the training of the second machine learning model, classifying, by the second machine learning model and based on input medical data, a medical condition associated with the input medical data.

However, Tabibiazar teaches in response to training of a machine learning model, classifying, by the machine learning model and based on input medical data, a medical condition associated with the input medical data (e.g. in paragraphs 21, 24, 37, 122, and 134-136, “use in clinical medicine and biomedical research for improved tools… improved clinical outcomes…. classifying a sample obtained from a mammalian subject by obtaining a dataset associated with a sample, wherein the dataset comprises protein expression levels… classification is selected from the group consisting of an atherosclerotic disease classification, a healthy classification, a vascular inflammation classification, a medication exposure classification, a no medication exposure classification, and a coronary calcium score classification, and classifying the sample according to the output of the process… inputted into a predictive model… machine learning algorithm are applied to the appropriate reference or training data to determine the parameters for analytical processes suitable for a variety of atherosclerotic classifications”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Narlikar to include the teachings of Tabibiazar because one of ordinary skill in the art would have recognized the benefit of improving research and/or clinical outcomes.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Narlikar et al. (US 20210406601 A1) in view of Eyler et al. (US 20190017839 A1).
As per claim 9, the rejection of claim 1 is incorporated, but Narlikar does not specifically teach, in response to the training of the second machine learning model, determining, via the second machine learning model, locations for respective virtual objects associated with an augmented reality application.

However, Eyler teaches, in response to training of a machine learning model, determining, via the machine learning model, locations for respective virtual objects associated with an augmented reality application (e.g., in paragraphs 22, 31, 98, 168, and 182, “augmented reality transportation system performs analyses of the historical information (e.g., by way of machine learning models and/or neural networks) to inform future decisions on route recommendations, pickup location assignments, drop-off location assignments, etc… augmented reality transportation system, in at least one example, also identifies one or more “no pickup” locations… the augmented reality transportation system 106 utilizes a neural network or other analytical device or model to extrapolate conclusions from the historical data. For example, the augmented reality transportation system 106 trains a machine learning model to analyze…historical information… augmented reality elements of FIG. 5 in addition to other augmented reality elements such as the pickup location route elements 602a, 602b, and 602c (referred to herein collectively as “pickup location route elements 602”… further includes an augmented reality transportation vehicle location element 502, an augmented reality pickup location element 504 (labeled “Pickup Zone”), and an augmented reality no pickup location element 506 (labeled “No Pickup””).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Narlikar to include the teachings of Eyler because one of ordinary skill in the art would have recognized the benefit of assisting users with transportation.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For example, Dai et al. ("Boosting for Transfer Learning," in Proc. 24th International Conference on Machine Learning, Corvallis, Jun. 2007, pp. 193-200, 8 pages, cited in the IDS dated 09/27/2022) teaches “allows users to utilize a small amount of newly labeled data to leverage the old data to construct a high-quality classification model for the new data… Although the training data are more or less out-dated, there are certain parts of the data that can still be reused. That is, knowledge learned from this part of the data can still be of use in training a classifier for the new data… Our key idea is to use boosting to filter out [i.e. remove] the diff-distribution training data that are very different from the same-distribution data… The remaining diff-distribution data are treated as the additional training data which greatly boost the confidence of the learned model even when the same-distribution training data are scarce… learning with auxiliary data, where the labeled diff-distribution data are treated as the auxiliary data… chooses the most helpful diff-distribution training data” (e.g., in pages 1-3 and 5).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM WONG, whose telephone number is (571) 270-1399. The examiner can normally be reached Monday-Friday, 9am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TAMARA KYLE, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/W.W/ Examiner, Art Unit 2144, 12/27/2025
/TAMARA T KYLE/ Supervisory Patent Examiner, Art Unit 2144

Prosecution Timeline

Sep 27, 2022: Application Filed
Dec 27, 2025: Non-Final Rejection (§101, §102, §103)
Mar 27, 2026: Interview Requested
Apr 08, 2026: Applicant Interview (Telephonic)
Apr 14, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572252: CONTROLLING A 2D SCREEN INTERFACE APPLICATION IN A MIXED REALITY APPLICATION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12530707: CUSTOMER EFFORT EVALUATION IN A CONTACT CENTER SYSTEM (granted Jan 20, 2026; 2y 5m to grant)
Patent 12511846: XR DEVICE-BASED TOOL FOR CROSS-PLATFORM CONTENT CREATION AND DISPLAY (granted Dec 30, 2025; 2y 5m to grant)
Patent 12504944: METHODS AND USER INTERFACES FOR SHARING AUDIO (granted Dec 23, 2025; 2y 5m to grant)
Patent 12423561: METHOD AND APPARATUS FOR KEEPING STATISTICAL INFERENCE ACCURACY WITH 8-BIT WINOGRAD CONVOLUTION (granted Sep 23, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 30%
With Interview: 57% (+26.9%)
Median Time to Grant: 4y 11m
PTA Risk: Low

Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
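The 57% "With Interview" figure appears to be the examiner's 30% career allow rate plus the +26.9 percentage-point interview lift. A minimal sketch of that arithmetic, assuming the dashboard simply adds the lift to the base rate (an assumption about its methodology, not a documented formula):

```python
# Hypothetical reconstruction of the "With Interview" projection:
# career allow rate plus interview lift, treated as additive
# percentage points (an assumption, not a documented formula).
base_rate = 0.30        # examiner's career allow rate (30%)
interview_lift = 0.269  # +26.9 percentage points observed with interviews

with_interview = base_rate + interview_lift
print(f"{with_interview:.0%}")  # -> 57%
```

Under this reading, the lift is an observed difference between interviewed and non-interviewed outcomes, not a causal guarantee that requesting an interview raises the odds by that amount.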
