Prosecution Insights
Last updated: April 19, 2026
Application No. 18/542,460

Accelerated Coordinate Encoding: Learning to Relocalize in Minutes Using RGB and Poses

Non-Final OA (§103)
Filed: Dec 15, 2023
Examiner: NWUHA, LOUIS TOCHUKWU ENE
Art Unit: 2674
Tech Center: 2600 (Communications)
Assignee: Niantic, Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs. TC average)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Typical Timeline: 2y 9m average prosecution
Career History: 11 total applications across all art units; 11 currently pending

Statute-Specific Performance

§101: 8.7% (-31.3% vs. TC avg)
§103: 78.3% (+38.3% vs. TC avg)
§102: 13.0% (-27.0% vs. TC avg)

Based on career data from 0 resolved cases; TC averages are estimates.

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. The United States Patent & Trademark Office appreciates the application filed by the inventor/assignee. The Office has reviewed the application and makes the comments below.

Drawings

3. The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character 145 has been used to designate both the ACE Relocalizer Module and the Universal Game Module in Fig. 1. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 103

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

5. Claims 1, 2, 4, 6, 8, 9, 11, 13, 15-16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shotton et al. (US Patent Pub. No. 2018/0285697 A1, hereafter referred to as Shotton) in view of Eder et al. (US Patent Pub. No. 2021/0279957 A1, hereafter referred to as Eder), in further view of Murez et al. (US Patent No. 11620527 B2, hereafter referred to as Murez), and further in view of Sinha et al. (US Patent No. 12322129 B2, hereafter referred to as Sinha).

6. Regarding Claim 1, Shotton teaches a method comprising: receiving a set of training images of one or more environments (Fig. 6 and paragraphs 16, 44-46, and 60; Shotton teaches receiving a training set of images of scenes where image elements have labels indicating their corresponding scene coordinates, where the flow diagram is a depiction of the method of training a random decision forest.
The Examiner interprets the training set of images of scenes where image elements have labels as a training set of images of one or more environments.) and corresponding metadata (paragraph 29; Shotton teaches information about the certainty of the image-element scene coordinates, which is an example of metadata describing an attribute of the primary data, which is the correspondence between the image elements and scene coordinates, being available. The Examiner interprets the information about the certainty of the image-element scene coordinates as metadata corresponding to the set of training images.), training, by a relocalizer training system (paragraph 24; Shotton teaches a camera pose tracker for relocalizing a mobile camera such as a smart phone in a trained random decision forest for a first scene. The Examiner interprets the camera pose tracker system that is used for relocalizing a mobile camera as a relocalizer training system.), a relocalizer model using the set of training images and corresponding metadata (Fig. 5 and paragraph 33; Shotton teaches labeled training images of scene A being used to train random decision forests to enable image elements to predict the correspondence between image elements and scene coordinates using the camera pose tracker system to determine the pose of the object. The Examiner interprets the use of labeled training images to predict the correspondence between image elements and scene coordinates to determine a pose of an object using a camera pose tracker system as a relocalizer model using training images and corresponding metadata.)
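As technical context for the "camera pose and intrinsics" metadata at issue in this rejection: in scene coordinate regression generally, the per-pixel ground-truth targets can be derived by backprojecting a pixel through the camera model. The sketch below is an editor's illustration only, in plain Python with hypothetical values; it does not come from the application or the cited references.

```python
# Illustrative only: derive a per-pixel "scene coordinate" (a 3D point in
# world space) from a pixel, its depth, the camera intrinsics, and a
# camera-to-world pose (rotation R, translation t). Pinhole model assumed.

def pixel_to_scene(u, v, depth, fx, fy, cx, cy, R, t):
    # Lift the pixel into the camera frame: X_cam = depth * K^-1 [u, v, 1].
    cam = ((u - cx) / fx * depth, (v - cy) / fy * depth, depth)
    # Transform to world coordinates: X_world = R @ X_cam + t.
    return tuple(
        sum(R[i][j] * cam[j] for j in range(3)) + t[i] for i in range(3)
    )

# Principal-point pixel at 2 m depth, identity rotation, camera at (1, 0, 0):
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(pixel_to_scene(320, 240, 2.0, 500.0, 500.0, 320.0, 240.0,
                     identity, (1.0, 0.0, 0.0)))  # (1.0, 0.0, 2.0)
```

Under this reading, the claimed metadata (pose and intrinsics) is exactly what is needed to relate pixels to world-space supervision targets.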
Shotton does not teach the metadata comprising camera pose and intrinsics, the relocalizer model configured to predict scene coordinates corresponding to pixels in an image of an environment, wherein the relocalizer model comprises a scene-agnostic convolutional network, a scene-specific regression network, receiving a set of query images of an environment, applying, by the relocalizer training system, a trained relocalizer model to the set of query images of the environment to generate predicted scene coordinates corresponding to the pixels in a query image, and applying, by the relocalizer training system, a pose solver algorithm to the predicted scene coordinates to generate a camera pose.

Eder is in the same field of art of relocalization to generate predicted coordinates based on an image. Further Eder teaches the metadata comprising camera pose and intrinsics (paragraphs 48, 50, and 61; Eder teaches using metadata comprising data associated with images, videos, natural language, camera trajectory, and geometry as a part of the virtual representation, which includes using virtual space, to generate a representation of a physical location with spatially localized information. The Examiner interprets the metadata comprising data associated with images, videos, natural language, camera trajectory, and geometry as metadata comprising camera pose and intrinsics.), the relocalizer model configured to predict scene coordinates corresponding to pixels in an image of an environment (paragraphs 62, 125, and 170; Eder teaches a model that executes a camera relocalization process using relative camera poses associated with additional images with respect to registered RGB or RGB-D images and a 3D model, in the form of machine learning training that uses information from the description data, such as training images, as an input for scene capture, scene annotation, and scene editing, ultimately to predict the geometric composition of a location, which is an example of predicting scene coordinates, among other elements that provide information about the location. The Examiner interprets the model that executes a camera relocalization process as a relocalizer model, and predicting the geometric composition of a location among other elements that provide information about the location using training images which include pixels as predicting scene coordinates corresponding to pixels in an image of an environment.), a scene-specific regression network (paragraph 175; Eder teaches a relocalizer model that accurately predicts location with a geometric reconstruction framework that includes Structure-from-Motion (SFM) or simultaneous localization and mapping, which is an example of a scene-specific regression network.
The Examiner interprets SFM or simultaneous localization and mapping as a scene-specific regression network.), and applying, by the relocalizer training system, a pose solver algorithm to the predicted scene coordinates to generate a camera pose (paragraph 47; Eder teaches a class of algorithms, known as SFM, that use image pixels of the relocalizer system to computationally solve for and estimate intrinsic and extrinsic camera parameters, and which can be applied to both ordered and unordered image data. The Examiner interprets this as applying the relocalizer training system to predict scene coordinates to generate a camera pose using a pose solver algorithm.).

Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by adding the metadata based on camera pose and intrinsics, the predictions based on image pixels, the scene-specific regression network, and the pose solver algorithm taught by Eder to enhance user experiences by determining the precise location and orientation of a camera when virtual elements are overlaid on the depiction of real-world environments; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (paragraphs 47-48, 50, 61-62, 125, 170, 175, Eder).

Shotton in view of Eder does not teach wherein the relocalizer model comprises a scene-agnostic convolutional network, receiving a set of query images of an environment, and applying, by the relocalizer training system, a trained relocalizer model to the set of query images of the environment to generate predicted scene coordinates corresponding to the pixels in a query image. Murez is in the same field of art of relocalization to generate predicted coordinates based on an image.
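For context on the "pose solver algorithm" limitation mapped above: such solvers are conventionally robust estimators (e.g., PnP inside a RANSAC loop) run over predicted 2D-3D correspondences. The toy sketch below illustrates only the hypothesize-and-verify RANSAC structure, deliberately simplified to a translation-only camera with known rotation and known camera-frame points; all names and values are the editor's hypotheticals, not material from the cited references.

```python
import random

def ransac_translation(cam_pts, scene_pts, iters=50, tau=0.05, seed=0):
    """Toy pose solver: recover a camera-to-world translation t such that
    scene_pt ~= cam_pt + t, rejecting outlier correspondences via RANSAC.
    (A real solver would estimate full rotation + translation with PnP.)"""
    rng = random.Random(seed)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        i = rng.randrange(len(cam_pts))
        # Hypothesis from a single sampled correspondence.
        t = tuple(s - c for s, c in zip(scene_pts[i], cam_pts[i]))
        # Verify: count correspondences consistent with this hypothesis.
        inliers = sum(
            1 for c, s in zip(cam_pts, scene_pts)
            if max(abs(s[k] - c[k] - t[k]) for k in range(3)) < tau
        )
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# Five exact correspondences for t = (1, 2, 3) plus two gross outliers:
cam = [(0, 0, 1), (1, 0, 2), (0, 1, 2), (2, 2, 2), (1, 1, 1), (0, 0, 0), (5, 5, 5)]
scene = [(1, 2, 4), (2, 2, 5), (1, 3, 5), (3, 4, 5), (2, 3, 4), (9, 9, 9), (0, 0, 0)]
t, n = ransac_translation(cam, scene)  # t == (1, 2, 3) with n == 5 inliers
```

The verification step (inlier counting against a threshold) is what lets the solver tolerate erroneous scene-coordinate predictions.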
Further Murez teaches wherein the relocalizer model comprises a scene-agnostic convolutional network (paragraphs 14 and 20; Murez teaches a deep CNN adapted to make predictions using image data for a new target image domain without requiring new annotations (domain-agnostic representations), which is an example of a scene-agnostic convolutional network that learns to understand information without retraining, by determining domain-agnostic features that map from the annotated source image domain and a target image domain to a joint latent space, and using the domain-agnostic features to map the joint latent space to annotations for a target image domain. The Examiner interprets this as a relocalizer model that comprises a scene-agnostic convolutional network.).

Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by adding the scene-agnostic convolutional network taught by Murez to enhance the training of the camera relocalization model for unseen environments without retraining; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (paragraphs 14 and 20, Murez).

Shotton in view of Eder in further view of Murez does not teach receiving a set of query images of an environment and applying, by the relocalizer training system, a trained relocalizer model to the set of query images of the environment to generate predicted scene coordinates corresponding to the pixels in a query image. Sinha is in the same field of art of relocalization to generate predicted coordinates based on an image.

Further Sinha teaches receiving a set of query images of an environment (Figs. 2-3, col 2 lines 39-41 and 47-51, and col 4 lines 1-8; Sinha teaches query images being used as inputs for detecting scene landmarks by way of machine learning models, where sets of query images are received that depict specific environments, such as an interior of a domestic kitchen, and 2D locations of a plurality of the pre-specified 3D scene landmarks. The Examiner interprets this use of query images as receiving a set of query images of an environment.) and applying, by the relocalizer training system, a trained relocalizer model to the set of query images of the environment to generate predicted scene coordinates corresponding to the pixels in a query image (Fig. 3 and col 4 lines 1-27; Sinha teaches using query images as part of the localization system to generate predicted scene coordinates based on the image data of the query image, where the models encode information about the 3D scene landmarks and are trained to solve the task of identifying 2D locations in the image that depict accurate 3D scene landmarks. The Examiner interprets using the image data of the query image as applying the set of query images to the relocalizer training system, and the query images being used to encode information about 3D scene landmarks to depict accurate 3D scene landmarks as generating predicted scene coordinates corresponding to the pixels in a query image.).

Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by training the relocalizer model using a set of query images for coordinates based on query image pixels, as taught by Sinha, to enhance the pinpointing of location and orientation in the prediction of scene coordinates; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (Figs.
2-3 and col 2 lines 39-41, 47-51, and col 4 lines 1-27, Sinha). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

7. Regarding Claim 2, Shotton in view of Eder, in further view of Murez, and further in view of Sinha teaches wherein the scene-agnostic convolutional network of the relocalizer model is pre-trained on the set of training images of one or more environments and corresponding metadata using image-level training and curriculum training (paragraphs 47 and 54; Murez teaches the initial preliminary use of tens of thousands to millions of training images, along with evaluation based on their corresponding images, for the domain agnosticism of the disclosed model. The Examiner interprets this as pre-training on the set of training images.).

8. Regarding Claim 4, Shotton in view of Eder, in further view of Murez, and further in view of Sinha teaches wherein the relocalizer training system trains the scene-specific regression network in a buffer generation stage (paragraphs 89, 115, 155, and 186; Eder teaches the tentative or temporary storage of accumulation in memory from the geometric reconstruction framework that includes SFM using a texture atlas, which is built from pixel data stored in a buffer generation stage, as well as on one or more servers, which use buffer generation stages to handle data flow, manage congestion, and smooth out processing for multi-stage image operations. The Examiner interprets this as the scene-specific regression network of the relocalizer training system being trained in a buffer generation stage.) and a main training loop stage (Figs. 4B and 7, paragraphs 100-105 and 132; Eder teaches the geometric reconstruction framework that includes SFM for machine learning methods that include a continuous looping stage and feedback information to guide the collection of additional description data that relate to particular portions of the location. The Examiner interprets this as the scene-specific regression network of the relocalizer training system being trained in a main training loop stage.).

9. Regarding Claim 6, Shotton in view of Eder, in further view of Murez, and further in view of Sinha teaches wherein the main training loop stage of training the scene-specific regression network includes: shuffling entries of the training buffer at a beginning of each epoch (Figs. 7A and 7B, paragraphs 14 and 68; Murez teaches several learning iterations to start the cycle of processing the data buffer for a deep convolutional neural network used to perform scene-specific regression network tasks, focusing on mapping features to annotations such as poses and labels. The Examiner interprets the several learning iterations that start the cycle of processing the data buffer as shuffling entries of the training buffer at a beginning of each epoch.); generating training batches, each training batch including random features and associated mapping poses (Figs. 7A and 7B, paragraphs 14 and 68; Murez teaches measuring the total loss for a batch of annotated and unannotated data obtained by encoding data from both domains into a joint latent space, involving sampling features such as the data in domains X and Y and their annotations (ground truth, also known as mapped poses) via the machine learning algorithm. The Examiner interprets this as the generation of training batches that include random features and associated poses.); and training the scene-specific regression network using the training batches (Figs. 7A and 7B, paragraphs 14 and 68; Murez teaches backpropagation through encoders and decoders to minimize the loss until the loss is smaller than the threshold. The Examiner interprets this as training the scene-specific regression network using the training batches.).

10.
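The buffer-and-loop training scheme recited in Claims 4 and 6 (shuffle the buffer each epoch, form batches of features and poses, train the regression head) follows a standard pattern. The sketch below is a generic illustration in plain Python, with a hypothetical one-parameter regression standing in for the scene-specific regression network; it is not code from the application or the cited art.

```python
import random

def train_regression_head(buffer, epochs=20, batch_size=4, lr=0.01, seed=0):
    """Generic 'main training loop': shuffle the buffer at the start of each
    epoch, slice it into batches, and update the model by gradient descent.
    A single weight w (y_hat = w * x) stands in for the regression network."""
    rng = random.Random(seed)
    data = list(buffer)  # work on a copy so the caller's buffer is untouched
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(data)  # shuffle entries at the beginning of each epoch
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]  # one training batch
            # Mean-squared-error gradient for the toy model y_hat = w * x.
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad
    return w

# Buffer of (feature, target) entries generated by y = 2x:
buf = [(x, 2.0 * x) for x in range(1, 9)]
w = train_regression_head(buf)  # w converges to roughly 2.0
```

Decoupling buffer generation (filling `buf`) from the loop that shuffles and consumes it mirrors the two-stage structure the claims recite.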
Regarding Claim 8, Shotton teaches a non-transitory computer-readable medium comprising stored instructions that, when executed by one or more computing devices, cause the one or more computing devices to collectively: receive a set of training images of one or more environments and corresponding metadata (Fig. 6, paragraphs 16, 29, 44-46, and 60; Shotton teaches receiving a training set of images of scenes where image elements have labels indicating their corresponding scene coordinates and information about the certainty of the image-element scene coordinates, which is an example of metadata describing an attribute of the primary data. The Examiner interprets the training set of images of scenes where image elements have labels as a training set of images of one or more environments, and the information about the certainty of the image-element scene coordinates as metadata corresponding to the set of training images.) and train a relocalizer model using the set of training images (Fig. 5 and paragraph 33; Shotton teaches labeled training images of scene A being used to train random decision forests to enable image elements to predict the correspondence between image elements and scene coordinates using the camera pose tracker system to determine the pose of the object. The Examiner interprets the use of labeled training images to predict the correspondence between image elements and scene coordinates to determine a pose of an object using a camera pose tracker system as a relocalizer model using training images.).

Shotton does not teach the metadata comprising camera pose and intrinsics, the relocalizer model configured to predict scene coordinates corresponding to pixels in an image of an environment, wherein the relocalizer model comprises a scene-agnostic convolutional network and a scene-specific regression network, receive a set of query images of an environment, apply a trained relocalizer model to the set of query images of the environment to generate predicted scene coordinates corresponding to the pixels in the query image, and apply a pose solver algorithm to the predicted scene coordinates to generate a camera pose.

Eder is in the same field of art of relocalization to generate predicted coordinates based on an image. Further Eder teaches the metadata comprising camera pose and intrinsics (paragraphs 48, 50, and 61; Eder teaches using metadata comprising data associated with images, videos, natural language, camera trajectory, and geometry as a part of the virtual representation, which includes using virtual space, to generate a representation of a physical location with spatially localized information. The Examiner interprets the metadata comprising data associated with images, videos, natural language, camera trajectory, and geometry as metadata comprising camera pose and intrinsics.), the relocalizer model configured to predict scene coordinates corresponding to pixels in an image of an environment (paragraphs 62, 125, and 170; Eder teaches a model that executes a camera relocalization process using relative camera poses associated with additional images with respect to registered RGB or RGB-D images and a 3D model, in the form of machine learning training that uses information from the description data, such as training images, as an input for scene capture, scene annotation, and scene editing, ultimately to predict the geometric composition of a location, which is an example of predicting scene coordinates, among other elements that provide information about the location. The Examiner interprets the model that executes a camera relocalization process as a relocalizer model, and predicting the geometric composition of a location among other elements that provide information about the location using training images which include pixels as predicting scene coordinates corresponding to pixels in an image of an environment.), a scene-specific regression network (paragraph 175; Eder teaches that the relocalizer model that accurately predicts location has a geometric reconstruction framework that includes Structure-from-Motion (SFM) or simultaneous localization and mapping, which is an example of a scene-specific regression network.
The Examiner interprets SFM or simultaneous localization and mapping as a scene-specific regression network.), and apply a pose solver algorithm to the predicted scene coordinates to generate a camera pose (paragraph 47; Eder teaches a class of algorithms, known as SFM, that use image pixels of the relocalizer system to computationally solve for and estimate intrinsic and extrinsic camera parameters, and which can be applied to both ordered and unordered image data. The Examiner interprets this as applying the relocalizer training system to predict scene coordinates to generate a camera pose using a pose solver algorithm.).

Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by adding the metadata based on camera pose and intrinsics, the predictions based on image pixels, the scene-specific regression network, and the pose solver algorithm taught by Eder to enhance user experiences by determining the precise location and orientation of a camera when virtual elements are overlaid on the depiction of real-world environments; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (paragraphs 47-48, 50, 61-62, 125, 170, 175, Eder).

Shotton in view of Eder does not teach wherein the relocalizer model comprises a scene-agnostic convolutional network, receive a set of query images of an environment, and apply a trained relocalizer model to the set of query images of the environment to generate predicted scene coordinates corresponding to the pixels in the query image. Murez is in the same field of art of relocalization to generate predicted coordinates based on an image.

Further Murez teaches wherein the relocalizer model comprises a scene-agnostic convolutional network (paragraphs 14 and 20; Murez teaches a deep CNN adapted to make predictions using image data for a new target image domain without requiring new annotations (domain-agnostic representations), which is an example of a scene-agnostic convolutional network that learns to understand information without retraining, by determining domain-agnostic features that map from the annotated source image domain and a target image domain to a joint latent space, and using the domain-agnostic features to map the joint latent space to annotations for a target image domain. The Examiner interprets this as a relocalizer model that comprises a scene-agnostic convolutional network.).

Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by adding the scene-agnostic convolutional network taught by Murez to enhance the training of the camera relocalization model for unseen environments without retraining; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (paragraphs 14 and 20, Murez).

Shotton in view of Eder in further view of Murez does not teach receive a set of query images of an environment and apply a trained relocalizer model to the set of query images of the environment to generate predicted scene coordinates corresponding to the pixels in the query image. Sinha is in the same field of art of relocalization to generate predicted coordinates based on an image.

Further Sinha teaches receive a set of query images of an environment (Figs. 2-3, col 2 lines 39-41 and 47-51, and col 4 lines 1-8; Sinha teaches query images being used as inputs for detecting scene landmarks by way of machine learning models, where sets of query images are received that depict specific environments, such as an interior of a domestic kitchen, and 2D locations of a plurality of the pre-specified 3D scene landmarks. The Examiner interprets this use of query images as receiving a set of query images of an environment.) and apply a trained relocalizer model to the set of query images of the environment to generate predicted scene coordinates corresponding to the pixels in the query image (Fig. 3 and col 4 lines 1-27; Sinha teaches using query images as part of the localization system to generate predicted scene coordinates based on the image data of the query image, where the models encode information about the 3D scene landmarks and are trained to solve the task of identifying 2D locations in the image that depict accurate 3D scene landmarks. The Examiner interprets using the image data of the query image as applying the set of query images to the relocalizer training system, and the query images being used to encode information about 3D scene landmarks to depict accurate 3D scene landmarks as generating predicted scene coordinates corresponding to the pixels in a query image.).

Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by training the relocalizer model using a set of query images for coordinates based on query image pixels, as taught by Sinha, to enhance the pinpointing of location and orientation in the prediction of scene coordinates; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (Figs.
2-3 and col 2 lines 39-41, 47-51, and col 4 lines 1-27, Sinha). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

11. Regarding Claim 9, Shotton in view of Eder, in further view of Murez, and further in view of Sinha teaches wherein the scene-agnostic convolutional network of the relocalizer model is pre-trained on the set of training images of one or more environments and corresponding metadata using image-level training and curriculum training (paragraphs 47 and 54; Murez teaches the initial preliminary use of tens of thousands to millions of training images, along with evaluation based on their corresponding images, for the disclosed agnostic network. The Examiner interprets this as pre-training on the set of training images.).

12. Regarding Claim 11, Shotton in view of Eder, in further view of Murez, and further in view of Sinha teaches wherein the scene-specific regression network is trained in a buffer generation stage (paragraphs 89, 115, 155, and 186; Eder teaches the tentative or temporary storage of accumulation in memory from the geometric reconstruction framework that includes SFM using a texture atlas, which is built from pixel data stored in a buffer generation stage, as well as on one or more servers, which use buffer generation stages to handle data flow, manage congestion, and smooth out processing for multi-stage image operations. The Examiner interprets this as the scene-specific regression network of the relocalizer training system being trained in a buffer generation stage.) and a main training loop stage (Figs. 4B and 7, paragraphs 100-105 and 132; Eder teaches the geometric reconstruction framework that includes SFM for machine learning methods that include a continuous looping stage and feedback information to guide the collection of additional description data that relate to particular portions of the location. The Examiner interprets this as the scene-specific regression network of the relocalizer training system being trained in a main training loop stage.).

13. Regarding Claim 13, Shotton in view of Eder, in further view of Murez, and further in view of Sinha teaches wherein the main training loop comprises instructions that, when executed by a processor, cause the processor to: shuffling entries of the training buffer at a beginning of each epoch (Figs. 7A and 7B, paragraphs 14 and 68; Murez teaches several learning iterations to start the cycle of processing the data buffer for a deep convolutional neural network used to perform scene-specific regression network tasks, focusing on mapping features to annotations such as poses and labels. The Examiner interprets the several learning iterations that start the cycle of processing the data buffer as shuffling entries of the training buffer at a beginning of each epoch.); generating training batches, each training batch including random features and associated mapping poses (Figs. 7A and 7B, paragraphs 14 and 68; Murez teaches measuring the total loss for a batch of annotated and unannotated data obtained by encoding data from both domains into a joint latent space, involving sampling features such as the data in domains X and Y and their annotations (ground truth, also known as mapped poses) via the machine learning algorithm. The Examiner interprets this as the generation of training batches that include random features and associated poses.); and training the scene-specific regression network using the training batches (Figs. 7A and 7B, paragraphs 14 and 68; Murez teaches backpropagation through encoders and decoders to minimize the loss until the loss is smaller than the threshold. The Examiner interprets this as training the scene-specific regression network using the training batches.).

14.
Regarding Claim 15, Shotton teaches a computer system, comprising: one or more computer processors; and one or more memories comprising stored instructions that when executed by the one or more computer processors causes the computer system to: receive a set of training images of one or more environments and corresponding metadata (Fig. 6, paragraphs 16, 29, 44-46, and 60, Shotton teaches receiving a training set of images of scenes where image elements have labels indicating their corresponding scene coordinates and information about the certainty of the image-element scene coordinates, which is an example of metadata describing an attribute of the primary data. The Examiner interprets the training set of images of scenes where image elements have labels as a training set of images of one or more environments and the information about the certainty of the image-element scene coordinates as corresponding metadata to the set of training images.) and train a relocalizer model using the set of training images (Fig. 5 and paragraph 33, Shotton teaches labeled training images of scene A being used to train random decision forests to enable image elements to predict the correspondence between image elements and scene coordinates using the camera pose tracker system to determine the pose of the object. The Examiner interprets the use of labeled training images to predict the correspondence between image elements and scene coordinates to determine a pose of an object using a camera pose tracker system as a relocalizer model trained using training images.).
Shotton does not teach the metadata comprising camera pose and intrinsics, the relocalizer model configured to predict scene coordinates corresponding to pixels in an image of an environment, wherein the relocalizer model comprises a scene-agnostic convolutional network, a scene-specific regression network, receive a set of query images of an environment, apply a trained relocalizer model to the set of query images of the environment to generate predicted scene coordinates corresponding to the pixels in the query image, and apply a pose solver algorithm to the predicted scene coordinates to generate a camera pose. Eder is in the same field of art of relocalization to generate predicted coordinates based on an image. Further Eder teaches the metadata comprising camera pose and intrinsics (paragraphs 48, 50, and 61, Eder teaches using metadata comprising data associated with images, videos, natural language, camera trajectory, and geometry as a part of the virtual representation, which includes using virtual space, to generate a representation of a physical location with spatially localized information. The Examiner interprets the metadata comprising data associated with images, videos, natural language, camera trajectory, and geometry as metadata comprising camera pose and intrinsics.), the relocalizer model configured to predict scene coordinates corresponding to pixels in an image of an environment (paragraphs 62, 125, and 170, Eder teaches a model that executes a camera relocalization process using relative camera poses associated with additional images with respect to registered RGB or RGB-D images and a 3D model in the form of machine learning training that uses information from the description data such as training images as an input for scene capture, scene annotation, and scene editing ultimately to predict the geometric composition of a location, which is an example of predicting scene coordinates, among other elements that provide information about the location.
The Examiner interprets the model that executes a camera relocalization process as a relocalizer model and predicting the geometric composition of a location among other elements that provide information about the location using training images which include pixels to predict scene coordinates corresponding to pixels in an image of an environment.), a scene-specific regression network (paragraph 175, Eder teaches where the relocalizer model that accurately predicts location has a geometric reconstruction framework that includes a Structure-from-Motion, SFM, or simultaneous localization and mapping, which is an example of a scene-specific regression network. The Examiner interprets SFM or simultaneous localization and mapping as a scene-specific regression network.), and apply a pose solver algorithm to the predicted scene coordinates to generate a camera pose (paragraph 47, Eder teaches a class of algorithms that use image pixels of the relocalizer system to computationally solve for and estimate intrinsic and extrinsic camera parameters known as SFM, which can be applied to both ordered and unordered image data. The Examiner interprets this as applying the relocalizer training system to predict scene coordinates to generate a camera pose using a pose solver algorithm.).
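As context for the pose-solver limitation mapped above, the step from predicted scene coordinates to a camera pose can be sketched with one simple solver variant: rigid 3D-3D alignment via the Kabsch algorithm. This is an illustrative sketch only, not the solver of the application or of Eder's SFM pipeline; production relocalizers typically run PnP with RANSAC on 2D-3D matches using the camera intrinsics, and all names here are invented for the example.

```python
import numpy as np

def solve_pose_kabsch(scene_pts, cam_pts):
    """Rigid alignment: find R, t with cam_pts[i] ~= R @ scene_pts[i] + t.

    scene_pts: (N, 3) predicted scene coordinates in the world frame.
    cam_pts:   (N, 3) the same points expressed in the camera frame.
    """
    p_mean = scene_pts.mean(axis=0)
    q_mean = cam_pts.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (scene_pts - p_mean).T @ (cam_pts - q_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

Given noise-free correspondences generated from a known pose, the sketch recovers that pose up to floating-point error, which is how it can be sanity-checked.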
Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by adding metadata based on camera pose and intrinsics, predictions based on image pixels, a scene-specific regression network, and a pose solver algorithm that is taught by Eder to enhance user experiences in determining precise location and orientation of a camera when virtual elements are overlaid on the depiction of real-world environments; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (paragraphs 47-48, 50, 61-62, 125, 170, 175, Eder). Shotton in view of Eder does not teach wherein the relocalizer model comprises a scene-agnostic convolutional network, receive a set of query images of an environment, and apply a trained relocalizer model to the set of query images of the environment to generate predicted scene coordinates corresponding to the pixels in the query image. Murez is in the same field of art of relocalization to generate predicted coordinates based on an image. Further Murez teaches wherein the relocalizer model comprises a scene-agnostic convolutional network (paragraphs 14 and 20, Murez teaches a deep CNN that is adapted for predictions using image data for a new target image domain without requiring new annotations, domain agnostic representations, which is an example of using a scene-agnostic convolutional network which learns to understand information without retraining, by determining domain agnostic features that map from the annotated source image domain and a target image domain to a joint latent space, and using the domain agnostic features to map the joint latent space to annotations for a target image domain. The Examiner interprets this as a relocalizer model that comprises a scene-agnostic convolutional network.). 
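The scene-agnostic/scene-specific split discussed above can be sketched minimally: a frozen shared feature extractor (a stand-in for a pretrained scene-agnostic CNN, here just a fixed random linear map) feeding a per-scene regression head fit to predict 3-D scene coordinates. All names, shapes, and the linear stand-ins are illustrative assumptions, not the application's or the references' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

# Frozen stand-in for a scene-agnostic feature extractor. In a real system
# this would be a pretrained CNN shared across scenes; here it is a fixed
# random linear map plus a nonlinearity, and it is never retrained.
W_BACKBONE = rng.normal(size=(32, 16))

def backbone(x):
    return np.tanh(x @ W_BACKBONE)  # (N, 32) descriptors -> (N, 16) features

def fit_scene_head(train_x, train_xyz):
    """Fit one scene-specific head: shared features -> 3-D scene coordinates."""
    feats = backbone(train_x)
    feats1 = np.hstack([feats, np.ones((len(feats), 1))])  # add bias column
    head_w, *_ = np.linalg.lstsq(feats1, train_xyz, rcond=None)
    return head_w  # (17, 3) weights, unique to this scene

def predict_scene_coords(head_w, x):
    feats = backbone(x)
    return np.hstack([feats, np.ones((len(feats), 1))]) @ head_w

# Each mapped scene gets its own head; the backbone stays shared and frozen.
scene_x = rng.normal(size=(200, 32))
scene_xyz = rng.normal(size=(200, 3))
head = fit_scene_head(scene_x, scene_xyz)
pred = predict_scene_coords(head, scene_x)
```

The design point the sketch illustrates is that only the small per-scene head is trained for a new scene, while the shared extractor is reused without retraining.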
Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by adding the scene-agnostic convolutional network that is taught by Murez to enhance the training of the camera relocalization model for unseen environments without retraining; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (paragraphs 14 and 20, Murez). Shotton in view of Eder in further view of Murez does not teach receive a set of query images of an environment and apply a trained relocalizer model to the set of query images of the environment to generate predicted scene coordinates corresponding to the pixels in the query image. Sinha is in the same field of art of relocalization to generate predicted coordinates based on an image. Further Sinha teaches receive a set of query images of an environment (Figs. 2-3 and col 2 lines 39-41, 47-51, and col 4 lines 1-8, Sinha teaches query images being used as inputs for detecting a scene landmark by way of machine learning models, where sets of query images are received to depict specific environments such as an interior of a domestic kitchen and 2D locations of a plurality of the pre-specified 3D scene landmarks. The Examiner interprets this use of query images as receiving a set of query images of an environment.) and apply a trained relocalizer model to the set of query images of the environment to generate predicted scene coordinates corresponding to the pixels in the query image (Fig.
3 and col 4 lines 1-27, Sinha teaches using query images as a part of the localization system to generate predicted scene coordinates based on the image data of the query image, where the models encode information about the 3D scene landmarks, which is trained to solve the task of identifying 2D locations in the image to depict accurate 3D scene landmarks. The Examiner interprets using the image data of the query image as applying the set of query images to the relocalizer training system and the query images being used to encode information about 3D scene landmarks to depict accurate 3D scene landmarks as generating predicted scene coordinates corresponding to the pixels in a query image.). Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by adding training the relocalizer model using a set of query images for coordinates based on query image pixels that is taught by Sinha to enhance the pinpointing of location and orientation in the prediction of scene coordinates; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (Figs. 2-3 and col 2 lines 39-41, 47-51, and col 4 lines 1-27, Sinha). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date. 15.
In regards to Claim 16, Shotton in view of Eder in further view of Murez furthermore in view of Sinha teaches wherein the scene-agnostic convolutional network of the relocalizer model is pre-trained on the set of training images of one or more environments and the corresponding metadata using image-level training and curriculum training (paragraphs 47 and 54, Murez teaches the initial preliminary use of tens of thousands to millions of training images, along with evaluation based on their corresponding images, for the agnostic network disclosed. The Examiner interprets this as pre-training on the set of training images.). 16. In regards to Claim 18, Shotton in view of Eder in further view of Murez furthermore in view of Sinha teaches wherein the scene-specific regression network is trained in a buffer generation stage (paragraphs 89, 115, 155, and 186, Eder teaches the tentative or temporary accumulation of data in memory from the geometric reconstruction framework that includes an SFM using a texture atlas, which is built from pixel data stored in a buffer generation stage, as well as on one or more servers, which use buffer generation stages to handle data flow, manage congestion, and smooth out processing for multi-stage image operations. The Examiner interprets this as the scene-specific regression network of the relocalizer training system being trained in a buffer generation stage.) and a main training loop stage (Figs. 4B and 7, paragraphs 100-105 and 132, Eder teaches the geometric reconstruction framework that includes SFM for machine learning methods that include a continuous looping stage and feedback information to guide the collection of additional description data that relate to particular portions of the location. The Examiner interprets this as the scene-specific regression network of the relocalizer training system being trained in a main training loop stage.). 17.
In regards to Claim 20, Shotton in view of Eder in further view of Murez furthermore in view of Sinha teaches wherein the main training loop comprises instructions that, when executed by a processor, cause the processor to: shuffling entries of the training buffer at a beginning of each epoch (Figs. 7A and 7B, paragraphs 14 and 68, Murez teaches several learning iterations to start the cycle of processing the data buffer for a deep convolutional neural network used to perform scene-specific regression network tasks, focusing on mapping features to annotations such as poses and labels. The Examiner interprets the several learning iterations to start the cycle of processing the data buffer as shuffling entries of the training buffer at a beginning of each epoch.); generating training batches, each training batch including random features and associated mapping poses (Figs. 7A and 7B, paragraphs 14 and 68, Murez teaches measuring the total loss for a batch of annotated and unannotated data obtained by encoding data from both domains into a joint latent space, involving sampling features such as the data in the domain X and Y and their annotations, ground truth, also known as mapped poses, via the machine learning algorithm. The Examiner interprets this as the generation of training batches that include random features and associated poses.); and training the scene-specific regression network using the training batches (Figs. 7A and 7B, paragraphs 14 and 68, Murez teaches backpropagation through encoders and decoders to minimize the loss until the loss is smaller than the threshold. The Examiner interprets this as training the scene-specific regression network using the training batches.). 18. Claims 3, 5, 10, 12, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Shotton et al. (US Patent Pub. No. US 2018/0285697 A1, hereafter referred to as Shotton) in view of Eder et al. (US Patent Pub. No.
2021/0279957 A1, hereafter referred to as Eder) in further view of Murez et al. (US Patent Pub No. 11620527 B2, hereafter referred to as Murez) furthermore in view of Sinha et al. (US Patent Pub No. 12322129 B2, hereafter referred to as Sinha) moreover in view of Liu et al. (US Patent Pub No. US 2022/0172386 A1, hereafter referred to as Liu). 19. Regarding Claim 3, Shotton in view of Eder in further view of Murez furthermore in view of Sinha teaches the method of Claim 1 that trains a relocalizer model using images to predict scene coordinates. Shotton in view of Eder in further view of Murez furthermore in view of Sinha does not teach wherein the relocalizer model includes more than one scene-specific regression network attached to an end of the scene-agnostic convolutional network. Liu is in the same field of art of relocalization to generate predicted coordinates based on an image. Further Liu teaches wherein the relocalizer model includes more than one scene-specific regression network attached to an end of the scene-agnostic convolutional network (Fig. 6 and paragraphs 72 and 83, Liu teaches a shared convolutional neural network followed by specific task-oriented layers from an anchor point image through feature extraction into a feature space, where data passes through a multi-layer perceptron, MLP, into a metric space to calculate feature similarity and errors. The last two layers of the neural network, which follow one another, are an MLP and after removing the last multilayer perceptron MLP, scene recognition and relocalization is performed in the SLAM system. The Examiner interprets this as a relocalizer model that includes specific regression networks attached to an end of the scene-agnostic convolutional network.).
Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by adding more than one scene-specific regression network to the end of the scene-agnostic convolutional network that is taught by Liu to achieve higher accuracy, better generalization, and increased robustness in the prediction of scene coordinates; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (Fig. 6 and paragraphs 72 and 83, Liu). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date. 20. Regarding Claim 5, Shotton in view of Eder in further view of Murez furthermore in view of Sinha teaches the method of Claim 1 that trains a relocalizer model using images to predict scene coordinates and accessing a set of training images of an environment (paragraph 8, Eder teaches obtaining a plurality of images or videos of a location to perform the disclosed localization method. The Examiner interprets this as accessing a set of training images of an environment.). Shotton in view of Eder in further view of Murez furthermore in view of Sinha does not teach applying the scene-agnostic convolutional network to the set of training images to extract features from the training images, constructing a fixed sized training buffer, and populating the fixed sized training buffer by copying the extracted features from the training images into the training buffer. Liu is in the same field of art of relocalization to generate predicted coordinates based on an image. Further Liu teaches applying the scene-agnostic convolutional network to the set of training images to extract features from the training images (Figs.
3-5, paragraphs 14-15, 61, and 65-67, Liu teaches training iteration of the intersection over union, IOU-based image depth feature extraction network, where the network is designed to extract global descriptors from images to identify similarities between them, using an unsupervised deep learning model with better generalization capabilities to simply distinguish between positive and negative samples, regardless of the specific scene. The Examiner interprets the training iteration of the IOU-based image depth feature extraction network by way of general capabilities as applying a scene-agnostic convolutional network to a set of images to extract features from the training images.), constructing a fixed sized training buffer (Fig. 4 and paragraph 79, Liu teaches a feature being randomly selected from the positive sample features and then being sent to a feature queue that has a fixed length, which is an example of a fixed sized buffer, to serve as a negative sample feature for the next iteration. The Examiner interprets the feature queue that has a fixed length as a fixed sized training buffer.); and populating the fixed sized training buffer by copying the extracted features from the training images into the training buffer (Fig. 4, paragraphs 75 and 79, Liu teaches a mechanism for maintaining a training buffer by using feature queues and a process of pushing in features; here the system populates the buffer by taking a feature extracted from a training image in a current iteration and pushing in that feature into the feature queue. The buffer is maintained by adding a new feature, pushing in, and removing the oldest feature, popping out, maintaining the fixed length of the buffer. The Examiner interprets the pushing in and popping out of features to maintain the fixed length of the feature queue as populating the fixed buffer by copying extracted features from the training images into the training buffer.).
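The push-in/pop-out feature-queue mechanism attributed to Liu above behaves like a fixed-capacity FIFO. A minimal sketch follows; the names, the capacity, and the use of random vectors as stand-in features are invented for illustration and are not Liu's implementation.

```python
from collections import deque

import numpy as np

BUFFER_LEN = 4  # invented capacity for the example

# A deque with maxlen behaves like the described fixed-length feature
# queue: appending a new feature ("pushing in") discards the oldest one
# ("popping out") once the buffer is full, so its size never grows.
feature_queue = deque(maxlen=BUFFER_LEN)

rng = np.random.default_rng(0)
negatives = []
for step in range(10):
    # Stand-in for a feature extracted from the current training image.
    new_feature = rng.normal(size=8)
    # Features pushed in earlier iterations serve as negative samples now.
    negatives = list(feature_queue)
    feature_queue.append(new_feature)

assert len(feature_queue) == BUFFER_LEN  # capacity never exceeded
```

The fixed capacity is the design point: memory use is bounded no matter how many training iterations run, at the cost of only ever holding the most recent features.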
Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by adding the scene-agnostic convolutional network to the set of training images and a fixed sized training buffer for features that is taught by Liu to achieve higher accuracy, better generalization, and increased robustness in the prediction of scene coordinates; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (Figs. 3-5 and paragraphs 14-15, 61, 65-67, 75, and 79, Liu). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date. 21. Regarding Claim 10, Shotton in view of Eder in further view of Murez furthermore in view of Sinha teaches the method of Claim 8 that trains a relocalizer model using images to predict scene coordinates. Shotton in view of Eder in further view of Murez furthermore in view of Sinha does not teach wherein the relocalizer model includes more than one scene-specific regression network attached to an end of the scene-agnostic convolutional network. Liu is in the same field of art of relocalization to generate predicted coordinates based on an image. Further Liu teaches wherein the relocalizer model includes more than one scene-specific regression network attached to an end of the scene-agnostic convolutional network (Fig. 6 and paragraphs 72 and 83, Liu teaches a shared convolutional neural network followed by specific task-oriented layers from an anchor point image through feature extraction into a feature space, where data passes through a multi-layer perceptron, MLP, into a metric space to calculate feature similarity and errors.
The last two layers of the neural network, which follow one another, are an MLP and after removing the last multilayer perceptron MLP, scene recognition and relocalization is performed in the SLAM system. The Examiner interprets this as a relocalizer model that includes specific regression networks attached to an end of the scene-agnostic convolutional network.). Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by adding more than one scene-specific regression network to the end of the scene-agnostic convolutional network that is taught by Liu to achieve higher accuracy, better generalization, and increased robustness in the prediction of scene coordinates; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (Fig. 6 and paragraphs 72 and 83, Liu). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date. 22. Regarding Claim 12, Shotton in view of Eder in further view of Murez furthermore in view of Sinha teaches the method of Claim 8 that trains a relocalizer model using images to predict scene coordinates and wherein the buffer generation stage comprises instructions that, when executed by a processor, cause the processor to: accessing a set of training images of an environment (paragraph 8, Eder teaches obtaining a plurality of images or videos of a location to perform the disclosed localization method. The Examiner interprets this as accessing a set of training images of an environment.).
Shotton in view of Eder in further view of Murez furthermore in view of Sinha does not teach applying the scene-agnostic convolutional network to the set of training images to extract features from the training images, constructing a fixed sized training buffer, and populating the fixed sized training buffer by copying the extracted features from the training images into the training buffer. Liu is in the same field of art of relocalization to generate predicted coordinates based on an image. Further Liu teaches applying the scene-agnostic convolutional network to the set of training images to extract features from the training images (Figs. 3-5, paragraphs 14-15, 61, and 65-67, Liu teaches training iteration of the intersection over union, IOU-based image depth feature extraction network, where the network is designed to extract global descriptors from images to identify similarities between them, using an unsupervised deep learning model with better generalization capabilities to simply distinguish between positive and negative samples, regardless of the specific scene. The Examiner interprets the training iteration of the IOU-based image depth feature extraction network by way of general capabilities as applying a scene-agnostic convolutional network to a set of images to extract features from the training images.), constructing a fixed sized training buffer (Fig. 4 and paragraph 79, Liu teaches a feature being randomly selected from the positive sample features and then being sent to a feature queue that has a fixed length, which is an example of a fixed sized buffer, to serve as a negative sample feature for the next iteration. The Examiner interprets the feature queue that has a fixed length as a fixed sized training buffer.); and populating the fixed sized training buffer by copying the extracted features from the training images into the training buffer (Fig.
4, paragraphs 75 and 79, Liu teaches a mechanism for maintaining a training buffer by using feature queues and a process of pushing in features; here the system populates the buffer by taking a feature extracted from a training image in a current iteration and pushing in that feature into the feature queue. The buffer is maintained by adding a new feature, pushing in, and removing the oldest feature, popping out, maintaining the fixed length of the buffer. The Examiner interprets the pushing in and popping out of features to maintain the fixed length of the feature queue as populating the fixed buffer by copying extracted features from the training images into the training buffer.). Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by adding the scene-agnostic convolutional network to the set of training images and a fixed sized training buffer for features that is taught by Liu to achieve higher accuracy, better generalization, and increased robustness in the prediction of scene coordinates; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (Figs. 3-5 and paragraphs 14-15, 61, 65-67, 75, and 79, Liu). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date. 23. Regarding Claim 17, Shotton in view of Eder in further view of Murez furthermore in view of Sinha teaches the method of Claim 15 that trains a relocalizer model using images to predict scene coordinates. Shotton in view of Eder in further view of Murez furthermore in view of Sinha does not teach wherein the relocalizer model includes more than one scene-specific regression network attached to an end of the scene-agnostic convolutional network.
Liu is in the same field of art of relocalization to generate predicted coordinates based on an image. Further Liu teaches wherein the relocalizer model includes more than one scene-specific regression network attached to an end of the scene-agnostic convolutional network (Fig. 6 and paragraphs 72 and 83, Liu teaches a shared convolutional neural network followed by specific task-oriented layers from an anchor point image through feature extraction into a feature space, where data passes through a multi-layer perceptron, MLP, into a metric space to calculate feature similarity and errors. The last two layers of the neural network, which follow one another, are an MLP and after removing the last multilayer perceptron MLP, scene recognition and relocalization is performed in the SLAM system. The Examiner interprets this as a relocalizer model that includes specific regression networks attached to an end of the scene-agnostic convolutional network.). Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by adding more than one scene-specific regression network to the end of the scene-agnostic convolutional network that is taught by Liu to achieve higher accuracy, better generalization, and increased robustness in the prediction of scene coordinates; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (Fig. 6 and paragraphs 72 and 83, Liu). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date. 24.
Regarding Claim 19, Shotton in view of Eder in further view of Murez furthermore in view of Sinha teaches the method of Claim 15 that trains a relocalizer model using images to predict scene coordinates and wherein the buffer generation stage comprises instructions that, when executed by a processor, cause the processor to: accessing a set of training images of an environment (paragraph 8, Eder teaches obtaining a plurality of images or videos of a location to perform the disclosed localization method. The Examiner interprets this as accessing a set of training images of an environment.). Shotton in view of Eder in further view of Murez furthermore in view of Sinha does not teach applying the scene-agnostic convolutional network to the set of training images to extract features from the training images, constructing a fixed sized training buffer, and populating the fixed sized training buffer by copying the extracted features from the training images into the training buffer. Liu is in the same field of art of relocalization to generate predicted coordinates based on an image. Further Liu teaches applying the scene-agnostic convolutional network to the set of training images to extract features from the training images (Figs. 3-5, paragraphs 14-15, 61, and 65-67, Liu teaches training iteration of the intersection over union, IOU-based image depth feature extraction network, where the network is designed to extract global descriptors from images to identify similarities between them, using an unsupervised deep learning model with better generalization capabilities to simply distinguish between positive and negative samples, regardless of the specific scene. The Examiner interprets the training iteration of the IOU-based image depth feature extraction network by way of general capabilities as applying a scene-agnostic convolutional network to a set of images to extract features from the training images.), constructing a fixed sized training buffer (Fig.
4 and paragraph 79, Liu teaches a feature being randomly selected from the positive sample features and then being sent to a feature queue that has a fixed length, which is an example of a fixed sized buffer, to serve as a negative sample feature for the next iteration. The Examiner interprets the feature queue that has a fixed length as a fixed sized training buffer.), and populating the fixed sized training buffer by copying the extracted features from the training images into the training buffer (Fig. 4, paragraphs 75 and 79, Liu teaches a mechanism for maintaining a training buffer by using feature queues and a process of pushing in features; here the system populates the buffer by taking a feature extracted from a training image in a current iteration and pushing in that feature into the feature queue. The buffer is maintained by adding a new feature, pushing in, and removing the oldest feature, popping out, maintaining the fixed length of the buffer. The Examiner interprets the pushing in and popping out of features to maintain the fixed length of the feature queue as populating the fixed buffer by copying extracted features from the training images into the training buffer.). Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shotton by adding the scene-agnostic convolutional network to the set of training images and a fixed sized training buffer for features that is taught by Liu to achieve higher accuracy, better generalization, and increased robustness in the prediction of scene coordinates; thus one of ordinary skill in the art would be motivated to combine the references since they are both in the same field of art of relocalization to generate predicted coordinates based on an image (Figs. 3-5 and paragraphs 14-15, 61, 65-67, 75, and 79, Liu).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Allowable Subject Matter

25. Claims 7 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

26. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOUIS NWUHA, whose telephone number is (571) 272-0219. The examiner can normally be reached Monday to Friday, 8 am to 5 pm.

27. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

28. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Oneal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

29. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LOUIS NWUHA/
Examiner, Art Unit 2674

/ONEAL R MISTRY/
Supervisory Patent Examiner, Art Unit 2674

Prosecution Timeline

Dec 15, 2023: Application Filed
Jan 20, 2026: Non-Final Rejection, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
