Prosecution Insights
Last updated: April 19, 2026
Application No. 18/276,020

PRESENTING VIRTUAL REPRESENTATION OF REAL SPACE USING SPATIAL TRANSFORMATION

Non-Final OA: §102, §103
Filed
Aug 04, 2023
Examiner
BADER, ROBERT N.
Art Unit
2611
Tech Center
2600 — Communications
Assignee
Realsee (Beijing) Technology Co. Ltd.
OA Round
1 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Grant Probability With Interview: 70%

Examiner Intelligence

Career Allow Rate: 44% of resolved cases (173 granted / 393 resolved; -18.0% vs TC avg)
Interview Lift: +26.4% for resolved cases with interview (strong lift)
Typical Timeline: 3y 1m avg prosecution (32 currently pending)
Career History: 425 total applications across all art units

Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
TC averages are estimates. Based on career data from 393 resolved cases.

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Claims 4-6 and 13-16 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to nonelected Groups II and III, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 1/5/26.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, and 11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by “A Compact RGB-D Map Representation dedicated to Autonomous Navigation” by Tawsif Gokhool (hereinafter Gokhool). 
Regarding claim 1, the limitations “A method for presenting a virtual representation of a real space, comprising: obtaining a plurality of color images and a plurality of depth images respectively corresponding to the plurality of color images, wherein the plurality of color images correspond to respective partial scenes of the real space that are observed at a plurality of observation points in the real space, and the plurality of depth images respectively contain depth information of the respective partial scenes; for any observation point, superimposing a color image in the plurality of color images that corresponds to the observation point and a depth image in the plurality of depth images that corresponds to the color image to obtain a superimposed image; respectively mapping respective superimposed images corresponding to the plurality of observation points to a plurality of spheres … such that any of the spheres corresponds to a respective one of the plurality of observation points and comprises a plurality of vertices, any vertex having a respective color information and a respective depth information” are taught by Gokhool (Gokhool, e.g. sections 3-5, describes a system for 3D mapping using spherical RGBD images captured at a plurality of locations using a mobile robot. Gokhool’s system includes steps for capturing the plurality of spherical RGBD images by the robot at locations along a trajectory, e.g. sections 4.1, 5.1, 5.2, where each RGBD image comprises a color image I and depth image D, and is used to define a spherical point cloud comprising vertices having the corresponding color and depth for each pixel of the image, e.g. 
sections 3.7, 4.2.2, 4.3, 4.3.1, corresponding to the claimed obtaining a plurality of color/depth images corresponding to partial scenes of a real space at a plurality of observation points, superimposing the color image and depth image for any/each observation point, and mapping the superimposed images to a plurality of spheres comprising a plurality of vertices having respective color and depth information.) The limitations (addressed out of order) “respectively mapping respective superimposed images corresponding to the plurality of observation points to a plurality of spheres in a virtual space”, “for any vertex of the spheres, performing shading on the vertex based on the color information of the vertex, to obtain, for presentation, respective virtual representations, in the virtual space, of the respective partial scenes of the real space” are taught by Gokhool (Gokhool, e.g. sections 4.1, 4.2.2, 4.7.3, 4.7.3.1, 5.2, teaches that pose of each spherical RGBD image is determined in a common coordinate system such that the point cloud representation, i.e. the combined vertices projected from the keyframe RGBD images, can be rendered as an image for display as in figures 4.17, 4.22, 5.5, 5.8, 5.16, 5.18, i.e. the keyframe spherical RGBD images are mapped in a virtual space, the common coordinate system, where the resulting point cloud vertices are shaded as part of rendering an image for display, corresponding to the claimed mapping respective superimposed images in a virtual space, and performing shading of vertices based on color information in order to obtain a presentation of the respective partial scenes of the real space.) 
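The mapping step the rejection describes, where each superimposed color/depth pixel becomes a sphere vertex carrying that pixel's color and depth, can be sketched as follows. This is an illustrative reading of the claim language only, not code from Gokhool or the application, and all names are invented:

```python
import numpy as np

def rgbd_to_sphere(color, depth):
    """Map an equirectangular color/depth pair onto unit-sphere vertices.

    Each pixel (row, col) becomes a vertex direction on the unit sphere,
    carrying that pixel's color and depth, as in the claimed mapping of
    superimposed images to spheres with per-vertex color and depth.
    """
    h, w = depth.shape
    # Spherical angles for every pixel center.
    theta = (np.arange(h) + 0.5) / h * np.pi          # polar angle
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi      # azimuth
    phi, theta = np.meshgrid(phi, theta)              # both (h, w)
    # Unit-sphere vertex positions, one per pixel.
    verts = np.stack([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)], axis=-1)
    # Each vertex keeps its color and depth information.
    return verts, color.reshape(h * w, -1), depth.reshape(h * w)
```

One such sphere per observation point, each sphere's vertices holding the colors and depths of that point's partial scene, gives the claimed plurality of spheres.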
The limitations “performing spatial transformation on the vertices of the plurality of spheres in the virtual space based on a relative spatial relationship between the plurality of observation points in the real space; for any vertex of the spheres, performing spatial editing on the vertex based on the depth information of the vertex” are taught by Gokhool (Gokhool, e.g. sections 4.2-4.6, 5.3.4, teaches that the visual odometry involves estimating and updating/transforming the poses of the keyframe spherical RGBD images in the virtual space in order to accurately determine the relative spatial relationship between the keyframes, and further, e.g. sections 5.2-5.3.3, 5.4-5.4.10, teaches performing depth map fusion based on inverse warping of nearby spherical RGBD images in order to improve the resulting combined point cloud, wherein the depth map fusion involves modifying the vertices of a spherical RGBD image based on the corresponding depth image value, e.g. equations 5.2-5.4, wherein both the visual odometry pose updating and inverse warping operations correspond to the claimed performing spatial transformation on the vertices of the plurality of spheres in the virtual space based on a relative spatial relationship between the plurality of observation points in the real space, and the depth map fusion corresponds to the claimed spatial editing on vertices based on their depth information.) Regarding claim 2, the limitation “wherein the performing spatial editing on the vertex based on the depth information of the vertex comprises: moving coordinates of the vertex by an offset distance along a normal of the vertex, wherein the offset distance corresponds to the depth information of the vertex” is taught by Gokhool (As noted in the claim 1 rejection above, Gokhool, e.g. 
sections 5.2-5.3.3, 5.4-5.4.10, teaches performing depth map fusion based on inverse warping of nearby spherical RGBD images in order to improve the resulting combined point cloud, wherein the depth map fusion involves modifying the vertices of a spherical RGBD image based on the corresponding depth image value, e.g. equations 5.2-5.4. Gokhool’s modified vertex depth value corresponds to the claimed moving coordinates of the vertex by an offset distance along a normal of the vertex, i.e. Gokhool, e.g. section 3.7.2, figure 3.1.6, page 82, paragraph 3, teaches that the spherical RGBD images comprise a spherical grid of uniformly sampled vertices having color and depth values, where the depth value corresponds to an offset distance along the normal of the vertex as shown in figure 3.1.6, where the normal of the vertex shown with the arrow from the center of the sphere to the vertex position on the unit sphere p, has an associated depth value/offset distance along the arrow/normal corresponding to the 3D location of the point q. Further, Gokhool’s determining of modified depth values for the vertices using the weighted average filtering of equations 5.2-5.4 corresponds to determining an offset distance corresponding to the depth information of the vertex, i.e. the modified vertex depth determined by the weighted average filtering is a modified offset/depth value along the normal of the vertex as in figure 3.16, where the modified offset/depth value is a function of the unmodified offset/depth value, i.e. as claimed, the modified offset/depth value corresponds to the vertex’s depth value.) Regarding claim 11, the limitation “wherein the plurality of spheres have a same radius” is taught by Gokhool (Gokhool, e.g. 
section 3.7.2, figure 3.1.6, page 82, paragraph 3, teaches that the spherical RGBD images comprise a spherical grid of uniformly sampled vertices having color and depth values, where the depth value corresponds to an offset distance along the normal of the vertex as shown in figure 3.1.6, where the normal of the vertex shown with the arrow from the center of the sphere to the vertex position on the unit sphere p, has an associated depth value/offset distance along the arrow/normal corresponding to the 3D location of the point q. Gokhool does not teach or otherwise suggest changing the radius of the sphere in the spherical RGBD images, i.e. all of the spheres use the same unit sphere radius of 1.)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over “A Compact RGB-D Map Representation dedicated to Autonomous Navigation” by Tawsif Gokhool (hereinafter Gokhool) as applied to claim 1 above, and further in view of U.S. Patent Application 2020/0027261 A1 (hereinafter Briggs). 
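Stepping back to the claim 2 discussion above: on a sphere centered at the observation point, a vertex's outward normal is its own unit position vector p, so offsetting the vertex along its normal by its depth reduces to q = depth · p. A minimal sketch of that reading (illustrative only; names invented):

```python
import numpy as np

def apply_depth_offset(verts, depth):
    """Move each unit-sphere vertex along its normal by its depth.

    For a unit sphere centered at the observation point, the outward
    normal at a vertex equals its position vector p, so the offset
    vertex is simply q = depth * p (compare the point q at the end
    of the arrow in Gokhool's figure).
    """
    return verts * depth[..., None]
```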
Regarding claim 7, the limitation “wherein the performing shading on the vertex based on the color information of the vertex comprises: inputting the color information of the vertex and the coordinates of the vertex to a fragment shader to perform the shading” is implicitly taught by Gokhool (As noted in the claim 1 rejection above, Gokhool, e.g. sections 4.1, 4.2.2, 4.7.3, 4.7.3.1, 5.2, teaches that pose of each spherical RGBD image is determined in a common coordinate system such that the point cloud representation, i.e. the combined vertices projected from the keyframe RGBD images, can be rendered as an image for display as in figures 4.17, 4.22, 5.5, 5.8, 5.16, 5.18. That is, Gokhool’s rendered images comprise pixels having colors generated based on shading the combined vertices from the RGBD images, which is necessarily dependent on the vertex color and coordinates. Further, while the broadest reasonable interpretation of a fragment shader would arguably include any rendering program generating pixel, i.e. fragment, color/shading values, in the interest of compact prosecution, because Gokhool does not describe details of rendering images of the combined vertices/point cloud representation, Briggs is cited for teaching rendering from spherical RGBD images can be performed using a fragment shader of a GPU.) However, this limitation is taught by Briggs (Briggs, e.g. abstract, paragraphs 21-80, describes a system for rendering 360 degree panoramic content comprising color and depth data using vertex and fragment shaders. Briggs, e.g. paragraphs 38-41, teaches that the 360 depth content can be represented using a sphere centered on the origin of the 360 depth content, where the vertices of the sphere are modified according to the depth using the vertex shader, e.g. paragraphs 42, 48-54, followed by rendering fragment/pixel values using the fragment shader, e.g. 
paragraphs 45-47, 59-79, where the fragment shader operates in part by determining intersection points and their associated colors using the modified depth map/sphere vertices, e.g. paragraphs 54, 64-77. That is, Briggs teaches that rendering images from spherical RGBD images can be performed using vertex and fragment shader programs operating on a GPU, where one of ordinary skill in the art of computer graphics processing would know that GPU shader implementations of image rendering are often more computationally efficient than general CPU/software based rendering approaches.) Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Gokhool’s spherical RGBD mapping system using GPU vertex and fragment shader programs as taught by Briggs because in addition to Briggs teaching the use of GPU vertex and fragment shader programs for the same purpose of rendering images from spherical RGBD images, one of ordinary skill in the art of computer graphics processing would have known that it is often advantageous to implement image rendering using GPU vertex and fragment shader programs, i.e. that GPU shader implementations of image rendering are often more computationally efficient than general CPU/software based rendering approaches. In Gokhool’s modified system, the images of the combined vertices/point cloud representation would be rendered using GPU vertex and fragment shaders, i.e. the colors and positions of the vertices/points of the combined representation would be used as input to the GPU fragment shader program determining fragment/pixel values of the rendered image by evaluating which vertices/points are along the viewing ray for the corresponding output image pixel as taught by Briggs. Claims 8, 9, and 12 are rejected under 35 U.S.C. 
103 as being unpatentable over “A Compact RGB-D Map Representation dedicated to Autonomous Navigation” by Tawsif Gokhool (hereinafter Gokhool) as applied to claim 1 above, and further in view of U.S. Patent Application Publication 2019/0026958 A1 (hereinafter Gausebeck). Regarding claim 8, the limitation “presenting, in a view, a first virtual representation of the respective virtual representations, wherein the first virtual representation corresponds to a current observation point in the plurality of observation points” is implicitly taught by Gokhool (As noted in the claim 1 rejection above, Gokhool, e.g. sections 4.1, 4.2.2, 4.7.3, 4.7.3.1, 5.2, teaches that pose of each spherical RGBD image is determined in a common coordinate system such that the point cloud representation, i.e. the combined vertices projected from the keyframe RGBD images, can be rendered as an image for display as in figures 4.17, 4.22, 5.5, 5.8, 5.16, 5.18. While one of ordinary skill in the art would have found it implicit that Gokhool’s combined vertices/point cloud representation could be rendered from any selected viewpoint, i.e. including the keyframe/observation points, Gokhool does not discuss navigation controls for viewing the combined vertices/point cloud representation from other viewpoints than the exemplary overhead viewpoints of the figures. Therefore in the interest of compact prosecution Gausebeck is cited for explicitly teaching viewpoint navigation controls in an analogous system.) However, this limitation is taught by Gausebeck (Gausebeck, e.g. abstract, paragraphs 30-255, describes a system for combining point cloud data from spherical RGBD images of an environment, e.g. paragraphs 32, 98-126, 135, 182, and providing navigation controls, e.g. paragraphs 86-94, including a walking mode of navigation which consists of moving between waypoints corresponding to capture locations, e.g. paragraphs 88, 90. 
That is, Gausebeck teaches in addition to an overhead floorplan view as shown by Gokhool, walking mode, dollhouse/orbit mode, and a feature view mode can be provided as alternative viewpoint navigation modes for an analogous system rendering images of 3D maps combining data from a plurality of spherical RGBD images.) Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gokhool’s spherical RGBD mapping system to include Gausebeck’s additional viewpoint navigation modes for controlling the viewpoint for rendering images of Gokhool’s combined vertices/point cloud representation because Gausebeck teaches the additional viewpoint navigation modes are provided in addition to the overhead floorplan view already supported by Gokhool’s system, and one of ordinary skill in the art would recognize the advantage of additional navigation modes, i.e. depending on the user’s intent, different navigation modes may more easily achieve desired viewpoints. In Gokhool’s modified system using Gausebeck’s walking mode, the viewpoint would move between waypoints defined based on the capture locations of the spherical RGBD images, i.e. the keyframe locations, corresponding to the claim requirement to present in a first view, a first representation from a first/current observation point of the plurality of observation points. Similarly, with respect to depending claim 9, when the user causes movement to the next waypoint, the view would move to the next waypoint and render the corresponding image for display, corresponding to the claimed refreshing the view to present a second virtual representation corresponding to another observation point in response to detecting a user operation indicating a movement from the current observation point to another observation point. 
Regarding claim 9, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 8 above. Regarding claim 12, the limitation “wherein the obtaining the plurality of color images and the plurality of depth images respectively corresponding to the plurality of color images comprises: receiving the plurality of color images and the plurality of depth images from a server” is not explicitly taught by Gokhool (While Gokhool, e.g. section 4.7, describes using several pre-generated spherical RGBD image datasets for testing the different disclosed techniques, Gokhool does not explicitly indicate how the datasets were transferred from the camera device(s) to the computing device performing the processing to generate the combined vertices/point cloud representation(s) from the datasets. While one of ordinary skill in the art would understand that spherical RGBD image datasets can be received from a server, i.e. it is conventional to use well-known datasets available on the internet for testing 2D and 3D image processing algorithms, e.g. the Lena image, the Stanford bunny, the Utah teapot, etc., in the interest of compact prosecution Gausebeck is cited for teaching, in an analogous system, that one of ordinary skill in the art would recognize that separate devices may capture image data and perform image processing, with a server computer being used to facilitate data transfer.) However, this limitation is taught by Gausebeck (Gausebeck, e.g. abstract, paragraphs 30-255, describes a system for combining point cloud data from spherical RGBD images of an environment, e.g. paragraphs 32, 98-126, 135, 182. Further, Gausebeck, e.g. 
paragraphs 95, 172, teaches that there may be separate capture, client, and server devices, each performing separate parts of the process including capturing image data, receiving/transferring image data, processing image data into reconstructed models, receiving/transferring reconstructed model data, and displaying reconstructed model data, i.e. one of ordinary skill in the art would have recognized that separate devices may capture image data and perform image processing, with a server computer being used to facilitate data transfer.) Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gokhool’s spherical RGBD mapping system to receive spherical RGBD image datasets from a server storing spherical RGBD image datasets as taught by Gausebeck, because one of ordinary skill in the art would understand that spherical RGBD image datasets can be received from a server, i.e. it is conventional to use well-known datasets available on the internet for testing 2D and 3D image processing algorithms, e.g. the Lena image, the Stanford bunny, the Utah teapot, etc., and because Gausebeck teaches that a variety of configurations of separate capture, server, and client devices can be used for implementing an analogous mapping system. In the modified system the server would store and provide datasets to different client systems for use in systems which process spherical RGBD image datasets, including Gokhool’s system. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over “A Compact RGB-D Map Representation dedicated to Autonomous Navigation” by Tawsif Gokhool (hereinafter Gokhool) as applied to claim 1 above, and further in view of U.S. Patent Application Publication 2015/0358613 A1 (hereinafter Sandrew). 
Regarding claim 10, the limitation “wherein the performing the spatial editing on the vertex based on the depth information of the vertex is performed before the spatial transformation on the vertices of the plurality of spheres in the virtual space” is not explicitly taught by Gokhool (As discussed in the claim 1 rejection above, Gokhool, e.g. sections 4.2-4.6, 5.3.4, teaches that the visual odometry involves estimating and updating/transforming the poses of the keyframe spherical RGBD images in the virtual space in order to accurately determine the relative spatial relationship between the keyframes, and further, e.g. sections 5.2-5.3.3, 5.4-5.4.10, teaches performing depth map fusion based on inverse warping of nearby spherical RGBD images in order to improve the resulting combined point cloud, wherein both the visual odometry pose updating and inverse warping operations correspond to the claimed performing spatial transformation and the depth map fusion corresponds to the claimed spatial editing. That is, Gokhool’s spatial editing is performed after spatial transformation, and Gokhool does not explicitly teach performing other spatial editing on the spherical RGBD images prior to performing the visual odometry.) However, this limitation is taught by Sandrew (Sandrew, e.g. paragraphs 69-122, describes a system for editing 3D models of spherical RGBD images, e.g. paragraphs 70-78 describe generating a spherical surface having image and depth values, where the depth values can be edited by a user using an interface, e.g. paragraphs 97-107. Sandrew, e.g. figures 20, 22, paragraphs 104-107, teaches that objects in the spherical image models can have their associated depth value(s) increased or decreased based on user input, i.e. performing spatial editing on vertices based on their depth information. 
That is, Sandrew teaches that a user interface can be provided to allow user(s) to edit the depths of vertices in a spherical RGBD image being used to construct a 3D virtual model.) Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gokhool’s spherical RGBD mapping system to include Sandrew’s spherical RGBD image editing interface in order to allow users to manually edit/correct individual spherical RGBD images in a spherical RGBD image dataset as taught by Sandrew. In Gokhool’s modified system, Sandrew’s spherical RGBD image editing interface could be used to edit individual spherical RGBD images at any point during the process, i.e. either before or after Gokhool’s visual odometry and inverse warping operations, such that as claimed, spatial editing can be performed prior to the spatial transformation, in addition to performing Gokhool’s depth map fusion/spatial editing step after spatial transformation. Claims 29 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over “A Compact RGB-D Map Representation dedicated to Autonomous Navigation” by Tawsif Gokhool (hereinafter Gokhool). Regarding claims 29 and 30, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above, except for the limitations requiring implementation using one or more processors executing one or more programs stored in one or more storage devices or non-transitory media, which are implicitly taught by Gokhool, i.e. although Gokhool does not describe the computing system used to implement the processing of the system, one of ordinary skill in the art would have found it implicit, if not inherent, that Gokhool’s system would be implemented using programmable processor(s) executing program(s) stored on conventional storage media, i.e. 
non-transitory media, due to the processing required by Gokhool’s system, and Gokhool does not suggest or disclose the use of an unconventional computing platform. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Gokhool’s spherical RGBD mapping system using conventional computing components, i.e. as noted above, one of ordinary skill in the art would have found it implicit, if not inherent, that Gokhool’s system would be implemented using programmable processor(s) executing program(s) stored on conventional storage media, i.e. non-transitory media.

Allowable Subject Matter

Claim 3 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: Depending claim 3 recites that the spatial editing further comprises obtaining the offset distance by normalizing an obtained depth value of the vertex and multiplying the normalized depth value by a radius of the sphere where the vertex is located. As discussed in the rejections of claims 2 and 11 above, Gokhool, e.g. section 3.7.2, figure 3.1.6, page 82, paragraph 3, teaches that the spherical RGBD images comprise a spherical grid of uniformly sampled vertices having color and depth values, where the depth value corresponds to an offset distance along the normal of the vertex as shown in figure 3.1.6, where the normal of the vertex shown with the arrow from the center of the sphere to the vertex position on the unit sphere p, has an associated depth value/offset distance along the arrow/normal corresponding to the 3D location of the point q. Gokhool does not teach or otherwise suggest changing the radius of the sphere in the spherical RGBD images, i.e. 
all of the spheres use the same unit sphere radius of 1, such that Gokhool’s depth map fusion/spatial editing does not include multiplying the radius of the sphere by a normalized depth value of the vertex to obtain the offset distance. Furthermore, the cited prior art references do not teach or otherwise suggest determining a spherical RGBD image vertex offset distance by multiplying the radius of the sphere by a normalized depth value of the vertex as claimed, such that the scope of depending claim 3, when considered as a whole along with the limitations of parent claims 1 and 2, is not anticipated by, or otherwise obvious in view of, the cited prior art references.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT BADER whose telephone number is (571)270-3335. The examiner can normally be reached 11-7 m-f. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached at 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ROBERT BADER/Primary Examiner, Art Unit 2611
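The allowable feature of claim 3, obtaining the offset distance by normalizing the vertex's depth value and multiplying by the radius of its sphere, reduces to a small formula. The sketch below assumes a simple divide-by-maximum normalization, which is an invented choice; the application may define normalization differently:

```python
def offset_distance(depth, max_depth, radius):
    """Claim 3's offset: normalized depth value scaled by sphere radius.

    The depth / max_depth normalization is illustrative only; the claim
    requires some normalization of the depth value, then multiplication
    by the radius of the sphere where the vertex is located.
    """
    normalized = depth / max_depth   # in [0, 1] for depth <= max_depth
    return normalized * radius
```

This is the distinction the examiner draws: Gokhool's unit spheres never scale the offset by a (variable) sphere radius.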

Prosecution Timeline

Aug 04, 2023
Application Filed
Mar 06, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586334
SYSTEMS AND METHODS FOR RECONSTRUCTING A THREE-DIMENSIONAL OBJECT FROM AN IMAGE
2y 5m to grant • Granted Mar 24, 2026
Patent 12586335
SYSTEMS AND METHODS FOR RECONSTRUCTING A THREE-DIMENSIONAL OBJECT FROM AN IMAGE
2y 5m to grant • Granted Mar 24, 2026
Patent 12541916
METHOD FOR ASSESSING THE PHYSICALLY BASED SIMULATION QUALITY OF A GLAZED OBJECT
2y 5m to grant • Granted Feb 03, 2026
Patent 12536728
SHADOW MAP BASED LATE STAGE REPROJECTION
2y 5m to grant • Granted Jan 27, 2026
Patent 12505615
GENERATING THREE-DIMENSIONAL MODELS USING MACHINE LEARNING MODELS
2y 5m to grant • Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 44% (70% with interview, a +26.4% lift)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 393 resolved cases by this examiner. Grant probability derived from career allow rate.
