Prosecution Insights
Last updated: April 17, 2026
Application No. 18/365,140

Arcuate Imaging for Altered Reality Visualization

Status: Non-Final OA (§103)
Filed: Aug 03, 2023
Examiner: BARHAM, RYAN ALLEN
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: unknown
OA Round: 3 (Non-Final)

Grant Probability: 54% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 54% (7 granted / 13 resolved; -8.2% vs TC avg)
Interview Lift: +60.0% for resolved cases with interview (a strong effect)
Typical Timeline: 2y 8m average prosecution; 19 applications currently pending
Career History: 32 total applications across all art units
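For reference, the headline numbers reduce to simple ratios. A minimal sketch in Python: the granted/resolved totals come from the card above, while the lift definition (a percentage-point gap between interviewed and non-interviewed resolved cases) is an assumption about how the dashboard computes it.

```python
granted, resolved = 7, 13                      # from the examiner's career data above
career_allow_rate = granted / resolved
print(f"career allow rate: {career_allow_rate:.1%}")   # 53.8%, shown as 54%

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Assumed definition: allow-rate gap (in percentage points) between
    resolved cases with an interview and those without."""
    return rate_with - rate_without
```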

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 48.2% (+8.2% vs TC avg)
§102: 45.4% (+5.4% vs TC avg)
§112: 2.8% (-37.2% vs TC avg)

Tech Center averages are estimates (the black line in the original chart); based on career data from 13 resolved cases.
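Subtracting each reported delta from the examiner's rate recovers the implied Tech Center baseline, which checks the table's internal consistency. A quick sketch; the figures are from the table above, and the subtraction is the only assumption about how the deltas were computed.

```python
# (examiner rate %, delta vs TC avg %) per statute, from the table above
rows = {"§101": (2.1, -37.9), "§103": (48.2, 8.2),
        "§102": (45.4, 5.4), "§112": (2.8, -37.2)}

for statute, (rate, delta) in rows.items():
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")
# Every row recovers the same 40.0% baseline, consistent with a single
# estimated Tech Center average across statutes.
```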

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/22/2025 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 9-14, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Holzer (US 11095869 B2) in view of Zeiger (US 8314790 B1).

Regarding claim 1, Holzer teaches a method comprising:

receiving, at data processing hardware, a first image of a target individual captured by a first image capturing device at a first image capturing location, wherein the first image includes a first view (col. 20, lines 26-32: “In the present example embodiment, three cameras 412, 414, and 416 are positioned at location A 422, location B 424, and location X 426, respectively, in proximity to an object of interest 408.”);

receiving, at the data processing hardware, a second image of the target individual captured by a second image capturing device at a second image capturing location, wherein the second image includes a second view (col. 20, lines 26-32, as above);

receiving, at the data processing hardware, a third image of the target individual captured by a third image capturing device at a third image capturing location, wherein the third image includes a third view (col. 20, lines 26-32, as above);

generating, by the data processing hardware, a three-dimensional (3D) composite image from the first image, the second image, and the third image (col. 20, lines 22-26: “With reference to FIG. 4, shown is one example of multiple camera frames that can be fused together into a three-dimensional (3D) model to create an immersive experience in a Multi-View Interactive Digital Media Representation (MIDMR).”; col. 6, lines 48-54: “According to various embodiments, a multi-view interactive digital media (MIDM) is used herein to describe any one of various images (or other media data) used to represent a dynamic surrounding view of an object of interest and/or contextual background. Such dynamic surrounding view may be referred to herein as multi-view interactive digital media representation (MIDMR).”);

generating, by the data processing hardware, a set of reference markers corresponding to features represented by the composite image (col. 10, lines 43-50: “The general view may further include one or more selectable tags embedded within image frames of the general object MIDMR. These selectable tags may be visible or invisible to the user, and correspond to various features or components of the object of interest. Selection of a tag may trigger a specific view of the corresponding feature or component which displays a more detailed specific feature MIDMR.”);

determining, by the data processing hardware, distances between the set of reference markers (col. 47, lines 28-41: “In some embodiments, a transformed keypoint in the first frame is considered an inlier if the transformation T1 correctly transforms the keypoint to match the corresponding keypoint in the second frame. In some embodiments, this can be determined by computing the L2 distance between a transformed keypoint and its corresponding keypoint on the second image. For example, a transformed keypoint on a first frame N may be denoted as K̂ and its corresponding keypoint on the second frame N+1 may be denoted as K′. The L2 distance is computed as ∥K̂−K′∥, which corresponds to the distance between two 2D points. If the distance between any keypoint correspondence is within a predetermined threshold distance in any direction, then the correspondence will be determined to be an inlier.”);

obtaining, by the data processing hardware, a two-dimensional (2D) reference image based on the determined distances (col. 59, lines 38-42: “At 3003, the keyframes in an array 3100 may then be projected on a 2D graph as nodes 3155, such as 2D graph 3150 shown in FIG. 31B. For example, structured image may be represented on a 2D graph 3150 where each node 3155 corresponds to a keyframe.”), wherein the reference image represents a set of reference features associated with the features corresponding to the set of reference markers (col. 11, lines 38-43: “In particular, data such as, but not limited to two-dimensional (2D) images 104 can be used to generate MIDM. These 2D images can include color image data streams such as multiple image sequences, video data, etc., or multiple images in any of various formats for images, depending on the application.”);

generating, by the data processing hardware, a three-dimensional (3D) overlay of the reference image by aligning the set of reference features with locations of the features corresponding to the set of reference markers (col. 58, lines 31-42: “Images may then be captured manually and/or automatically. A coverage map may be displayed on top of the bounding sphere 3101 overlaid on the current view of the camera. The current location of the camera may be projected onto the surface of the sphere 3101 as a small dot (not shown). Every previously recorded image of the scene is also projected onto the coverage map and occupies a square mark corresponding to the range of views that it covers under predetermined sampling criterion. As shown in FIG. 31A, each captured camera location 3105 is marked by a square brush mark, including camera locations 3105-A, 3105-B, and 3105-C.”);

and rendering, by the data processing hardware, a graphical representation of the composite image with the three-dimensional (3D) overlay (col. 58, lines 42-45: “The goal is to control the movement of the camera and to ‘paint’ the surface of the sphere 3101 with marks 3105, such that enough data to generate a high quality rendering of the object is obtained.”),

wherein each of the first image capturing location, the second image capturing location, and the third image capturing location are distinct locations that collectively form a convex arc around the target individual (col. 8, lines 11-17: “In some embodiments, artificial images may be linearly interpolated based on images captured along a linear camera translation, such as a concave and/or convex arc. However, in some embodiments, images may be captured along a camera translation comprising multiple directions, such as a light field comprising multiple image captures from multiple camera locations.”).

Holzer fails to teach wherein the features corresponding to the set of reference markers are features on an external anatomical layer of the target individual, and the set of reference features represents an internal anatomical layer of the target individual. Zeiger teaches a set of reference markers corresponding to features on an external layer of a target individual, representing an internal anatomical layer of the target individual (col. 10, lines 45-50: “Any internal structure or portion of the 3D object 402 can be rendered by the 3D enhanced web browser 124 and viewed from any position using the searchable data of the 3D object 140, including, with reference to a human body, inner anatomical layers, biological systems, organs, tissues, cells, and molecular structures.”; see also Fig. 4A, which illustrates an external anatomical layer such as the outer surface, and Figs. 4B and 4C, which illustrate internal anatomical layers). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Zeiger's anatomical layering into Holzer's visualization method in order to visualize the anatomy of a human body using Holzer's method. Zeiger's method would allow for viewing of internal and external anatomical layers within the virtual environment of Holzer.
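The inlier test quoted from Holzer at col. 47 (transform a frame-N keypoint with T1, then threshold the L2 distance to its frame-N+1 correspondence) reduces to a few lines of array math. A minimal sketch, with the affine-transform assumption and the pixel threshold chosen purely for illustration:

```python
import numpy as np

def count_inliers(T1, keypoints_n, keypoints_n1, threshold=3.0):
    """Count correspondences where T1 maps a frame-N keypoint to within
    `threshold` pixels (L2 distance) of its frame-N+1 correspondence."""
    pts = np.hstack([keypoints_n, np.ones((len(keypoints_n), 1))])  # homogeneous coords
    k_hat = (T1 @ pts.T).T[:, :2]                         # transformed keypoints, K-hat
    dists = np.linalg.norm(k_hat - keypoints_n1, axis=1)  # ||K-hat - K'||
    return int(np.sum(dists < threshold))

# Example: an identity transform with small residuals keeps every correspondence.
kp_n = np.array([[10.0, 20.0], [30.0, 40.0]])
print(count_inliers(np.eye(3), kp_n, kp_n + 0.5))   # -> 2
```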
Regarding claim 2, Holzer and Zeiger teach the method of claim 1. Holzer further teaches wherein the first image capturing device includes the data processing hardware (col. 76, lines 43-48: “In some embodiments, when acting under the control of appropriate software or firmware, the processor 3901 is responsible for various processes, including processing inputs through various computational layers and algorithms, as described herein.”).

Regarding claim 3, Holzer and Zeiger teach the method of claim 1. Holzer further teaches wherein at least two of the first image capturing device, the second image capturing device, or the third image capturing device are the same image capturing device (col. 6, line 64 – col. 7, line 2: “The data used to generate a MIDMR can come from a variety of sources. In particular, data such as, but not limited to, two-dimensional (2D) images can be used to generate MIDMR. Such 2D images may be captured by a camera moving along a camera translation, which may or may not be uniform.”).

Regarding claim 4, Holzer and Zeiger teach the method of claim 1. Holzer further teaches wherein the first image capturing device, the second image capturing device, and the third image capturing device are the same image capturing device (col. 6, line 64 – col. 7, line 2, as cited in the claim 3 rejection).

Regarding claim 9, Holzer and Zeiger teach the method of claim 1. Holzer further teaches wherein the set of reference markers includes a plurality of reference markers (col. 39, lines 10-16: “A predetermined number of keypoints with the highest Harris score may then be selected. For example, 1,000 keypoints may be identified and selected on the first frame. The corresponding 1,000 keypoints on the second frame can then be identified using a Kanade-Lucas-Tomasi (KLT) feature tracker to track keypoints between the two image frames.” “Keypoints” are interpreted here as “reference markers.”).
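The Harris-plus-KLT pipeline quoted from Holzer maps onto standard OpenCV primitives. A sketch of that flow, not taken from the reference itself; the file names and parameter values are illustrative only:

```python
import cv2

frame1 = cv2.imread("frame_n.png", cv2.IMREAD_GRAYSCALE)         # illustrative paths
frame2 = cv2.imread("frame_n_plus_1.png", cv2.IMREAD_GRAYSCALE)

# Select up to 1,000 strong corners by Harris score (per the quoted passage).
p0 = cv2.goodFeaturesToTrack(
    frame1, maxCorners=1000, qualityLevel=0.01, minDistance=7,
    useHarrisDetector=True,
)

# Track the same keypoints into the next frame with a pyramidal KLT tracker.
p1, status, _err = cv2.calcOpticalFlowPyrLK(frame1, frame2, p0, None)

tracked = p1[status.flatten() == 1]   # correspondences that tracked successfully
print(f"tracked {len(tracked)} of {len(p0)} keypoints")
```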
Regarding claim 10, Holzer and Zeiger teach the method of claim 1. Holzer further teaches wherein each of the first image capturing device, the second image capturing device, and the third image capturing device are mobile relative to the target individual (col. 27, lines 46-49: “Depending on the application, the input device and output device can both be included in a mobile device, etc.”).

Regarding claim 11, Holzer teaches a system comprising: data processing hardware (col. 76, lines 43-48: “In some embodiments, when acting under the control of appropriate software or firmware, the processor 3901 is responsible for various processes, including processing inputs through various computational layers and algorithms, as described herein.”); and memory hardware in communication with the data processing hardware, the memory hardware storing instructions that when executed by the data processing hardware perform operations comprising the method of claim 1 (as above, in the claim 1 rejection).

Claims 12, 13, 14, 19, and 20 are functionally identical to claims 2, 3, 4, 9, and 10, respectively, differing only in that they depend on claim 11 rather than claim 1. As such, each is rejected on the same grounds as its counterpart claim.

Claims 6-8 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Holzer (US 11095869 B2) and Zeiger (US 8314790 B1) as applied to claims 1 and 11 above, and further in view of Shakib (US 10127722 B2).

Regarding claim 6, Holzer and Zeiger teach the method of claim 1. Shakib further teaches wherein: the 3D composite image represents an external anatomical layer of the target individual; and the reference image represents a 2D internal anatomical layer (col. 15, lines 4-8: “The representation of the 3D model can be a 2D image of the 3D model, a 3D reconstruction of the 3D model including 3D data, or a 3D reconstruction of the 3D model including a combination of 2D and 3D data.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Shakib's layering with Holzer's recording method, as both are in the same field of endeavor of 3D object modeling with arcuate imaging. Indeed, Holzer explicitly acknowledges this method while eschewing it in favor of a MIDMR-based approach (col. 9, lines 54-61: “Such MIDMR provides a three-dimensional view of the content without rendering and/or storing an actual three-dimensional model using polygon generation or texture mapping over a three-dimensional mesh and/or polygon model. The three-dimensional effect provided by the MIDMR is generated simply through stitching of actual two-dimensional images and/or portions thereof, and grouping of stereoscopic pairs of images.”; advantages of this approach are enumerated in the following paragraph). This demonstrates both that the layering method was well known to those of ordinary skill in the art and that it was expressly contemplated by Holzer.

Regarding claim 7, Holzer and Zeiger teach the method of claim 1. Shakib further teaches wherein: the 3D composite image represents an internal anatomical layer of the target individual; and the reference image represents a 2D external anatomical layer (col. 15, lines 4-8, as above in the claim 6 rejection). It would have been obvious to combine the references for the reasons given in the claim 6 rejection.

Regarding claim 8, Holzer and Zeiger teach the method of claim 1. Shakib further teaches wherein rendering the graphical representation includes overlaying the 3D overlay on the composite image (col. 21, lines 30-37: “the navigation component 408 and/or the transition component 412 can identify available 2D image data from some number of original 2D captures and the projection component 416 can project the 2D image data onto the 3D surface textures of a base 3D model/mesh, overlaying and replacing the base mesh texture with a strength based on correlation of the current view to that of the original scan frames.”). It would have been obvious to combine the references for the reasons given in the claim 6 rejection.

Claims 16, 17, and 18 are functionally identical to claims 6, 7, and 8, respectively, differing only in that they depend on claim 11 rather than claim 1, and are rejected on the same grounds as those claims.

Response to Arguments

Applicant's arguments filed 12/22/2025 have been fully considered, but they are not persuasive. With respect to Holzer (see applicant's remarks, pp. 7-8, “Representative Claim 1”), applicant has amended claim 1 to recite new features, supported by the Specification: "determining, by the data processing hardware, distances between the set of reference markers; [and] obtaining, by the data processing hardware, a two-dimensional (2D) reference image based on the determined distances." Applicant alleges that the initially cited portions of Holzer and Zeiger fail to teach this aspect of the claimed invention. The examiner has reexamined independent claim 1 and has included additional citations to demonstrate that Holzer teaches these new elements; see the detailed rejection above. Independent claim 11 has been amended by the applicant in a similar manner to independent claim 1, and is therefore rejected on a similar basis.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN A BARHAM, whose telephone number is (571) 272-4338. The examiner can normally be reached Mon-Fri, 8:30am-5pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN ALLEN BARHAM/
Examiner, Art Unit 2613

/XIAO M WU/
Supervisory Patent Examiner, Art Unit 2613
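Claim 1's overlay step, aligning a set of reference features to the 3D locations of the marker features, is at bottom a point-set registration problem. Neither cited reference is quoted as using it, but a minimal sketch of one standard solution (the Kabsch algorithm for a least-squares rigid fit) shows the shape of the computation:

```python
import numpy as np

def rigid_align(ref_features, marker_locations):
    """Least-squares rigid transform (R, t) mapping ref_features onto
    marker_locations; both are (N, 3) arrays of corresponding points."""
    P = np.asarray(ref_features, dtype=float)
    Q = np.asarray(marker_locations, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)     # centroids
    H = (P - cp).T @ (Q - cq)                   # cross-covariance matrix
    U, _S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Usage: place each reference feature at its aligned position in the composite.
# R, t = rigid_align(features, markers); aligned = (R @ features.T).T + t
```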

Prosecution Timeline

Aug 03, 2023: Application Filed
Jun 11, 2025: Non-Final Rejection (§103)
Aug 12, 2025: Examiner Interview Summary
Sep 17, 2025: Response Filed
Oct 31, 2025: Final Rejection (§103)
Dec 17, 2025: Applicant Interview (Telephonic)
Dec 17, 2025: Examiner Interview Summary
Dec 22, 2025: Response after Non-Final Action
Jan 15, 2026: Request for Continued Examination
Jan 26, 2026: Response after Non-Final Action
Feb 04, 2026: Non-Final Rejection (§103)
Apr 15, 2026: Examiner Interview Summary
Apr 15, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12564345: MEDICAL APPARATUS, AND IMAGE GENERATION METHOD FOR VISUALIZING TEMPORAL TRENDS OF BIOMAGNETIC DATA ON AN ORGAN MODEL
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12548109: Preserving Tumor Volumes for Unsupervised Medical Image Registration
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12530836: OBJECT TRANSITION BETWEEN DEVICE-WORLD-LOCKED AND PHYSICAL-WORLD-LOCKED
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 54%
With Interview: 99% (+60.0%)
Median Time to Grant: 2y 8m
PTA Risk: High

Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
