Prosecution Insights
Last updated: April 19, 2026

Application No. 18/669,408
Alignment of Medical Images in Augmented Reality Displays

Current status: Non-Final Office Action (§112 and nonstatutory double patenting)

Filed: May 20, 2024
Examiner: WANG, YUEHAN
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Novarad Corporation
OA Round: 1 (Non-Final)

Outlook: Favorable
- Grant probability: 83% (96% with an examiner interview)
- Expected OA rounds: 1-2
- Estimated time to grant: 2y 7m
Examiner Intelligence

- Career allowance rate: 83% (404 granted / 485 resolved), +21.3% vs. Tech Center average (above average)
- Interview lift: +12.9% (moderate), based on resolved cases with an interview
- Typical timeline: 2y 7m average prosecution; 47 applications currently pending
- Career history: 532 total applications across all art units
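The headline figures above reduce to simple arithmetic over the examiner's resolved-case counts. A minimal sketch for readers who want to reproduce them (the function name is illustrative, not part of any analytics product):

```python
# Illustrative recomputation of the dashboard's headline metrics.
# Counts are taken from the figures quoted above; the helper is hypothetical.

def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

rate = allowance_rate(granted=404, resolved=485)
print(f"Career allowance rate: {rate:.1f}%")  # ~83.3%, displayed as 83%

# The lift vs. the Tech Center is reported as a simple difference in rates,
# so the implied Tech Center average follows by subtraction.
tc_average = rate - 21.3
print(f"Implied Tech Center average: {tc_average:.1f}%")
```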

Statute-Specific Performance

Per-statute rates vs. the Tech Center average estimate, based on career data from 485 resolved cases:

- §101: 4.3% (-35.7% vs. TC avg)
- §103: 69.6% (+29.6% vs. TC avg)
- §102: 8.3% (-31.7% vs. TC avg)
- §112: 6.6% (-33.4% vs. TC avg)

Office Action

Grounds: §112(b) and nonstatutory double patenting

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 5, 8 and 15 are objected to because of the following informalities:
- Claim 5 recites "The method of claim 4". It should read "The method of claim 1".
- Claim 8 recites "The method as in claim 3". It should read "The method as in claim 1".
- Claim 15 recites "The method as in claim 13". It should read "The method as in claim 1".

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5, 6 and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 5 recites the limitation "the defined length of the tubing" in line 1. There is insufficient antecedent basis for this limitation in the claim. Claim 6 is rejected under 112(b) by virtue of its dependency. Claim 15 recites the limitation "filling the plastic tubing" in line 1. There is insufficient antecedent basis for this limitation in the claim.
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms.
The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1, 2, 5-8, 15 and 18-29 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-24 of U.S. Patent No. 11,237,627. Although the claims at issue are not identical, they are not patentably distinct from each other because they are broader in scope as compared below:

Present claim 1 (Original): A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising: identifying one or more optical codes associated with the body of the person using an optical sensor of the AR headset, wherein an optical code for a body of the person has an image visible marker that includes an image visible contrast medium in tubing or an enclosed container in a fixed position relative to the optical code; identifying the image visible marker with the image visible contrast medium in the tubing in the image data set acquired using a medical imaging device; and aligning the image data set with the body of the person using one or more optical codes on the body of the person as viewed through the AR headset and using the fixed position of the image visible marker with the image visible contrast medium in the tubing with respect to the optical code as referenced to a representation of the image visible marker with the image visible contrast medium in the tubing in the image data set.

'627 claim 1: A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising: identifying one or more optical codes associated with the body of the person using an optical sensor of the AR headset, wherein an optical code for a body of the person has an image visible marker that includes an image visible contrast medium in tubing or an enclosed container in a fixed position relative to the optical code; identifying the image visible marker with the image visible contrast medium in the tubing in the image data set acquired using a medical imaging device; and aligning the image data set with the body of the person using one or more optical codes on the body of the person as viewed through the AR headset and using the fixed position of the image visible marker with the image visible contrast medium in the tubing with respect to the optical code as referenced to a representation of the image visible marker with the image visible contrast medium in the tubing in the image data set.

Present claim 2 (Currently Amended): The method of claim 1, further wherein the image visible contrast medium is an MRI (magnetic resonance imaging) visible contrast medium or a paramagnetic fluid.

'627 claim 2: The method of claim 1, further wherein the image visible contrast medium is an MRI (magnetic resonance imaging) visible contrast medium.
'627 claim 3: The method of claim 2, wherein the image visible contrast medium is a paramagnetic fluid.

Present claim 5 (Original): The method of claim 4, wherein the defined length of the tubing identifies an optical code with a defined 2D data encoding that is visible using the optical sensor of the AR headset.

'627 claim 7: The method of claim 6, wherein the defined length of the tubing identifies an optical code with a defined 2D data encoding that is visible using the optical sensor of the AR headset.

Present claim 6 (Original): The method of claim 5 further comprising associating the defined length of the tubing with an optical code visible using the optical sensor of the AR headset by linking the defined length of tubing with the optical code in a data store.

'627 claim 8: The method of claim 7 further comprising associating the defined length of the tubing with an optical code visible using the optical sensor of the AR headset by linking the defined length of tubing with the optical code in a data store.

Present claim 7 (Original): The method as in claim 1, wherein the tubing is a shaped tubing in a shape of at least one of: a line, a cross formation, concentric circles, or a grid formation.

'627 claim 9: The method as in claim 1, wherein the tubing is a shaped tubing in a shape of at least one of: a line, a cross formation, concentric circles, or a grid formation.

Present claim 8 (Original): The method as in claim 3, wherein the tubing has two dimensional (2D) cross-sectional shape that is at least one of: round, triangular, square, rectangular, elliptical, hexagonal, octagonal, oval, or polygonal.

'627 claim 4: The method as in claim 3, wherein the tubing has two dimensional (2D) cross-sectional shape that is at least one of: round, triangular, square, rectangular, elliptical, hexagonal, octagonal, oval, or polygonal.

Present claim 15 (Currently Amended): The method as in claim 13, further comprising filling the plastic tubing with the image visible contrast medium that is an MRI (magnetic resonance imaging) visible contrast medium or a paramagnetic fluid.

'627 claim 2: The method of claim 1, further wherein the image visible contrast medium is an MRI (magnetic resonance imaging) visible contrast medium.
'627 claim 3: The method of claim 2, wherein the image visible contrast medium is a paramagnetic fluid.
'627 claim 12: The method of claim 1, wherein the tubing is at least one of: plastic tubing, glass, or a non-metal tubing.

Present claim 18 (Original): A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising: detecting visual image data of a portion of the body of the person using an optical sensor of the AR headset; identifying one or more optical codes associated with the body of the person, wherein an optical code for a body of the person has an image visible marker which is located in a fixed position relative to the optical code; identifying a known size of a geometric attribute of the image visible marker associated with the body of the person, wherein the image visible marker appears in the image data set; comparing a measured size of the geometric attribute of the image visible marker in the image data set to the known size of a geometric attribute of an image visible marker to determine a computed difference in size between a measured size and the known size of the geometric attribute of the image visible marker in the image data set; and modifying a scaling of the image data set, to be aligned with the body of the person using one or more optical codes on the body of the person as viewed through the AR headset and the image visible marker, based at least in part on the computed difference in size between the known size and measured size of the geometric attribute of the image visible marker.

'627 claim 13: A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising: detecting visual image data of a portion of the body of the person using an optical sensor of the AR headset; identifying one or more optical codes associated with the body of the person, wherein an optical code for a body of the person has an image visible marker which is located in a fixed position relative to the optical code; identifying a known size of a geometric attribute of the image visible marker associated with the body of the person, wherein the image visible marker appears in the image data set; comparing a measured size of the geometric attribute of the image visible marker in the image data set to the known size of a geometric attribute of an image visible marker to determine a computed difference in size between a measured size and the known size of the geometric attribute of the image visible marker in the image data set; and modifying a scaling of the image data set, to be aligned with the body of the person using one or more optical codes on the body of the person as viewed through the AR headset and the image visible marker, based at least in part on the computed difference in size between the known size and measured size of the geometric attribute of the image visible marker.

Present claim 19 (Original): The method as in claim 18, further comprising increasing or decreasing the scaling of the image data set to match the measured size of the geometric attribute of the image visible marker in the image data set to the known size of the geometric attribute of the image visible marker.

'627 claim 14: The method as in claim 13, further comprising increasing or decreasing the scaling of the image data set to match the measured size of the geometric attribute of the image visible marker in the image data set to the known size of the geometric attribute of the image visible marker.

Present claim 20 (Original): The method as in claim 18, further comprising increasing the scaling of the image data set as viewed through the AR headset where the known size of the optical code is greater than the measured size of the image visible marker in the image data set.

'627 claim 15: The method as in claim 13, further comprising increasing the scaling of the image data set as viewed through the AR headset where the known size of the optical code is greater than the measured size of the image visible marker in the image data set.

Present claim 21 (Original): The method as in claim 18, further comprising decreasing the scaling of the image data set as viewed through the AR headset where the known size of the optical code is smaller than the measured size of the image visible marker in the image data set.

'627 claim 16: The method as in claim 13, further comprising decreasing the scaling of the image data set as viewed through the AR headset where the known size of the optical code is smaller than the measured size of the image visible marker in the image data set.

Present claim 22 (Original): The method as in claim 18, wherein the size of the geometric attribute is a pre-measured value stored in a data store accessible to the AR headset.

'627 claim 17: The method as in claim 13, wherein the size of the geometric attribute is a pre-measured value stored in a data store accessible to the AR headset.

Present claim 23 (Original): The method as in claim 18, wherein the size of the geometric attribute is measured from an image visible marker that is visible on a person's body acquired by the AR headset.

'627 claim 18: The method as in claim 13, wherein the size of the geometric attribute is measured from an image visible marker that is visible on a person's body acquired by the AR headset.

Present claim 24 (Original): The method as in claim 18, wherein the geometric attribute of the optical code is at least one of: a length of a tubing of a paramagnetic marker, a distance between two metallic markers with of the optical code, a radius encompassing multiple metallic markers of the optical code, a diameter of the image visible marker, or a geometric measurement of the image visible marker.

'627 claim 19: The method as in claim 13, wherein the geometric attribute of the optical code is at least one of: a length of a tubing of a paramagnetic marker, a distance between two metallic markers with of the optical code, a radius encompassing multiple metallic markers of the optical code, a diameter of the image visible marker, or a geometric measurement of the image visible marker.

Present claim 25 (Original): A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising: identifying one or more optical codes associated with the body of the person using an optical sensor of the AR headset, wherein an optical code for a body of the person is located in a fixed position relative to an image visible marker; identifying one or more edges of an optical code associated with the body of the person; identifying a center point of the optical code using the one or more edges of the optical code; and aligning the image data set with the body of the person using the center point of the optical code and the image visible code.

'627 claim 20: A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising: identifying one or more optical codes associated with the body of the person using an optical sensor of the AR headset, wherein an optical code for a body of the person is located in a fixed position relative to an image visible marker; identifying one or more edges of an optical code associated with the body of the person; identifying a center point of the optical code using the one or more edges of the optical code; and aligning the image data set with the body of the person using the center point of the optical code and the image visible marker.

Present claim 26 (Original): The method as in claim 25, wherein the center point is a computed centroid of the optical code.

'627 claim 21: The method as in claim 20, wherein the center point is a computed centroid of the optical code.

Present claim 27 (Currently Amended): The method as in claim 25, further comprising: calculating a first line between a first point on the edge of the optical code and a second point on edge of the optical code to form a first diagonal of the optical code; computing a second line between a third point on the edge and a fourth point on the edge of the optical code which forms a second diagonal of the optical code and intersects the first line; and identifying a center point of the optical code where the first line and second line intersect.

'627 claim 22: The method as in claim 20, further comprising: calculating a first line between a first point on the edge of the optical code and a second point on edge of the optical code to form a first diagonal of the optical code; computing a second line between a third point on the edge and a fourth point on the edge of the optical code which forms a second diagonal of the optical code and intersects the first line; and identifying a center point of the optical code where the first line and second line intersect.

Present claim 28 (Original): The method as in claim 25, further comprising: finding the center points of a plurality of optical codes in the visual image data; calculating an average center point of the center points of the plurality of optical codes; and aligning the image data set with a body of a person using the average center point of the plurality of optical codes.

'627 claim 23: A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising: identifying a plurality of optical codes associated with the body of the person using an optical sensor of the AR headset, wherein the optical codes for a body of the person are located in a fixed position relative to an image visible marker; identifying one or more edges of the plurality of optical codes associated with the body of the person; finding the center points of a plurality of optical codes using the one or more edges of the optical codes; calculating an average center point of the center points of the plurality of optical codes; and aligning the image data set with a body of a person using the average center point of the plurality of optical codes and image visible markers.

Present claim 29 (New): The method as in claim 25, further comprising determining a location for displaying the image data set through the AR headset by using the center point of optical code as referenced to the image visible marker and a fixed distance between the center point of the optical code and the image visible marker.

'627 claim 24: The method as in claim 23, further comprising determining a location for displaying the image data set through the AR headset by using the average center point of the plurality of optical codes as referenced to image visible markers, wherein there is a fixed distance between the center point of each optical code and each image visible marker.

Claims 1, 2, 5-8, 15 and 18-29 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-24 of U.S. Patent No.
11,989,341. Although the claims at issue are not identical, they are not patentably distinct from each other because they are broader in scope as compared below:

Present claim 1 (Original): A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising: identifying one or more optical codes associated with the body of the person using an optical sensor of the AR headset, wherein an optical code for a body of the person has an image visible marker that includes an image visible contrast medium in tubing or an enclosed container in a fixed position relative to the optical code; identifying the image visible marker with the image visible contrast medium in the tubing in the image data set acquired using a medical imaging device; and aligning the image data set with the body of the person using one or more optical codes on the body of the person as viewed through the AR headset and using the fixed position of the image visible marker with the image visible contrast medium in the tubing with respect to the optical code as referenced to a representation of the image visible marker with the image visible contrast medium in the tubing in the image data set.

'341 claim 1: A method, comprising: identifying one or more optical codes associated with a body of a person using a sensor of an AR (augmented reality) headset, wherein an optical code for the body of the person has an image visible marker that includes an image visible contrast medium in a container in a position relative to the optical code; identifying the image visible marker with the image visible contrast medium in the container in an image data set acquired using a medical imaging device; and aligning the image data set with the body of the person using one or more optical codes as viewed through the AR headset and using the image visible marker with respect to the optical code as referenced to a representation of the image visible marker in the image data set.

Present claim 4 (Original): The method of claim 2, wherein the image visible contrast medium is encapsulated in a tubing of a defined length or a tubing of variable length.

Present claim 2 (Currently Amended): The method of claim 1, further wherein the image visible contrast medium is an MRI (magnetic resonance imaging) visible contrast medium or a paramagnetic fluid.

'341 claim 2: The method of claim 1, further wherein the image visible contrast medium is an MRI (magnetic resonance imaging) visible contrast medium.
'341 claim 3: The method of claim 2, wherein the image visible contrast medium is a paramagnetic fluid.

Present claim 5 (Original): The method of claim 4, wherein the defined length of the tubing identifies an optical code with a defined 2D data encoding that is visible using the optical sensor of the AR headset.

'341 claim 5: The method of claim 4, wherein the defined length of the tubing identifies an optical code with a defined 2D data encoding that is visible using an optical sensor of the AR headset.

Present claim 6 (Original): The method of claim 5 further comprising associating the defined length of the tubing with an optical code visible using the optical sensor of the AR headset by linking the defined length of tubing with the optical code in a data store.

'341 claim 6: The method of claim 5 further comprising associating the defined length of the tubing with an optical code visible using the optical sensor of the AR headset by linking the defined length of the tubing with the optical code in a data store.

Present claim 7 (Original): The method as in claim 1, wherein the tubing is a shaped tubing in a shape of at least one of: a line, a cross formation, concentric circles, or a grid formation.

'341 claim 7: The method as in claim 4, wherein the tubing is a shaped tubing in a shape of at least one of: a line, a cross formation, concentric circles, or a grid formation.

Present claim 8 (Original): The method as in claim 3, wherein the tubing has two dimensional (2D) cross-sectional shape that is at least one of: round, triangular, square, rectangular, elliptical, hexagonal, octagonal, oval, or polygonal.

'341 claim 8: The method as in claim 4, wherein the tubing has two dimensional (2D) cross-sectional shape that is at least one of: round, triangular, square, rectangular, elliptical, hexagonal, octagonal, oval, or polygonal.

Present claim 15 (Currently Amended): The method as in claim 13, further comprising filling the plastic tubing with the image visible contrast medium that is an MRI (magnetic resonance imaging) visible contrast medium or a paramagnetic fluid.

'341 claim 2: The method of claim 1, further wherein the image visible contrast medium is an MRI (magnetic resonance imaging) visible contrast medium.
'341 claim 3: The method of claim 2, wherein the image visible contrast medium is a paramagnetic fluid.
'341 claim 12: The method of claim 4, wherein the tubing is at least one of: plastic tubing, glass, or a non-metal tubing.

Present claim 18 (Original): A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising: detecting visual image data of a portion of the body of the person using an optical sensor of the AR headset; identifying one or more optical codes associated with the body of the person, wherein an optical code for a body of the person has an image visible marker which is located in a fixed position relative to the optical code; identifying a known size of a geometric attribute of the image visible marker associated with the body of the person, wherein the image visible marker appears in the image data set; comparing a measured size of the geometric attribute of the image visible marker in the image data set to the known size of a geometric attribute of an image visible marker to determine a computed difference in size between a measured size and the known size of the geometric attribute of the image visible marker in the image data set; and modifying a scaling of the image data set, to be aligned with the body of the person using one or more optical codes on the body of the person as viewed through the AR headset and the image visible marker, based at least in part on the computed difference in size between the known size and measured size of the geometric attribute of the image visible marker.

'341 claim 13: A method, comprising: identifying one or more optical codes associated with a body of a person, wherein an optical code for the body of the person has an image visible marker which is located in a position relative to the optical code; identifying a known size of a geometric attribute of the image visible marker, which appears in an image data set; comparing a measured size of the geometric attribute of the image visible marker in the image data set to the known size of a geometric attribute of an image visible marker to determine a computed difference in size between the measured size and the known size of the geometric attribute of the image visible marker in the image data set; and modifying a scaling of the image data set as viewed through an AR (augmented reality) headset using the image visible marker, based at least in part on the computed difference in size.
'341 claim 11: The method of claim 1, further comprising retrieving the image data set automatically for the body of the person using one or more optical codes on the body of the person.

Present claim 19 (Original): The method as in claim 18, further comprising increasing or decreasing the scaling of the image data set to match the measured size of the geometric attribute of the image visible marker in the image data set to the known size of the geometric attribute of the image visible marker.

'341 claim 14: The method as in claim 13, further comprising increasing or decreasing the scaling of the image data set to match the measured size of the geometric attribute of the image visible marker in the image data set to the known size of the geometric attribute of the image visible marker.

Present claim 20 (Original): The method as in claim 18, further comprising increasing the scaling of the image data set as viewed through the AR headset where the known size of the optical code is greater than the measured size of the image visible marker in the image data set.

'341 claim 15: The method as in claim 13, further comprising increasing the scaling of the image data set as viewed through the AR headset where the known size of the optical code is greater than the measured size of the image visible marker in the image data set.

Present claim 21 (Original): The method as in claim 18, further comprising decreasing the scaling of the image data set as viewed through the AR headset where the known size of the optical code is smaller than the measured size of the image visible marker in the image data set.

'341 claim 16: The method as in claim 13, further comprising decreasing the scaling of the image data set as viewed through the AR headset where the known size of the optical code is smaller than the measured size of the image visible marker in the image data set.

Present claim 22 (Original): The method as in claim 18, wherein the size of the geometric attribute is a pre-measured value stored in a data store accessible to the AR headset.

'341 claim 17: The method as in claim 13, wherein the size of the geometric attribute is a pre-measured value stored in a data store accessible to the AR headset.

Present claim 23 (Original): The method as in claim 18, wherein the size of the geometric attribute is measured from an image visible marker that is visible on a person's body acquired by the AR headset.

'341 claim 18: The method as in claim 13, wherein the size of the geometric attribute is measured from an image visible marker that is visible on a person's body acquired by the AR headset.

Present claim 24 (Original): The method as in claim 18, wherein the geometric attribute of the optical code is at least one of: a length of a tubing of a paramagnetic marker, a distance between two metallic markers with of the optical code, a radius encompassing multiple metallic markers of the optical code, a diameter of the image visible marker, or a geometric measurement of the image visible marker.

'341 claim 19: The method as in claim 13, wherein the geometric attribute of the optical code is at least one of: a length of a tubing of a paramagnetic marker, a distance between two metallic markers with the optical code, a radius encompassing multiple metallic markers of the optical code, a diameter of the image visible marker, or a geometric measurement of the image visible marker.

Present claim 25 (Original): A method for using an augmented reality (AR) headset to co-localize an image data set with a body of a person, comprising: identifying one or more optical codes associated with the body of the person using an optical sensor of the AR headset, wherein an optical code for a body of the person is located in a fixed position relative to an image visible marker; identifying one or more edges of an optical code associated with the body of the person; identifying a center point of the optical code using the one or more edges of the optical code; and aligning the image data set with the body of the person using the center point of the optical code and the image visible code.

'341 claim 20: A method, comprising: identifying one or more optical codes associated with a body of a person using a sensor of an AR (augmented reality) headset, wherein an optical code is located in a position relative to an image visible marker; identifying one or more edges of an optical code; identifying a center point of the optical code using the one or more edges of the optical code; and aligning an image data set with the body of the person based at least in part on the center point of the optical code and the position of the image visible marker relative to the optical code.

Present claim 26 (Original): The method as in claim 25, wherein the center point is a computed centroid of the optical code.

'341 claim 21: The method as in claim 20, wherein the center point is a computed centroid of the optical code.

Present claim 27 (Currently Amended): The method as in claim 25, further comprising: calculating a first line between a first point on the edge of the optical code and a second point on edge of the optical code to form a first diagonal of the optical code; computing a second line between a third point on the edge and a fourth point on the edge of the optical code which forms a second diagonal of the optical code and intersects the first line; and identifying a center point of the optical code where the first line and second line intersect.

'341 claim 22: The method as in claim 20, further comprising: calculating a first line between a first point on an edge of the optical code and a second point on edge of the optical code to form a first diagonal of the optical code; computing a second line between a third point on the edge and a fourth point on the edge of the optical code which forms a second diagonal of the optical code and intersects the first line; and identifying a center point of the optical code where the first line and second line intersect.

Present claim 28 (Original): The method as in claim 25, further comprising: finding the center points of a plurality of optical codes in the visual image data; calculating an average center point of the center points of the plurality of optical codes; and aligning the image data set with a body of a person using the average center point of the plurality of optical codes.

'341 claim 23: A method, comprising: identifying a plurality of optical codes associated with a body of a person using a sensor of the AR (augmented reality) headset, wherein the optical codes for the body of the person are positioned relative to an image visible marker; identifying one or more edges of the plurality of optical codes; finding center points of a plurality of optical codes using the one or more edges of the optical codes; calculating an average center point of the center points of the plurality of optical codes; and aligning an image data set with the body of the person using the AR headset, the average center point of the plurality of optical codes and image visible markers.

Present claim 29 (New): The method as in claim 25, further comprising determining a location for displaying the image data set through the AR headset by using the center point of optical code as referenced to the image visible marker and a fixed distance between the center point of the optical code and the image visible marker.

'341 claim 24: The method as in claim 23, further comprising determining a location for displaying the image data set through the AR headset by using the average center point of the plurality of optical codes as referenced to image visible markers, wherein there is a fixed distance between a center point of each optical code and each image visible marker.

Allowable Subject Matter

Claims 1, 2, 5-8, 15 and 18-29 are allowed over the prior art. The following is an examiner's statement of reasons for allowance: Ofek et al. (US 2009/0078772) discloses a method for enhancing decoding of images in which an alignment pattern (Figure 1, item 121) is used to align an optical code (101) and additional patterns (111). Finn et al. (US 2014/0210856) discloses a method of overlaying a 3D model onto a real scene using a marker (Figure 5E, item 530); the marker has a center marker point (532) at the center of the QR code (531). Stolka et al.
(US 2017/0116729) discloses using a plurality of markers are used to track and register a graph to a 3D image data (Figure 6). Mahmood et al. (US 2017/0296292) discloses a method aligning a 3D model of a brain to a real brain using a system of fiducial markers (Figure 4). The closest prior art by Ofek et al. (US 2009/0078772), Finn et al. (US 2014/0210856), Stolka et al. (US 2017/0116729) and Mahmood et al. (US 2017/0296292) do not explicitly teach (claim 1) wherein an optical code for a body of the person has an image visible marker that includes an image visible contrast medium in tubing or an enclosed container in a fixed position relative to the optical code; identifying the image visible marker with the image visible contrast medium in the tubing in the image data set acquired using a medical imaging device; and aligning the image data set with the body of the person using one or more optical codes on the body of the person as viewed through the AR headset and using the fixed position of the image visible marker with the image visible contrast medium in the tubing with respect to the optical code as referenced to a representation of the image visible marker with the image visible contrast medium in the tubing in the image data set. 
(claim 18) identifying a known size of a geometric attribute of the image visible marker associated with the body of the person, wherein the image visible marker appears in the image data set; comparing a measured size of the geometric attribute of the image visible marker in the image data set to the known size of a geometric attribute of an image visible marker to determine a computed difference in size between a measured size and the known size of the geometric attribute of the image visible marker in the image data set; and modifying a scaling of the image data set, to be aligned with the body of the person using one or more optical codes on the body of the person as viewed through the AR headset and the image visible marker, based at least in part on the computed difference in size between the known size and measured size of the geometric attribute of the image visible marker. (claim 25) identifying one or more edges of an optical code associated with the body of the person; identifying a center point of the optical code using the one or more edges of the optical code; and aligning the image data set with the body of the person using the center point of the optical code and the image visible code. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha (Yuehan) Wang whose telephone number is (571)270-5011. The examiner can normally be reached Monday-Friday, 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached at (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Samantha (YUEHAN) WANG/
Primary Examiner, Art Unit 2617
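The allowed claims above recite concrete geometry: claims 18-24 (renumbered 13-19) rescale the image data set by comparing a marker's known physical size to the size measured in the scan, and claims 27-28 (renumbered 22-23) locate each optical code's center as the intersection of its two diagonals, then average the centers of several codes. A minimal sketch of that arithmetic, assuming each code is detected as four 2-D corner points in perimeter order; the function names and data representation are illustrative, not taken from the application:

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4, points as (x, y) tuples."""
    d1 = (p2[0] - p1[0], p2[1] - p1[1])
    d2 = (p4[0] - p3[0], p4[1] - p3[1])
    denom = d1[0] * d2[1] - d1[1] * d2[0]  # 2-D cross product of directions
    if abs(denom) < 1e-12:
        raise ValueError("diagonals are parallel")
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def code_center(corners):
    """Center point of an optical code from its four corners (perimeter
    order): the intersection of the two diagonals, as in claims 27/22."""
    a, b, c, d = corners
    return line_intersection(a, c, b, d)

def average_center(codes):
    """Average center point over several detected codes, as in claims 28/23."""
    centers = [code_center(c) for c in codes]
    n = len(centers)
    return (sum(x for x, _ in centers) / n,
            sum(y for _, y in centers) / n)

def scale_factor(known_size, measured_size):
    """Scaling for the image data set, as in claims 18-21: greater than 1
    (enlarge) when the known physical size of the geometric attribute
    exceeds the size measured in the image data set, less than 1 (shrink)
    otherwise."""
    return known_size / measured_size
```

For a square code with corners (0, 0), (2, 0), (2, 2), (0, 2), `code_center` returns (1.0, 1.0), and a marker whose known size is twice its measured size yields `scale_factor` of 2.0, i.e. the scan is enlarged to match the body.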

Prosecution Timeline

May 20, 2024
Application Filed
Nov 14, 2025
Non-Final Rejection — §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597178
VECTOR OBJECT PATH SEGMENT EDITING
2y 5m to grant · Granted Apr 07, 2026
Patent 12597506
ENDOSCOPIC EXAMINATION SUPPORT APPARATUS, ENDOSCOPIC EXAMINATION SUPPORT METHOD, AND RECORDING MEDIUM
2y 5m to grant · Granted Apr 07, 2026
Patent 12586286
DIFFERENTIABLE REAL-TIME RADIANCE FIELD RENDERING FOR LARGE SCALE VIEW SYNTHESIS
2y 5m to grant · Granted Mar 24, 2026
Patent 12586261
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant · Granted Mar 24, 2026
Patent 12567182
USING AUGMENTED REALITY TO VISUALIZE OPTIMAL WATER SENSOR PLACEMENT
2y 5m to grant · Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
96%
With Interview (+12.9%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 485 resolved cases by this examiner. Grant probability derived from career allow rate.
