Prosecution Insights
Last updated: April 19, 2026
Application No. 18/702,541

RELATIVE MOVEMENT TRACKING WITH AUGMENTED REALITY

Status: Non-Final OA — §103
Filed: Apr 18, 2024
Examiner: AMIN, JWALANT B
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Hoffmann-La Roche, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 79% (500 granted / 631 resolved; +17.2% vs TC avg, above average)
Interview Lift: +15.3% higher allow rate in resolved cases with an interview (strong)
Typical Timeline: 2y 9m average prosecution; 14 applications currently pending
Career History: 645 total applications across all art units
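The headline figures are recoverable from the raw counts above. A minimal sketch of the arithmetic in Python (treating the interview lift as a flat percentage-point addition is an assumption about how the dashboard combines the two numbers, not something the report states):

```python
# Career allow rate from the displayed counts, plus the interview adjustment.
granted, resolved = 500, 631
allow_rate = 100 * granted / resolved         # 79.24%, displayed as 79%
interview_lift = 15.3                         # percentage points, from the card above

# Assumption: "with interview" probability = base rate + lift.
with_interview = allow_rate + interview_lift  # 94.5%, displayed as 94%

print(f"Career allow rate: {allow_rate:.1f}%")
print(f"With interview:    {with_interview:.1f}%")
```

The displayed 94% "With Interview" figure is consistent with this simple additive model applied to the 79.2% career allow rate.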

Statute-Specific Performance

§101: 13.4% (-26.6% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 7.5% (-32.5% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 631 resolved cases
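The four deltas are mutually consistent: subtracting each delta from its statute rate recovers the same Tech Center average estimate, about 40%. A quick check (the 40% figure is inferred from the displayed numbers, not stated anywhere in the report):

```python
# Each statute's displayed rate minus its "vs TC avg" delta should recover
# the single Tech Center average estimate the chart compared against.
rates  = {"§101": 13.4, "§103": 56.8, "§102": 7.5, "§112": 10.8}   # percent
deltas = {"§101": -26.6, "§103": 16.8, "§102": -32.5, "§112": -29.2}

for statute, rate in rates.items():
    implied_avg = rate - deltas[statute]
    print(f"{statute}: implied TC average {implied_avg:.1f}%")  # 40.0% for all four
```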

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in EPO on 10/21/2021. It is noted, however, that applicant has not filed a certified copy of the EP 21204019.0 application as required by 37 CFR 1.55.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 9, 11 and 13-18 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2021/0267494, hereinafter Wang), and further in view of Dassonville et al. (US 2022/0211444, hereinafter Dassonville).

Regarding claim 1, Wang teaches a computer-implemented method for movement tracking of a subject (abstract: data analyzing device that obtains a space movement data of the virtual marker points from the space movement data of the actual markers, and simulates and analyzes a movement of the bony structure of the joint according to the space movement data of the virtual marker points), the method comprising:

obtaining data of a subject's body during a predefined time period (sensors can determine data related to a subject's body during a certain time period; [0045]: after the relative position relationship being determined, identifying the calibrating device of the actual marker points for multiple times by the mark recognition device during the knee joint movement within a certain time, so as to obtain the space movement data of the actual marker points, then obtaining or optimizing the space movement data of the virtual marker points according to the relative position relationship and the space movement data of the actual marker points; [0122]: The mark recognition device can be an optical tracking device, such as a dual infrared or multi-infrared integrated optical positioning sensor. The calibrating device can be identified and located by the dual infrared or multiple infrared integrated optical positioning sensors so as to determine the position of the actual marker points and the virtual marker points);

tracking locations of a plurality of anatomical landmarks of the body (virtual marker points set at anatomical bony landmark positions of the joints) based on the data ([0043]: setting virtual marker points at anatomical bony landmark positions of the knee joint; [0079]: The virtual marker points, for example, are also collected into the data collection device 3 together with the actual marker points after being calibrated, except that the virtual marker points are set on the anatomical bony landmark positions of the joint, and the virtual marker points are set at the corresponding typical characteristic point through the calibrating device, also through sensing and collecting method to form its relative space position with the actual marker points, the space position of the initial virtual marker points is fixed; [0122]: The calibrating device can be identified and located by the dual infrared or multiple infrared integrated optical positioning sensors so as to determine the position of the actual marker points and the virtual marker points);

tracking a location of a predefined point (sensing space positions of actual marker points attached to a set position around the joints) in an environment in which the subject is moving (movement of the joint is functionally analogous to movement of the subject; [0018]: the calibrating device is an elastic binding device or a convenient attachment device for binding or attaching the actual marker points at a set position around a joint to establish a relative position of the virtual marker points and the actual marker points through a characteristic calibration pointer; [0074]: The calibrating device 1 is preferably a flexible calibrating device, such as an elastic binding device or a convenient attachment device, which binds or attaches actual marker points to a set position around the joint, which is fast and reliable. By capturing the bony structure movement of the joint through the actual marker points set on the joint surface, the relative movement between the bony structures constituting the joint is calculated and the joint movement is obtained. It can be used for the movement analysis and evaluation of a knee joint, an ankle joint, a hip joint, a wrist joint, an elbow joint, and a shoulder joint; [0075]: The optical tracking device 2 senses the space position of the actual marker points; [0122]: The calibrating device can be identified and located by the dual infrared or multiple infrared integrated optical positioning sensors so as to determine the position of the actual marker points and the virtual marker points);

adjusting the location of one or more of the plurality of anatomical landmarks relative to the location of the predefined point to obtain adjusted locations (fig. 6 step T23; [0120]: after the relative position relationship being determined, identifying the mark position of the actual marker points for multiple times by the mark recognition device during the knee joint movement within a certain time, so as to obtain the space movement data of the actual marker points, then obtaining or optimizing the space movement data of the virtual point according to the relative position and the space movement data of the actual marker points);

determining a movement of the subject's body based on the adjusted locations during the predefined time period (abstract: data analyzing device that obtains a space movement data of the virtual marker points from the space movement data of the actual markers, and simulates and analyzes a movement of the bony structure of the joint according to the space movement data of the virtual marker points; fig. 6 step T24; [0121]: constructing the three-dimensional and six-degrees-of-freedom movement data of the left and right knee joints by the displacement of the space movement data of the virtual marker points along the anatomical coordinate system of the joint and the rotation around the anatomical coordinate system of the joint).

Wang does not explicitly teach to obtain depth image data of a subject's body during a predefined time period and tracking locations of the anatomical landmarks of the body based on the depth image data.

Dassonville teaches to obtain depth image data of a subject's body during a predefined time period (the movement of anatomy of interest is periodically monitored; [0117]: the image data collected by the depth and/or optical camera(s) 530, 532 (FIG. 4) of visualization device 213 can be processed to detect surface descriptors that will facilitate identification of the position and orientation of the observed bone and to determine an initialization transformation between the virtual and observed bones; [0118]: MR system 212 may process the image data collected by the depth and/or optical camera(s) 530, 532 (FIG. 4) to automatically identify a location of the anatomy of interest (e.g., observed bone structure 252); [0119]: MR system 212 also can use the optical image data collected from optical cameras 530 and/or depth cameras 532 and/or motion sensors 533 (or any other acquisition sensor) to determine a global reference coordinate system with respect to the environment (e.g., operating room) in which the user is located. In other examples, the global reference coordinate system can be defined in other manners. In some examples, depth cameras 532 are externally coupled to visualization device 213, which may be a mixed reality headset, such as the Microsoft HOLOLENS™ headset or a similar MR visualization device. For instance, depth cameras 532 may be removable from visualization device 213. In some examples, depth cameras 532 are part of visualization device 213, which again may be a mixed reality headset. For instance, depth cameras 532 may be contained within an outer housing of visualization device 213; [0171]: At block 1706, movement of the anatomy of interest is continuously (or periodically) monitored) and tracking locations of the anatomical landmarks of the body based on the depth image data ([0118]: MR system 212 may process the image data collected by the depth and/or optical camera(s) 530, 532 (FIG. 4) to automatically identify a location of the anatomy of interest (e.g., observed bone structure 252) … MR system 212 may use a machine learned model (i.e., use machine learning, such as a random forest algorithm) to process the image data and identify the location of the anatomy of interest; [0167]: The region of interest may be an anatomical landmark of the anatomy of interest. The anatomy of interest may be a shoulder joint. In some examples, the anatomical landmark is a center region of a glenoid).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Dassonville's knowledge of tracking the location of the anatomy of interest of a subject's body using depth image data obtained while periodically monitoring the movement of the anatomy of interest, and to modify the process of Wang accordingly, because such a process monitors a spatial relationship between at least a portion of an implant or implant tool (generally referred to as an implant component) and a bone surface during a surgical procedure and provides information to a surgeon based on the monitored spatial relationship, e.g., enabling the surgeon to limit movement of the implant component toward the bone and thereby reducing the risk of bone fractures. The information provided to the surgeon may guide the surgeon in installing the implant component or indicate to the surgeon that a different sized or shaped implant component is desirable ([0044]).

Claims 11 and 14 are similar in scope to claim 1, and therefore the examiner provides similar rationale to reject these claims. Moreover, the combination of Wang and Dassonville teaches a computing device (Dassonville – MR system 212, fig. 2 and [0069]) comprising a motion sensor (Dassonville – motion sensors 433, [0087]), a camera (Wang – [0021]: the optical sensing device is a binocular infrared camera integrated with an infrared light source), a memory (Dassonville – memory storage device 215, fig. 2 and [0072]), and a processor (Dassonville – processing device 210, fig. 2 and [0069]). The combination of Wang and Dassonville also teaches a computer readable medium (Dassonville – [0148] and [0315]).

Regarding claim 2, the combination of Wang and Dassonville teaches the method of claim 1, wherein the environment is a real environment (Dassonville – [0079]: MR system 212 can be operated in an augmented surgery mode in which the user can manipulate the user interface intraoperatively so that the user can visually perceive details of the virtual surgical plan projected in a real environment, e.g., on a real anatomy of interest of the particular patient), wherein the predefined point is an anchor point of a virtual environment superimposed on the real environment (Dassonville – [0116]: FIG. 13 illustrates an example registration procedure using a virtual marker 292. FIG. 14 is a conceptual diagram illustrating additional steps of the example registration procedure of FIG. 8A using a virtual marker. In the example of FIG. 13 and FIG. 14, the user of visualization device 213 shifts a gaze line 278 to set virtual marker 292 at a center region 286 (e.g., center point) of observed bone structure 252; Dassonville – [0118]: As discussed above, in some examples, the initialization may be aided by the user (e.g., aided by the user shifting gaze line 278 to set virtual marker 292 at a center region 286 of observed bone structure 252)), and wherein the adjusted locations correspond to a position of the subject's body in the virtual environment (Wang – [0125]: After the geometric relationship between the actual marker points and the virtual marker points is established, when the movement of the virtual marker points is determined by the actual marker points, a marker point deviation may occur during the operation of the calibrating device or the human body movement. If there are four or more marking elements that mark the actual marker points, and the initial four points are used to determine the spacing between any two actual marker points according to the four actual marker points that are sensed, during the following operations, any three of the four actual marker points are sorted so that the actual marker points selected in any frame can correspond to the actual marker points of the initial frame. If the relative position between the actual marker points and the virtual marker points is unchanged during the joint movement, the position of the virtual marker points can be directly determined. If there is a slight change, the distance between the virtual marker points and the actual marker points after the arbitrary frame sorting is closest to the distance between the virtual marker points and the actual marker points of the initial frame, so the space position of the virtual marker points can be determined through optimization. In other words, according to the characteristic that the distance between any three marking elements in the actual marking element groups of the actual marker points and the calibrated virtual marker points remains unchanged or changes negligibly in the knee joint motion, in any frame of the movement of the joint, the distance between the virtual marker points and the said three actual marker points is closest to the distance between the virtual marker points and the said three actual marker points in initial position. Use this as an optimization goal to determine the space position of the virtual marker points, and obtain the space movement data of the virtual marker points).

Regarding claim 3, the combination of Wang and Dassonville teaches the method of claim 2, wherein the predefined point corresponds to one of the anatomical landmarks or a tag attached to the subject's body and suitable to be recognized in the depth image data (Dassonville – [0121]: In some examples, one or more of the virtual markers can be replaced and/or supplemented with one or more physical markers, such as optical markers or electromagnetic markers, as examples. FIG. 16 illustrates an example of physical markers positioned around the real observed bone structure 252. In general, the one or more physical markers may be positioned at various positions on or around the object being registered (e.g., real observed bone structure 252 or a tool)… As shown in FIG. 16, the fiducial marker may be positioned on a portion of the physical marker that is proximal to a tip 1601B of the marker 1601.
In some examples, MR system 212 may obtain a distance between a feature of the fiducial marker (e.g., a centroid or center point) and the tip of the physical marker; Dassonville – [0123]: The physical markers can be any type of marker that enables identification of a particular location relative to the real observed object (e.g., bone structure 252). Examples of physical markers include, but are not necessarily limited to, passive physical markers and active physical markers. Passive physical markers may have physical parameters that aid in their identification by MR system 212. For instance, physical markers may have a certain shape (e.g., spherical markers that may be attached to the real observed bone structure 252), and/or optical characteristics (e.g., reflective materials, colors (e.g., colors, such as green, that are more visible in a surgical environment), bar codes (including one-dimensional or two-dimensional bars, such as QR codes), or the like) that aid in their identification by MR system 212), so that locations in the virtual environment attached to the subject's body and suitable to be recognized in the depth image data (Dassonville – [0128]: MR system 212 may utilize data from one or more sensors (e.g., one or more of sensors 554 of visualization device 213 of FIG. 5) to identify the location of the physical markers (820). For instance, MR system 212 may use data generated by any combination of depth sensors 532 and/or optical sensors 530 to identify a specific position (e.g., coordinates) of each of the physical markers. As one specific example, MR system 212 may utilize optical data generated by optical sensors 530 to identify a centroid of optical marker 1601A of FIG. 16. MR system 212 may then utilize depth data generated by depth sensors 532 and/or optical data generated by optical sensors 530 to determine a position and/or orientation of the identified centroid. MR system 212 may determine a distance between the centroid and an attachment point of the physical marker. For instance, MR system 212 may determine a distance between a centroid of fiducial marker 1601A and tip 1601B of optical marker 1601 of FIG. 16. Based on the determined distance (i.e., between the centroid and the attachment point) and the determined position/orientation of the centroid, MR system 212 may determine a position/orientation of the attachment point; Dassonville – [0129]: MR system 212 may register the virtual model with the observed anatomy based on the identified positions (822) of the physical markers. For instance, where the physical markers are placed on the observed bone structure 252 at locations that correspond to specific location(s) on the virtual model that corresponds to the observed bone structure 252, MR system 212 may generate a transformation matrix between the virtual model and the observed bone structure 252), so that locations in the virtual environment follow a motion of the location of the one of the anatomical landmarks (Wang – [0125]: After the geometric relationship between the actual marker points and the virtual marker points is established, when the movement of the virtual marker points is determined by the actual marker points, a marker point deviation may occur during the operation of the calibrating device or the human body movement. If there are four or more marking elements that mark the actual marker points, and the initial four points are used to determine the spacing between any two actual marker points according to the four actual marker points that are sensed, during the following operations, any three of the four actual marker points are sorted so that the actual marker points selected in any frame can correspond to the actual marker points of the initial frame. If the relative position between the actual marker points and the virtual marker points is unchanged during the joint movement, the position of the virtual marker points can be directly determined. If there is a slight change, the distance between the virtual marker points and the actual marker points after the arbitrary frame sorting is closest to the distance between the virtual marker points and the actual marker points of the initial frame, so the space position of the virtual marker points can be determined through optimization. In other words, according to the characteristic that the distance between any three marking elements in the actual marking element groups of the actual marker points and the calibrated virtual marker points remains unchanged or changes negligibly in the knee joint motion, in any frame of the movement of the joint, the distance between the virtual marker points and the said three actual marker points is closest to the distance between the virtual marker points and the said three actual marker points in initial position. Use this as an optimization goal to determine the space position of the virtual marker points, and obtain the space movement data of the virtual marker points; Dassonville – [0150]: The MR system may use the registration to track movement of the real anatomy of interest during implementation of the virtual surgical plan on the real anatomy of interest. In some examples, the MR system may track the movement of the real anatomy of interest without the use of tracking markers).

Regarding claim 4, the combination of Wang and Dassonville teaches the method of claim 2, wherein the predefined point is a fixed location in the real environment (fixed optical marker 1601 is functionally analogous to a marker that is at a fixed location in the real environment; Dassonville – [0048]: As the virtual models are registered to corresponding observed structures (e.g., externally visible bone and/or markers attached to the bone), the relative positions of the virtual models may correspond to the relative positions of the corresponding observed structures; Dassonville – [0121]: In some examples, one or more of the virtual markers can be replaced and/or supplemented with one or more physical markers, such as optical markers or electromagnetic markers, as examples. FIG. 16 illustrates an example of physical markers positioned around the real observed bone structure 252. In general, the one or more physical markers may be positioned at various positions on or around the object being registered (e.g., real observed bone structure 252 or a tool). As shown in the examples of FIG. 16, a fixed optical marker 1601 may be used in a shoulder arthroplasty procedure to define a particular location on a humerus after a humeral head has been resected. In the example of FIG. 16, fixed optical marker 1601 may include a planar fiducial marker 1601A on a single face of the optical marker. As shown in FIG. 16, the fiducial marker may be positioned on a portion of the physical marker that is proximal to a tip 1601B of the marker 1601; Dassonville – [0123]: passive physical markers may be fixed to bone, e.g., with surgical adhesive, screws, nails, clamps and/or other fixation mechanisms), the fixed location being one of a geolocation or a position of a tag fixed to a real environment (Dassonville – [0048]: As the virtual models are registered to corresponding observed structures (e.g., externally visible bone and/or markers attached to the bone), the relative positions of the virtual models may correspond to the relative positions of the corresponding observed structures; Dassonville – [0123]: The physical markers can be any type of marker that enables identification of a particular location relative to the real observed object (e.g., bone structure 252). Examples of physical markers include, but are not necessarily limited to, passive physical markers and active physical markers. Passive physical markers may have physical parameters that aid in their identification by MR system 212. For instance, physical markers may have a certain shape (e.g., spherical markers that may be attached to the real observed bone structure 252), and/or optical characteristics (e.g., reflective materials, colors (e.g., colors, such as green, that are more visible in a surgical environment), bar codes (including one-dimensional or two-dimensional bars, such as QR codes), or the like) that aid in their identification by MR system 212) and suitable to be recognized in the depth image data (Dassonville – [0128]: MR system 212 may utilize data from one or more sensors (e.g., one or more of sensors 554 of visualization device 213 of FIG. 5) to identify the location of the physical markers (820). For instance, MR system 212 may use data generated by any combination of depth sensors 532 and/or optical sensors 530 to identify a specific position (e.g., coordinates) of each of the physical markers. As one specific example, MR system 212 may utilize optical data generated by optical sensors 530 to identify a centroid of optical marker 1601A of FIG. 16. MR system 212 may then utilize depth data generated by depth sensors 532 and/or optical data generated by optical sensors 530 to determine a position and/or orientation of the identified centroid. MR system 212 may determine a distance between the centroid and an attachment point of the physical marker. For instance, MR system 212 may determine a distance between a centroid of fiducial marker 1601A and tip 1601B of optical marker 1601 of FIG. 16. Based on the determined distance (i.e., between the centroid and the attachment point) and the determined position/orientation of the centroid, MR system 212 may determine a position/orientation of the attachment point).

Regarding claim 5, the combination of Wang and Dassonville teaches the method of claim 2, wherein the virtual environment is an augmented reality environment (Dassonville – [0079]: MR system 212 can be operated in an augmented surgery mode in which the user can manipulate the user interface intraoperatively so that the user can visually perceive details of the virtual surgical plan projected in a real environment, e.g., on a real anatomy of interest of the particular patient; Dassonville – [0149]: in the augmented surgery mode, the user can use the visualization device to align the 3D virtual model of the anatomy of interest with the real anatomy of interest).
Regarding claim 9, the combination of Wang and Dassonville teaches the method of claim 2, further comprising: rendering in real-time a combined image of the real environment and the superimposed virtual environment (Dassonville – [0057]: Augmented reality (AR) is similar to MR in the presentation of both real-world and virtual elements, but AR generally refers to presentations that are mostly real, with a few virtual additions to "augment" the real-world presentation. For purposes of this disclosure, MR is considered to include AR. For example, in AR, parts of the user's physical environment that are in shadow can be selectively brightened without brightening other areas of the user's physical environment. This example is also an instance of MR in that the selectively-brightened areas may be considered virtual objects superimposed on the parts of the user's physical environment that are in shadow; Dassonville – [0079]: MR system 212 can be operated in an augmented surgery mode in which the user can manipulate the user interface intraoperatively so that the user can visually perceive details of the virtual surgical plan projected in a real environment, e.g., on a real anatomy of interest of the particular patient; Dassonville – [0149]: in the augmented surgery mode, the user can use the visualization device to align the 3D virtual model of the anatomy of interest with the real anatomy of interest); outputting the combined image for display (Dassonville – [0291]: The processing circuitry of intraoperative guidance system 108 may be configured to cause the output device, e.g., the MR visualization device or a monitor, to display a visual representation of the model of the bone and annotate the visual representation of the model based on the one or more distances between the bone and the implant component. In some implementations, the processing circuitry of intraoperative guidance system 108 may be configured to cause the output device to show a model of the implant component superimposed over the model of the bone. Intraoperative guidance system 108 can determine the position of the model of the implant component relative to the model of the bone based on an implant depth and distances between the implant component and the cortical wall determined by devices 3300, 3500, or 3700).

Regarding claim 13, the combination of Wang and Dassonville teaches the computing device of claim 11, wherein the computing device is a mobile computing device (Dassonville – fig. 2 and [0071]).

Claims 15-18 are similar in scope to claims 2-5, and therefore the examiner provides similar rationale to reject these claims.

Claims 10 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Wang, in view of Dassonville, and further in view of Kamiyama et al. (US 2023/0052613, hereinafter Kamiyama).

Regarding claim 10, the combination of Wang and Dassonville does not explicitly teach the method of claim 1, wherein depth image data is obtained using a LiDAR sensor; and wherein the LiDAR sensor is included in a mobile computing device. Kamiyama teaches depth image data is obtained using a LiDAR sensor ([0009]: A depth sensor or lidar may be used to generate depth information (e.g., how far the target object in the 2D image is from the camera) from one or more captured 2D images); and wherein the LiDAR sensor is included in a mobile computing device ([0086]: a depth sensor or a lidar sensor located on the mobile computing device is used to determine the scale reference).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Kamiyama's knowledge of using a LiDAR sensor included in a mobile computing device to obtain depth image data, and to modify the process of the combination of Wang and Dassonville accordingly, because such a process can be used for obtaining measurements of regular and irregular objects using mobile computing devices ([0010]).

Regarding claim 12, the combination of Wang and Dassonville does not explicitly teach the computing device of claim 11, wherein the motion sensor is a LiDAR sensor. Kamiyama teaches a LiDAR sensor ([0009]: A depth sensor or lidar may be used to generate depth information (e.g., how far the target object in the 2D image is from the camera) from one or more captured 2D images). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Kamiyama's knowledge of using a LiDAR sensor included in a mobile computing device to obtain depth image data, and to modify the process of the combination of Wang and Dassonville accordingly, because such a process can be used for obtaining measurements of regular and irregular objects using mobile computing devices ([0010]).

Allowable Subject Matter

Claims 6-8 and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Regarding claims 6 and 19, none of the cited prior art references of record, either individually or in combination, teaches the augmented reality environment includes a virtual object that the subject can interact with, and determining, in the augmented reality environment and based on the adjusted locations and a location of the virtual object, whether the subject interacts with the virtual object; and in response to determining an interaction, signaling an indication of the interaction to the subject.

Regarding claims 7 and 20, none of the cited prior art references of record, either individually or in combination, teaches the augmented reality environment includes a virtual guiding object, and determining, in the augmented reality environment and based on the adjusted locations and a location of the virtual guiding object, a virtual distance between the subject's body part corresponding to one or more of the adjusted locations and the virtual guiding object; in response to the determination, signaling an indication of the virtual distance to the subject.

Regarding claim 8, none of the cited prior art references of record, either individually or in combination, teaches the augmented reality environment includes a virtual target movement path, and determining, during the predefined time period and based on the adjusted locations and locations of the target movement path, a deviation of the subject's body part corresponding to one or more of the adjusted locations and the target movement path; in response to the determination, signaling an indication of the deviation to the subject.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JWALANT B AMIN, whose telephone number is (571) 272-2455. The examiner can normally be reached Monday-Friday, 10am-6:30pm CST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said Broome, can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JWALANT AMIN/
Primary Examiner, Art Unit 2612
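For orientation, the rejected independent claim describes a pipeline that can be stated compactly in code. Below is a hypothetical sketch of those steps under stated assumptions: the detector callables and data layout are illustrative stand-ins, not the applicant's implementation or anything disclosed by Wang, Dassonville, or Kamiyama.

```python
import numpy as np

def track_relative_movement(depth_frames, detect_landmarks, detect_anchor):
    """Sketch of the claim-1 steps over a predefined time period.

    depth_frames:     iterable of depth images (the claimed depth image data).
    detect_landmarks: callable, frame -> {landmark_name: (x, y, z)}.
    detect_anchor:    callable, frame -> (x, y, z) of the predefined point
                      (e.g., an AR anchor point or a recognized tag).
    All three inputs are hypothetical stand-ins for the claimed components.
    """
    tracks = {}  # landmark name -> list of anchor-relative positions
    for frame in depth_frames:
        anchor = np.asarray(detect_anchor(frame), dtype=float)
        for name, loc in detect_landmarks(frame).items():
            # "Adjusting the location ... relative to the predefined point":
            # expressing each landmark in anchor-relative coordinates
            # decouples body movement from camera/environment movement.
            tracks.setdefault(name, []).append(np.asarray(loc, dtype=float) - anchor)

    # "Determining a movement ... based on the adjusted locations":
    # here, total path length per landmark across the period.
    return {
        name: float(np.sum(np.linalg.norm(np.diff(np.array(pts), axis=0), axis=1)))
        for name, pts in tracks.items()
        if len(pts) > 1
    }
```

The anchor-relative subtraction is the step the examiner maps to Wang's virtual/actual marker geometry, while the depth-data input is the element for which Dassonville is cited; the path-length summary is just one illustrative way to reduce the adjusted locations to "a movement."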

Prosecution Timeline

Apr 18, 2024: Application Filed
Jan 10, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597091
COMPUTER-IMPLEMENTED METHOD, APPARATUS, SYSTEM AND COMPUTER PROGRAM FOR CONTROLLING A SIGHTEDNESS IMPAIRMENT OF A SUBJECT
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12592020
TRACKING SYSTEM, TRACKING METHOD, AND SELF-TRACKING TRACKER
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12585324
PROCESSOR, IMAGE PROCESSING DEVICE, GLASSES-TYPE INFORMATION DISPLAY DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12585130
LUMINANCE-AWARE UNINTRUSIVE RECTIFICATION OF DEPTH PERCEPTION IN EXTENDED REALITY FOR REDUCING EYE STRAIN
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12579571
METHOD FOR IMPROVING AESTHETIC APPEARANCE OF RETAILER GRAPHICAL USER INTERFACE
Granted Mar 17, 2026 • 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 94% (+15.3%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 631 resolved cases by this examiner. Grant probability derived from career allow rate.
