Prosecution Insights
Last updated: April 19, 2026
Application No. 18/794,365

USER PERCEIVED FORWARD DETERMINATION BASED ON DETECTED HEAD CENTER

Non-Final OA §103
Filed: Aug 05, 2024
Examiner: LIU, GORDON G
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 83% (556 granted / 673 resolved) — above average, +20.6% vs TC avg
Interview Lift: +15.1% higher allowance rate among resolved cases with an interview — a strong lift
Typical Timeline: 2y 4m average prosecution; 29 applications currently pending
Career History: 702 total applications across all art units
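
To sanity-check these figures, the arithmetic can be reproduced directly. Below is a minimal Python sketch: the granted/resolved counts are the ones shown above, while the with/without-interview split is hypothetical, since only the aggregate +15.1% lift is reported.

```python
# Minimal sketch of the examiner metrics above. The granted/resolved counts
# come from the dashboard; the with/without-interview split is hypothetical,
# chosen only to illustrate how a +15.1% lift would be read.
granted, resolved = 556, 673
career_allow_rate = granted / resolved              # 0.826 -> displayed as 83%
print(f"Career allow rate: {career_allow_rate:.1%}")

implied_tc_average = career_allow_rate - 0.206      # dashboard: +20.6% vs TC avg
print(f"Implied Tech Center average: {implied_tc_average:.1%}")

# Interview lift = allow rate among resolved cases with an interview minus
# the rate among those without one (hypothetical split shown here).
rate_with, rate_without = 0.930, 0.779
print(f"Interview lift: {rate_with - rate_without:+.1%}")
```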

Statute-Specific Performance

Statute   Examiner Rate   vs TC Avg
§101      6.7%            -33.3%
§103      73.3%           +33.3%
§102      3.0%            -37.0%
§112      5.7%            -34.3%
Tech Center averages are estimates • Based on career data from 673 resolved cases
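
The per-statute deltas are plain differences between the examiner's rate and the Tech Center average. A short sketch that back-solves the implied averages from the numbers in the table above (the variable names are illustrative only):

```python
# Back-solve the implied Tech Center average for each statute from the
# examiner's rate and the "vs TC avg" delta shown in the table above.
examiner_rate = {"§101": 0.067, "§103": 0.733, "§102": 0.030, "§112": 0.057}
delta_vs_tc   = {"§101": -0.333, "§103": 0.333, "§102": -0.370, "§112": -0.343}

for statute, rate in examiner_rate.items():
    tc_average = rate - delta_vs_tc[statute]   # delta = examiner rate - TC average
    print(f"{statute}: examiner {rate:.1%}, TC avg {tc_average:.1%}, "
          f"delta {delta_vs_tc[statute]:+.1%}")
```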

Office Action

§103
DETAILED ACTION The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in this Office action. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-7, 9-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jones et al. (US 20190179409 A1) in view of Magyari et al. (US 20140320972 A1). Regarding claim 1, Jones teaches a method (See Jones: Fig. 41, and [0413], “Referring to FIG. 41 there is depicted a portable electronic device 4104 supporting an interface to a NR2I 4170 according to an embodiment of the invention. Also depicted within the PED 4104 is the protocol architecture as part of a simplified functional diagram of a system 4100 that includes a portable electronic device (PED) 4104, such as a smartphone, an Access Point (AP) 4106, such as a Wi-Fi access point or wireless cellular base station, and one or more network devices 4107, such as communication servers, streaming media servers, and routers for example. Network devices 4107 may be coupled to AP 4106 via any combination of networks, wired, wireless and/or optical communication. The PED 4104 includes one or more processors 4110 and a memory 4112 coupled to processor(s) 4110. AP 4106 also includes one or more processors 4111 and a memory 4113 coupled to processor(s) 4111. A non-exhaustive list of examples for any of processors 4110 and 4111 includes a central processing unit (CPU), a digital signal processor (DSP), a reduced instruction set computer (RISC), a complex instruction set computer (CISC) and the like. Furthermore, any of processors 4110 and 4111 may be part of application specific integrated circuits (ASICs) or may be a part of application specific standard products (ASSPs). A non-exhaustive list of examples for memories 4112 and 4113 includes any combination of the following semiconductor devices such as registers, latches, ROM, EEPROM, flash memory devices, non-volatile random-access memory devices (NVRAM), SDRAM, DRAM, double data rate (DDR) memory devices, SRAM, universal serial bus (USB) removable memory, and the like”) comprising: at a head-mounted device (HMD) (See Jones: Figs. 1A-B, and [0057], “FIGS. 1A and 1B depict a near-to-eye (NR2I) head mounted display (HMD) system comprising a frame with temple-arms, a weight-relieving strap, a demountable display assembly that pivots about a magnetic hinged attachment, allowing rotation of the display assembly together with additional forward-facing elements such as one or more image sensors, range-finders, and structured/unstructured light sources”) having a processor (See Jones: Fig. 41, and [0413], “The PED 4104 includes one or more processors 4110 and a memory 4112 coupled to processor(s) 4110. AP 4106 also includes one or more processors 4111 and a memory 4113 coupled to processor(s) 4111.
A non-exhaustive list of examples for any of processors 4110 and 4111 includes a central processing unit (CPU), a digital signal processor (DSP), a reduced instruction set computer (RISC), a complex instruction set computer (CISC) and the like. Furthermore, any of processors 4110 and 4111 may be part of application specific integrated circuits (ASICs) or may be a part of application specific standard products (ASSPs)”): presenting a view of a content item at a position and an orientation within a three-dimensional (3D) environment (See Jones: Figs. 1A-B, and [0121], “Said applications could include, but are not necessarily limited to gaming, augmented reality, night vision, computer use, viewing movies, environment simulation, training, remote-assistance, etc. Augmented reality applications may include, but are not limited to, medicine, visual assistance, engineering, aviation, training, remote-assistance, tactical, gaming, sports, virtual reality, environment simulation, and data display”; and [0165], “Whilst FIGS. 1 to 2P depict a single field-of-view camera centrally located on the front of the NR2I display, alternate functional decompositions are considered. In particular, one or more forward-facing cameras may instead be mounted to the headband so that their directional orientation remains unchanged as the NR2I display position is changed. Further, two forward-facing optical imaging devices, one on each side of the headband, may be used to provide a wider field of view and/or stereoscopic image capture. Similarly, one or more forward facing infrared range finders and/or optical scanners may be mounted to the headband so that their orientation remains unchanged as the NR2I display position is changed. Range finder(s) may provide additional information to the user in their immersive use of the NR2I-HMD whilst an optical scanner or optical scanners may provide environment information which is displayed in conjunction with a field of view or region of interest image derived from the one or more optical imaging devices”. Note that the simulation environment is mapped to the virtual content, and the stereo images are mapped to the 3D environment); obtaining a first change to the orientation of the content item within the 3D environment (See Jones: Figs.4A-B, and [0185], “It should be noted that in the design disclosed according to an embodiment of the invention is presented with the global reference coordinate system centered with respect to the exit pupil, like most of the existing freeform prism-lens designs. However, the reference axes are set differently from the existing designs presented within the prior art. Here the Z-axis is along the viewing direction, but the Y-axis is parallel to the horizontal direction aligning with inter-pupillary direction, and the X-axis is in the vertical direction aligning with the head orientation. In other words, the reference coordinate system is rotated 90-degrees around the Z-axis. As a result, the overall prism-lens system is symmetric about the horizontal (YOZ) plane, rather than a typical left-right symmetry about the vertical plane. The optical surfaces (S1 410, S2 420, and S3 430) are decentered along the horizontal Y-axis and rotated about the vertical X-axis. As a result, the optical path is folded in the horizontal YOZ plane, corresponding to the direction of wider field of view, to form a prism-lens structure. This arrangement allows the MicroDisplay 440 to be mounted on the temple side of the user's head”. 
Note that the rotation about the X-axis will change the orientation of the virtual content presentation, and this is mapped to the first change to the orientation of the content item within the 3D environment); obtaining a second change to the position of the content item with the 3D environment (See Jones: Fig. 18, and [0270], “More complex examples still might consider off-centered objects, employ both eye tracking data and the range to the object of gaze and then shift the images asymmetrically, and/or independently for left and right eyes, and/or in the vertical orientation and/or rotational translations as well, the display dynamically responding to the user's gaze. In such cases although the user's eyes 1800 are focused on an off-center object the central rangefinder 1820 will measure the depth to the centered object 1830”; and [0372], “Within other embodiments of the invention a combination function of eye-tracking and bioptic may be employed such that as the display assembly is rotated, the geometry with respect to the user's eye changes, and the system compensates. There are at least two ways these features can interact. By measuring the rotation angle (either directly with an encoder, say, or inferring based on, for example, inertial sensing, or from eye-tracking itself) we can know that the display has been shifted with respect to the user's eye”. Note that the translation and shift are mapped to the second change to the position of the content item with the 3D environment); determining a characteristic of a user-specific forward direction based on the first change and the second change to the content item (See Jones: Fig. 2A-T, and [0062], “FIGS. 2N to 2O respectively depict an alternative configuration for a bioptic immersive NR2I-HMD according to the embodiment of the invention in FIGS. 2G to 2M respectively exploiting a NR2I freeform prism-lens according to another embodiment of the invention wherein the user has positioned the NR2I-HMD out of their direct line of sight and in their line of sight”; and [0063], “FIG. 2P depicts an alternative configuration for a bioptic immersive NR2I-HMD according to an embodiment of the invention exploiting a NR2I freeform prism-lens according to another embodiment of the invention wherein the user has positioned the NR2I-HMD in their line of sight”. Note that the line of sight is mapped to the user-specific forward direction, and “within or out of” the line of sight are mapped to the characteristics of the user-specific forward direction); and presenting additional content within one or more 3D environments based on the characteristic of the user-specific forward direction. However, Jones fails to explicitly disclose presenting additional content within one or more 3D environments based on the characteristic of the user-specific forward direction. Magyari teaches presenting additional content within one or more 3D environments based on the characteristic of the user-specific forward direction (See Magyari: Figs. 8-9, and [0066], “Visual content to be projected includes both static and dynamic visual content, and any additional content that can be visually displayed and is capable of being viewed. Static visual content includes content that does not change over the time during which it is displayed and includes but is not limited to photos, still imagery, static text and graphic data displays that do not update with new information. Dynamic visual content includes content that does change over the time during which it is displayed and includes but is not limited to video playback or real time video, changing imagery, dynamic text and graphic data displays that update as new information is obtained”. Note that the HMD alignment is maintained automatically and the dynamic content is updated as it occurs; this updating dynamic content is mapped to presenting additional content to the HMD user). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jones to present additional content within one or more 3D environments based on the characteristic of the user-specific forward direction as taught by Magyari in order to secure and provide stable support and alignment for optical components (See Magyari: Fig. 1, and [0153], “An efficient design and manufacturing process provides a very compact, dimensionally accurate, dimensionally stable, and impact resistant HMD. The design includes a rigid and high modulus structural frame that secures and provides a stable support and alignment for optical components. A relatively lower modulus polymer outer frame is assembled to the rigid inner frame. The outer frame provides a dust cover for components mounted to the structural frame along with temple arms for securing the HMD to a user's head. The outer frame also provides a composite structure that withstands impact by virtue of the combination of the relatively lower modulus outer frame in combination with the higher modulus inner frame”). Jones teaches a method and system that may adjust the HMD assembly to the users by rotation and translation to align the user-specific forward direction with the eye forward direction; while Magyari teaches a system and method that may maintain the alignment of the HMD device and provide static and dynamic content for the users. Therefore, it is obvious to one of ordinary skill in the art to modify Jones by Magyari to provide additional (dynamic) content to the users after alignment or adjustment of the HMD for the users. The motivation to modify Jones by Magyari is “Use of known technique to improve similar devices (methods, or products) in the same way”. Regarding claim 2, Jones and Magyari teach all the features with respect to claim 1 as outlined above. Further, Jones teaches that the method of claim 1, wherein said presenting the view of the content item at the position and the orientation comprises: determining an eye center based on sensor data from one or more sensors of the HMD, wherein the eye center is a center position between eyes of a user wearing the HMD (See Jones: Fig. 18, and [0266], “In the limit, the user is cross-eyed staring at the bridge of their nose, and the inter-pupil distance (IPD) 1890 has reduced substantially as the eyes gaze turned inwards. Typical NR2I systems provide the image at a fixed focal depth of infinity, and the IPD of the images are fixed, which may result in diplopia (double-vision) or eye-strain when viewing close objects, as the eyes are not operating in a “natural” manner for close objects.
Improved usability can be achieved if a mechanical or electronic IPD adjustment is made dynamic, and according to the distance to the object being viewed, as identified through a combination of eye-tracking and FoV image depth-mapping, achieved using either a range finding system or through indirect means such as depth-mapping from defocus-information, or other means, such as stereoscopy or LIDAR”. Note that IPD is mapped to the center of eyes); determining an eye-forward direction based on the eye center (See Jones: Fig. 18, and [0270], “More complex examples still might consider off-centered objects, employ both eye tracking data and the range to the object of gaze and then shift the images asymmetrically, and/or independently for left and right eyes, and/or in the vertical orientation and/or rotational translations as well, the display dynamically responding to the user's gaze. In such cases although the user's eyes 1800 are focused on an off-center object the central rangefinder 1820 will measure the depth to the centered object 1830. Gaze-tracking implemented with any of a variety of mechanisms (for example using additional imaging devices directed towards the user's eyeball or eyeballs) may be employed to allow an improved image to be displayed. First, by employing both a depth-map derived from the image-data, in combination with the location within the image to which the user's gaze is directed through gaze-tracking, as well as the current focal depth, then the system may derive the difference in depth between where the camera is currently focused versus where the user is gazing, and thus issue a focusing command to bring the gazed-at object into improved focus. Secondly, as the object is now no longer centered in the horizontal field of view, each eye's rotation assumes a different angle, θ.sub.L for the left eye and θ.sub.R for the right eye”. Note that gaze is mapped to an eye-forward direction); and determining the position and the orientation, of the content item within the 3D environment, based on the eye-forward direction (See Jones: Fig. 18, and [0271], “Analogous to the symmetric case above, a lateral image-shift may now be computed independently for each of the left and right displays such that each eye perceives the image of the object being gazed-at in the correct location for an object at that depth and offset from centre being viewed in the absence of the near-to-eye HMD system, and thus making the image appear more natural to the user. Further, the combination of a central range finder 1820 and image-based depth-mapping also allows periodic or continuous calibration of the image-derived depth map at the central field of view as measured by the rangefinder”. Note that the object position and orientation viewed by the user in gaze direction is mapped to the position and the orientation, of the content item within the 3D environment). Regarding claim 3, Jones and Magyari teach all the features with respect to claim 1 as outlined above. Further, Jones teaches that the method of claim 1, wherein said presenting the view of the content item at the position and the orientation comprises: determining a device-forward direction based on a position of the HMD (See Jones: Figs. 2A-T, and [0163], “Now referring to FIG. 
2P there is depicted an alternative configuration for a bioptic immersive NR2I-HMD according to an embodiment of the invention exploiting a NR2I freeform prism-lens according to another embodiment of the invention wherein the user has positioned the NR2I-HMD in their line of sight. Accordingly, the NR2I-HMD comprises a head mounted frame comprising a rear portion 2210 which fits around the sides and rear of the user's head and a front portion 2220 which fits around the front of the user's head at their forehead level. Coupled to the front portion 2220 is the NR2I-Housing 2230 via pivot mounts 2240 on either side of the user's head. Also depicted in FIG. 2P are a conventional set of eyewear frames 2250 and their lenses 2260. Accordingly, the NR2I-HMD can be work with or without such eyewear frames. Optionally, within another embodiment of the invention the pivot mount 2240 may be only on one side of the user's head”. Note that when the HMD fits around the front of the user's head at their forehead level, the line of sight is mapped to the device forward direction); and determining the position and the orientation, of the content item within the 3D environment, based on the device-forward direction (See Jones: Fig. 18, and [0271], “Analogous to the symmetric case above, a lateral image-shift may now be computed independently for each of the left and right displays such that each eye perceives the image of the object being gazed-at in the correct location for an object at that depth and offset from centre being viewed in the absence of the near-to-eye HMD system, and thus making the image appear more natural to the user. Further, the combination of a central range finder 1820 and image-based depth-mapping also allows periodic or continuous calibration of the image-derived depth map at the central field of view as measured by the rangefinder”. Note that the object position and orientation viewed by the user in gaze direction is mapped to the position and the orientation, of the content item within the 3D environment). Regarding claim 4, Jones and Magyari teach all the features with respect to claim 1 as outlined above. Further, Jones teaches that the method of claim 1, wherein said determining the characteristic of the user-specific forward direction comprises determining a difference between a device-based forward direction and the user-specific forward direction (See Jones: Fig. 18, and [0270], “First, by employing both a depth-map derived from the image-data, in combination with the location within the image to which the user's gaze is directed through gaze-tracking, as well as the current focal depth, then the system may derive the difference in depth between where the camera is currently focused versus where the user is gazing, and thus issue a focusing command to bring the gazed-at object into improved focus. Secondly, as the object is now no longer centered in the horizontal field of view, each eye's rotation assumes a different angle, θ.sub.L for the left eye and θ.sub.R for the right eye”. Note that the camera focus is mapped to the device-based forward direction, and the user gazing is mapped to the user-specific forward direction, and their difference is mapped to the difference recited in this claim). Regarding claim 5, Jones and Magyari teach all the features with respect to claim 1 as outlined above. Further, Magyari teaches that the method of claim 1, wherein said determining the characteristic of the user-specific forward direction comprises determining a difference between an eye-based forward direction and the user-specific forward direction (See Magyari: Figs. 18A-F, and [0148], “FIG. 18C depicts micro-display mechanism 108 in fully assembled form along with X and Z axes. Twisting the user manipulation portion 164 of the X-gear 152 (FIG. 18B) moves the display 116 along the X-axis to adjust inter-pupil distance (IPD). Manipulating the user manipulation portion 170 of the Z-gear 154 (FIG. 18B) moves the display 116 along Z-axis to adjust the focal point of the optical path 114”. Note that IPD adjustment is mapped to the eye-based forward direction, and Z-axis translation adjustment is mapped to the user-specific forward direction, and the adjustments on both the IPD and Z-axis focus to maintain the alignment of the user HMD device are mapped to determining the difference between the eye-based forward direction and the user-specific forward direction (in order to align the HMD device)). Regarding claim 6, Jones and Magyari teach all the features with respect to claim 1 as outlined above. Further, Jones teaches that the method of claim 1, wherein the first change is obtained before the second change (See Jones: Figs. 4A-B, and [0181], “A freeform prism-lens typically is symmetric about the plane in which the surfaces are rotated and decentered and the optical path is folded. For instance, the prism-lens schematic in FIG. 4A was set to be symmetric about the vertical YOZ plane. The optical surfaces are decentered along the vertical Y-axis and rotated about the horizontal X-axis so that the optical path is folded in the vertical YOZ plane to form a prism-lens structure. With this type of plane-symmetry structure, it is very challenging to achieve a wider field of view for the folding direction than the direction with symmetry”; and Fig. 18, and [0266], “Improved usability can be achieved if a mechanical or electronic IPD adjustment is made dynamic, and according to the distance to the object being viewed, as identified through a combination of eye-tracking and FoV image depth-mapping, achieved using either a range finding system or through indirect means such as depth-mapping from defocus-information, or other means, such as stereoscopy or LIDAR”. Note that the first change is the rotation, which is made before the second change; the second change is the translation, i.e., the dynamic adjustment of the IPD). Regarding claim 7, Jones and Magyari teach all the features with respect to claim 1 as outlined above. Further, Jones teaches that the method of claim 1, wherein said obtaining the first change to the orientation comprises: presenting an instruction for a user to rotate the content item until the content item appears to the user to be facing the user (See Jones: Figs. 2A-F, and [0219], “If a bioptic hinge for the NR2I-HMD, which allows the HMD to be pivoted from the configuration in FIGS. 2A to 2C to that depicted in FIG. 2D to 2F, is aligned with user eye rotation then bioptic tilt compensation may not be required for eye/HMD reference frames. If the hinge is not perfectly aligned with the user's eye rotation axis, compensation for bioptic tilt may be performed to accommodate eye-NR2I geometry change as rotation occurs”. Note that the HMD optical components are rotated until they are facing the user's eye, as shown in Fig. 2F). Regarding claim 9, Jones and Magyari teach all the features with respect to claim 1 as outlined above. Further, Jones teaches that the method of claim 1, wherein said obtaining the second change to the position comprises: presenting an instruction for a user to shift the content item left or right until the content item appears to the user to be centered in front of the user (See Jones: Figs. 2A-T, and [0154], “FIG. 2G in a first use configuration where the NR2I-Housing 2150 is in front of the user's eyes and with their head level the center of the NR2I display(s) are directly within their line of sight”. The display and the images on the NR2I display, after rotation and shifting adjustment of the HMD, are in the center of the line of sight, and this is mapped to the content item appearing to the user to be centered in front of the user). Regarding claim 10, Jones and Magyari teach all the features with respect to claim 1 as outlined above. Further, Jones teaches that the method of claim 1, further comprising: based on the second change to the position of the content item, shifting the content item left or right (See Jones: Fig. 18, and [0270], “More complex examples still might consider off-centered objects, employ both eye tracking data and the range to the object of gaze and then shift the images asymmetrically, and/or independently for left and right eyes, and/or in the vertical orientation and/or rotational translations as well, the display dynamically responding to the user's gaze. In such cases although the user's eyes 1800 are focused on an off-center object the central rangefinder 1820 will measure the depth to the centered object 1830”). Regarding claim 11, Jones and Magyari teach all the features with respect to claim 1 as outlined above. Further, Jones teaches that the method of claim 1, wherein the content item is a two-dimensional (2D) window (See Jones: Fig. 4A-B, and [0184], “Referring to FIG. 4B respectively there is depicted a 2D optical layout of a freeform prism-lens absent any auxiliary optical elements as can be employed within the NR2I system according to an embodiment of the invention. A ray emitted from a point on the MicroDisplay 440 is first refracted by the surface S3 430 next to the MicroDisplay 440. After two consecutive reflections by the surfaces S1′ 415 and S2 420, the ray is transmitted through the surface S1 410 and reaches the exit pupil 450 of the system. The first surface (i.e., S1 410 and S1′ 415) of the prism-lens is required to satisfy the condition of total internal reflection for rays reflected by this surface S1′ 415. The rear surface S2 420 of the prism-lens may, optionally, be coated with a mirror coating for immersive NR2I systems thereby blocking the user's view of the real-world scene except as presented upon the MicroDisplay 440. Alternatively, the surface S2 420 may be coated with a beam-splitting coating if optical see-through capability is desired using the auxiliary lens (not shown for clarity). The coating on surface S2 may be wavelength-selective, for example with a wavelength transfer-function as shown in FIG. 12, to allow the passing of infra-red light, while reflecting visible light”. Note that the 2D optical layout and the micro display 440 are mapped to 2D windows). Regarding claim 12, Jones and Magyari teach all the features with respect to claim 1 as outlined above. Further, Jones and Magyari teach a head mounted device (HMD) (See Jones: Fig. 41, and [0413], “Referring to FIG.
41 there is depicted a portable electronic device 4104 supporting an interface to a NR2I 4170 according to an embodiment of the invention. Also depicted within the PED 4104 is the protocol architecture as part of a simplified functional diagram of a system 4100 that includes a portable electronic device (PED) 4104, such as a smartphone, an Access Point (AP) 4106, such as a Wi-Fi access point or wireless cellular base station, and one or more network devices 4107, such as communication servers, streaming media servers, and routers for example. Network devices 4107 may be coupled to AP 4106 via any combination of networks, wired, wireless and/or optical communication. The PED 4104 includes one or more processors 4110 and a memory 4112 coupled to processor(s) 4110. AP 4106 also includes one or more processors 4111 and a memory 4113 coupled to processor(s) 4111. A non-exhaustive list of examples for any of processors 4110 and 4111 includes a central processing unit (CPU), a digital signal processor (DSP), a reduced instruction set computer (RISC), a complex instruction set computer (CISC) and the like. Furthermore, any of processors 4110 and 4111 may be part of application specific integrated circuits (ASICs) or may be a part of application specific standard products (ASSPs). A non-exhaustive list of examples for memories 4112 and 4113 includes any combination of the following semiconductor devices such as registers, latches, ROM, EEPROM, flash memory devices, non-volatile random-access memory devices (NVRAM), SDRAM, DRAM, double data rate (DDR) memory devices, SRAM, universal serial bus (USB) removable memory, and the like”; and Figs. 1A-B, and [0147], “Referring to FIGS. 1A and 1B depict a near-to-eye (NR2I) head mounted display (HMD) system comprising a frame with temple-arms 170, a weight-relieving strap 180, a Demountable Display Assembly 110 that pivots about a magnetic hinged attachment 160, allowing rotation of the display assembly together with additional forward-facing elements such as one or more image sensors 120, range-finders 140 and 150, and a structured/unstructured light source 130”) comprising: a non-transitory computer-readable storage medium (See Jones: Fig. 41, and [0413], “The PED 4104 includes one or more processors 4110 and a memory 4112 coupled to processor(s) 4110. AP 4106 also includes one or more processors 4111 and a memory 4113 coupled to processor(s) 4111. A non-exhaustive list of examples for any of processors 4110 and 4111 includes a central processing unit (CPU), a digital signal processor (DSP), a reduced instruction set computer (RISC), a complex instruction set computer (CISC) and the like. Furthermore, any of processors 4110 and 4111 may be part of application specific integrated circuits (ASICs) or may be a part of application specific standard products (ASSPs). A non-exhaustive list of examples for memories 4112 and 4113 includes any combination of the following semiconductor devices such as registers, latches, ROM, EEPROM, flash memory devices, non-volatile random-access memory devices (NVRAM), SDRAM, DRAM, double data rate (DDR) memory devices, SRAM, universal serial bus (USB) removable memory, and the like”); and one or more processors coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the system to perform operations (See Jones: Fig. 
41, and [0414], “PED 4104 may include an audio input element 4114, for example a microphone, and an audio output element 4116, for example, a speaker, coupled to any of processors 4110. PED 4104 may include a video input element 4118, for example, a video camera, and a visual output element 4120, for example an LCD display, coupled to any of processors 4110. The visual output element 4120 is also coupled to display interface 4120B and display status 4120C. PED 4104 includes one or more applications 4122 that are typically stored in memory 4112 and are executable by any combination of processors 4110. PED 4104 includes a protocol stack 4124 and AP 4106 includes a communication stack 4125. Within system 4100 protocol stack 4124 is shown as IEEE 802.11/15 protocol stack but alternatively may exploit other protocol stacks such as an Internet Engineering Task Force (IETF) multimedia protocol stack for example. Likewise, AP stack 4125 exploits a protocol stack but is not expanded for clarity. Elements of protocol stack 4124 and AP stack 4125 may be implemented in any combination of software, firmware and/or hardware”) comprising: presenting a view of a content item at a position and an orientation within a three-dimensional (3D) environment (See Jones: Figs. 1A-B, and [0121], “Said applications could include, but are not necessarily limited to gaming, augmented reality, night vision, computer use, viewing movies, environment simulation, training, remote-assistance, etc. Augmented reality applications may include, but are not limited to, medicine, visual assistance, engineering, aviation, training, remote-assistance, tactical, gaming, sports, virtual reality, environment simulation, and data display”; and [0165], “Whilst FIGS. 1 to 2P depict a single field-of-view camera centrally located on the front of the NR2I display, alternate functional decompositions are considered. In particular, one or more forward-facing cameras may instead be mounted to the headband so that their directional orientation remains unchanged as the NR2I display position is changed. Further, two forward-facing optical imaging devices, one on each side of the headband, may be used to provide a wider field of view and/or stereoscopic image capture. Similarly, one or more forward facing infrared range finders and/or optical scanners may be mounted to the headband so that their orientation remains unchanged as the NR2I display position is changed. Range finder(s) may provide additional information to the user in their immersive use of the NR2I-HMD whilst an optical scanner or optical scanners may provide environment information which is displayed in conjunction with a field of view or region of interest image derived from the one or more optical imaging devices”. Note that the simulation environment is mapped to the virtual content, and the stereo images are mapped to the 3D environment); obtaining a first change to the orientation of the content item within the 3D environment (See Jones: Figs.4A-B, and [0185], “It should be noted that in the design disclosed according to an embodiment of the invention is presented with the global reference coordinate system centered with respect to the exit pupil, like most of the existing freeform prism-lens designs. However, the reference axes are set differently from the existing designs presented within the prior art. 
Here the Z-axis is along the viewing direction, but the Y-axis is parallel to the horizontal direction aligning with inter-pupillary direction, and the X-axis is in the vertical direction aligning with the head orientation. In other words, the reference coordinate system is rotated 90-degrees around the Z-axis. As a result, the overall prism-lens system is symmetric about the horizontal (YOZ) plane, rather than a typical left-right symmetry about the vertical plane. The optical surfaces (S1 410, S2 420, and S3 430) are decentered along the horizontal Y-axis and rotated about the vertical X-axis. As a result, the optical path is folded in the horizontal YOZ plane, corresponding to the direction of wider field of view, to form a prism-lens structure. This arrangement allows the MicroDisplay 440 to be mounted on the temple side of the user's head”. Note that the rotation about the X-axis will change the orientation of the virtual content presentation, and this is mapped to the first change to the orientation of the content item within the 3D environment); obtaining a second change to the position of the content item with the 3D environment (See Jones: Fig. 18, and [0270], “More complex examples still might consider off-centered objects, employ both eye tracking data and the range to the object of gaze and then shift the images asymmetrically, and/or independently for left and right eyes, and/or in the vertical orientation and/or rotational translations as well, the display dynamically responding to the user's gaze. In such cases although the user's eyes 1800 are focused on an off-center object the central rangefinder 1820 will measure the depth to the centered object 1830”; and [0372], “Within other embodiments of the invention a combination function of eye-tracking and bioptic may be employed such that as the display assembly is rotated, the geometry with respect to the user's eye changes, and the system compensates. There are at least two ways these features can interact. By measuring the rotation angle (either directly with an encoder, say, or inferring based on, for example, inertial sensing, or from eye-tracking itself) we can know that the display has been shifted with respect to the user's eye”. Note that the translation and shift are mapped to the second change to the position of the content item with the 3D environment); determining a characteristic of a user-specific forward direction based on the first change to the orientation and the second change to the position of the content item (See Jones: Fig. 2A-T, and [0062], “FIGS. 2N to 2O respectively depict an alternative configuration for a bioptic immersive NR2I-HMD according to the embodiment of the invention in FIGS. 2G to 2M respectively exploiting a NR2I freeform prism-lens according to another embodiment of the invention wherein the user has positioned the NR2I-HMD out of their direct line of sight and in their line of sight”; and [0063], “FIG. 2P depicts an alternative configuration for a bioptic immersive NR2I-HMD according to an embodiment of the invention exploiting a NR2I freeform prism-lens according to another embodiment of the invention wherein the user has positioned the NR2I-HMD in their line of sight”. 
Note that the line of sight is mapped to the user-specific forward direction, and “within or out of” the line of sight are mapped to the characteristics of the user-specific forward direction); and presenting additional content within one or more 3D environments based on the characteristic of the user-specific forward direction (See Magyari: Figs. 8-9, and [0066], “Visual content to be projected includes both static and dynamic visual content, and any additional content that can be visually displayed and is capable of being viewed. Static visual content includes content that does not change over the time during which it is displayed and includes but is not limited to photos, still imagery, static text and graphic data displays that do not update with new information. Dynamic visual content includes content that does change over the time during which it is displayed and includes but is not limited to video playback or real time video, changing imagery, dynamic text and graphic data displays that update as new information is obtained”. Note that the HMD alignment is maintained automatically and the dynamic content is updated as it occurs; this updating dynamic content is mapped to presenting additional content to the HMD user). Regarding claim 13, Jones and Magyari teach all the features with respect to claim 12 as outlined above. Further, Jones teaches that the HMD of claim 12, wherein said presenting the view of the content item at the position and the orientation comprises: determining an eye center based on sensor data from one or more sensors of the HMD, wherein the eye center is a center position between eyes of a user wearing the HMD (See Jones: Fig. 18, and [0266], “In the limit, the user is cross-eyed staring at the bridge of their nose, and the inter-pupil distance (IPD) 1890 has reduced substantially as the eyes gaze turned inwards. Typical NR2I systems provide the image at a fixed focal depth of infinity, and the IPD of the images are fixed, which may result in diplopia (double-vision) or eye-strain when viewing close objects, as the eyes are not operating in a “natural” manner for close objects. Improved usability can be achieved if a mechanical or electronic IPD adjustment is made dynamic, and according to the distance to the object being viewed, as identified through a combination of eye-tracking and FoV image depth-mapping, achieved using either a range finding system or through indirect means such as depth-mapping from defocus-information, or other means, such as stereoscopy or LIDAR”. Note that IPD is mapped to the center of eyes); determining an eye-forward direction based on the eye center (See Jones: Fig. 18, and [0270], “More complex examples still might consider off-centered objects, employ both eye tracking data and the range to the object of gaze and then shift the images asymmetrically, and/or independently for left and right eyes, and/or in the vertical orientation and/or rotational translations as well, the display dynamically responding to the user's gaze. In such cases although the user's eyes 1800 are focused on an off-center object the central rangefinder 1820 will measure the depth to the centered object 1830. Gaze-tracking implemented with any of a variety of mechanisms (for example using additional imaging devices directed towards the user's eyeball or eyeballs) may be employed to allow an improved image to be displayed.
First, by employing both a depth-map derived from the image-data, in combination with the location within the image to which the user's gaze is directed through gaze-tracking, as well as the current focal depth, then the system may derive the difference in depth between where the camera is currently focused versus where the user is gazing, and thus issue a focusing command to bring the gazed-at object into improved focus. Secondly, as the object is now no longer centered in the horizontal field of view, each eye's rotation assumes a different angle, θ.sub.L for the left eye and θ.sub.R for the right eye”. Note that gaze is mapped to an eye-forward direction); and determining the position and the orientation, of the content item within the 3D environment, based on the eye-forward direction (See Jones: Fig. 18, and [0271], “Analogous to the symmetric case above, a lateral image-shift may now be computed independently for each of the left and right displays such that each eye perceives the image of the object being gazed-at in the correct location for an object at that depth and offset from centre being viewed in the absence of the near-to-eye HMD system, and thus making the image appear more natural to the user. Further, the combination of a central range finder 1820 and image-based depth-mapping also allows periodic or continuous calibration of the image-derived depth map at the central field of view as measured by the rangefinder”. Note that the object position and orientation viewed by the user in gaze direction is mapped to the position and the orientation, of the content item within the 3D environment). Regarding claim 14, Jones and Magyari teach all the features with respect to claim 12 as outlined above. Further, Jones teaches that the HMD of claim 12, wherein said presenting the view of the content item at the position and the orientation comprises: determining a device-forward direction based on a position of the HMD (See Jones: Figs. 2A-T, and [0163], “Now referring to FIG. 2P there is depicted an alternative configuration for a bioptic immersive NR2I-HMD according to an embodiment of the invention exploiting a NR2I freeform prism-lens according to another embodiment of the invention wherein the user has positioned the NR2I-HMD in their line of sight. Accordingly, the NR2I-HMD comprises a head mounted frame comprising a rear portion 2210 which fits around the sides and rear of the user's head and a front portion 2220 which fits around the front of the user's head at their forehead level. Coupled to the front portion 2220 is the NR2I-Housing 2230 via pivot mounts 2240 on either side of the user's head. Also depicted in FIG. 2P are a conventional set of eyewear frames 2250 and their lenses 2260. Accordingly, the NR2I-HMD can be work with or without such eyewear frames. Optionally, within another embodiment of the invention the pivot mount 2240 may be only on one side of the user's head”. Note that when the HMD fits around the front of the user’s head at their forehead level, the line of sight is mapped to the device forward direction); and determining the position and the orientation, of the content item within the 3D environment, based on the device-forward direction (See Jones: Fig. 
18, and [0271], “Analogous to the symmetric case above, a lateral image-shift may now be computed independently for each of the left and right displays such that each eye perceives the image of the object being gazed-at in the correct location for an object at that depth and offset from centre being viewed in the absence of the near-to-eye HMD system, and thus making the image appear more natural to the user. Further, the combination of a central range finder 1820 and image-based depth-mapping also allows periodic or continuous calibration of the image-derived depth map at the central field of view as measured by the rangefinder”. Note that the object position and orientation viewed by the user in gaze direction is mapped to the position and the orientation, of the content item within the 3D environment). Regarding claim 15, Jones and Magyari teach all the features with respect to claim 12 as outlined above. Further, Jones teaches that the HMD of claim 12, wherein said determining the characteristic of the user-specific forward direction comprises determining a difference between a device-based forward direction and the user-specific forward direction (See Jones: Fig. 18, and [0270], “First, by employing both a depth-map derived from the image-data, in combination with the location within the image to which the user's gaze is directed through gaze-tracking, as well as the current focal depth, then the system may derive the difference in depth between where the camera is currently focused versus where the user is gazing, and thus issue a focusing command to bring the gazed-at object into improved focus. Secondly, as the object is now no longer centered in the horizontal field of view, each eye's rotation assumes a different angle, θ.sub.L for the left eye and θ.sub.R for the right eye”. Note that the camera focus is mapped to the device-based forward direction, and the user gazing is mapped to the user-specific forward direction, and their difference is mapped to the difference recited in this claim). Regarding claim 16, Jones and Magyari teach all the features with respect to claim 12 as outlined above. Further, Magyari teaches that the HMD of claim 12, wherein said determining the characteristic of the user-specific forward direction comprises determining a difference between an eye-based forward direction and the user-specific forward direction (See Magyari: Figs. 18A-F, and [0148], “FIG. 18C depicts micro-display mechanism 108 in fully assembled form along with X and Z axes. Twisting the user manipulation portion 164 of the X-gear 152 (FIG. 18B) moves the display 116 along the X-axis to adjust inter-pupil distance (IPD). Manipulating the user manipulation portion 170 of the Z-gear 154 (FIG. 18B) moves the display 116 along Z-axis to adjust the focal point of the optical path 114”. Note that IPD adjustment is mapped to the eye-based forward direction, and Z-axis translation adjustment is mapped to the user-specific forward direction, and the adjustments on both the IPD and Z-axis focus to maintain the alignment of the user HMD device are mapped to determining the difference between the eye-based forward direction and the user-specific forward direction (in order to align the HMD device)). Regarding claim 17, Jones and Magyari teach all the features with respect to claim 12 as outlined above. Further, Jones teaches that the HMD of claim 12, wherein the first change is obtained before the second change (See Jones: Figs. 4A-B, and [0181], “A freeform prism-lens typically is symmetric about the plane in which the surfaces are rotated and decentered and the optical path is folded. For instance, the prism-lens schematic in FIG. 4A was set to be symmetric about the vertical YOZ plane. The optical surfaces are decentered along the vertical Y-axis and rotated about the horizontal X-axis so that the optical path is folded in the vertical YOZ plane to form a prism-lens structure. With this type of plane-symmetry structure, it is very challenging to achieve a wider field of view for the folding direction than the direction with symmetry”; and Fig. 18, and [0266], “Improved usability can be achieved if a mechanical or electronic IPD adjustment is made dynamic, and according to the distance to the object being viewed, as identified through a combination of eye-tracking and FoV image depth-mapping, achieved using either a range finding system or through indirect means such as depth-mapping from defocus-information, or other means, such as stereoscopy or LIDAR”. Note that the first change is the rotation, which is made before the second change; the second change is the translation, i.e., the dynamic adjustment of the IPD). Regarding claim 18, Jones and Magyari teach all the features with respect to claim 12 as outlined above. Further, Jones teaches that the HMD of claim 12, wherein said obtaining the first change to the orientation comprises: presenting an instruction for a user to rotate the content item until the content item appears to the user to be facing the user (See Jones: Figs. 2A-F, and [0219], “If a bioptic hinge for the NR2I-HMD, which allows the HMD to be pivoted from the configuration in FIGS. 2A to 2C to that depicted in FIG. 2D to 2F, is aligned with user eye rotation then bioptic tilt compensation may not be required for eye/HMD reference frames. If the hinge is not perfectly aligned with the user's eye rotation axis, compensation for bioptic tilt may be performed to accommodate eye-NR2I geometry change as rotation occurs”. Note that the HMD optical components are rotated until they are facing the user's eye, as shown in Fig. 2F). Regarding claim 20, Jones and Magyari teach all the features with respect to claim 1 as outlined above. Further, Jones and Magyari teach a non-transitory computer-readable storage medium storing program instructions executable via one or more processors, of a head mounted device (HMD), to perform operations (See Jones: Fig. 41, and [0413], “Referring to FIG. 41 there is depicted a portable electronic device 4104 supporting an interface to a NR2I 4170 according to an embodiment of the invention. Also depicted within the PED 4104 is the protocol architecture as part of a simplified functional diagram of a system 4100 that includes a portable electronic device (PED) 4104, such as a smartphone, an Access Point (AP) 4106, such as a Wi-Fi access point or wireless cellular base station, and one or more network devices 4107, such as communication servers, streaming media servers, and routers for example. Network devices 4107 may be coupled to AP 4106 via any combination of networks, wired, wireless and/or optical communication. The PED 4104 includes one or more processors 4110 and a memory 4112 coupled to processor(s) 4110. AP 4106 also includes one or more processors 4111 and a memory 4113 coupled to processor(s) 4111.
A non-exhaustive list of examples for any of processors 4110 and 4111 includes a central processing unit (CPU), a digital signal processor (DSP), a reduced instruction set computer (RISC), a complex instruction set computer (CISC) and the like. Furthermore, any of processors 4110 and 4111 may be part of application specific integrated circuits (ASICs) or may be a part of application specific standard products (ASSPs). A non-exhaustive list of examples for memories 4112 and 4113 includes any combination of the following semiconductor devices such as registers, latches, ROM, EEPROM, flash memory devices, non-volatile random-access memory devices (NVRAM), SDRAM, DRAM, double data rate (DDR) memory devices, SRAM, universal serial bus (USB) removable memory, and the like”) comprising: presenting a view of a content item at a position and an orientation within a three-dimensional (3D) environment (See Jones: Figs. 1A-B, and [0121], “Said applications could include, but are not necessarily limited to gaming, augmented reality, night vision, computer use, viewing movies, environment simulation, training, remote-assistance, etc. Augmented reality applications may include, but are not limited to, medicine, visual assistance, engineering, aviation, training, remote-assistance, tactical, gaming, sports, virtual reality, environment simulation, and data display”; and [0165], “Whilst FIGS. 1 to 2P depict a single field-of-view camera centrally located on the front of the NR2I display, alternate functional decompositions are considered. In particular, one or more forward-facing cameras may instead be mounted to the headband so that their directional orientation remains unchanged as the NR2I display position is changed. Further, two forward-facing optical imaging devices, one on each side of the headband, may be used to provide a wider field of view and/or stereoscopic image capture. Similarly, one or more forward facing infrared range finders and/or optical scanners may be mounted to the headband so that their orientation remains unchanged as the NR2I display position is changed. Range finder(s) may provide additional information to the user in their immersive use of the NR2I-HMD whilst an optical scanner or optical scanners may provide environment information which is displayed in conjunction with a field of view or region of interest image derived from the one or more optical imaging devices”. Note that the simulation environment is mapped to the virtual content, and the stereo images are mapped to the 3D environment); obtaining a first change to the orientation of the content item within the 3D environment (See Jones: Figs.4A-B, and [0185], “It should be noted that in the design disclosed according to an embodiment of the invention is presented with the global reference coordinate system centered with respect to the exit pupil, like most of the existing freeform prism-lens designs. However, the reference axes are set differently from the existing designs presented within the prior art. Here the Z-axis is along the viewing direction, but the Y-axis is parallel to the horizontal direction aligning with inter-pupillary direction, and the X-axis is in the vertical direction aligning with the head orientation. In other words, the reference coordinate system is rotated 90-degrees around the Z-axis. As a result, the overall prism-lens system is symmetric about the horizontal (YOZ) plane, rather than a typical left-right symmetry about the vertical plane. 
The optical surfaces (S1 410, S2 420, and S3 430) are decentered along the horizontal Y-axis and rotated about the vertical X-axis. As a result, the optical path is folded in the horizontal YOZ plane, corresponding to the direction of wider field of view, to form a prism-lens structure. This arrangement allows the MicroDisplay 440 to be mounted on the temple side of the user's head”. Note that the rotation about the X-axis will change the orientation of the virtual content presentation, and this is mapped to the first change to the orientation of the content item within the 3D environment); obtaining a second change to the position of the content item with the 3D environment (See Jones: Fig. 18, and [0270], “More complex examples still might consider off-centered objects, employ both eye tracking data and the range to the object of gaze and then shift the images asymmetrically, and/or independently for left and right eyes, and/or in the vertical orientation and/or rotational translations as well, the display dynamically responding to the user's gaze. In such cases although the user's eyes 1800 are focused on an off-center object the central rangefinder 1820 will measure the depth to the centered object 1830”; and [0372], “Within other embodiments of the invention a combination function of eye-tracking and bioptic may be employed such that as the display assembly is rotated, the geometry with respect to the user's eye changes, and the system compensates. There are at least two ways these features can interact. By measuring the rotation angle (either directly with an encoder, say, or inferring based on, for example, inertial sensing, or from eye-tracking itself) we can know that the display has been shifted with respect to the user's eye”. Note that the translation and shift are mapped to the second change to the position of the content item with the 3D environment); determining a characteristic of a user-specific forward direction based on the first change to the orientation and the second change to the position of the content item (See Jones: Fig. 2A-T, and [0062], “FIGS. 2N to 2O respectively depict an alternative configuration for a bioptic immersive NR2I-HMD according to the embodiment of the invention in FIGS. 2G to 2M respectively exploiting a NR2I freeform prism-lens according to another embodiment of the invention wherein the user has positioned the NR2I-HMD out of their direct line of sight and in their line of sight”; and [0063], “FIG. 2P depicts an alternative configuration for a bioptic immersive NR2I-HMD according to an embodiment of the invention exploiting a NR2I freeform prism-lens according to another embodiment of the invention wherein the user has positioned the NR2I-HMD in their line of sight”. Note that the line of sight is mapped to the user-specific forward direction, and “within or out of” the line of sight are mapped to the characteristics of the user-specific forward direction); and presenting additional content within one or more 3D environments based on the characteristic of the user-specific forward direction (See Magyari: Figs. 8-9, and [0066], “Visual content to be projected includes both static and dynamic visual content, and any additional content that can be visually displayed and is capable of being viewed. Static visual content includes content that does not change over the time during which it is displayed and includes but is not limited to photos, still imagery, static text and graphic data displays that do not update with new information. Dynamic visual content includes content that does change over the time during which it is displayed and includes but is not limited to video playback or real time video, changing imagery, dynamic text and graphic data displays that update as new information is obtained”. Note that the HMD alignment is maintained automatically and the dynamic content is updated as it occurs; this updating dynamic content is mapped to presenting additional content to the HMD user). Allowable Subject Matter Claims 8 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The best art searched does not teach the cited limitations of “the method of claim 1, further comprising: based on the first change to the orientation, rotating an original vector used to determine the orientation of the content item; and reorienting the content item in the view based on the rotated original vector.” Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to GORDON G LIU whose telephone number is (571)270-0382. The examiner can normally be reached Monday - Friday 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Devona E Faulk can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /GORDON G LIU/Primary Examiner, Art Unit 2618
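To make the claim 1 method concrete before reading the rejection mapping: the claim describes a calibration flow in which the user rotates a content item until it appears to face them, shifts it left or right until it appears centered, and the system derives a user-specific forward direction from those two corrections. The sketch below is a hypothetical, yaw-only reading of that flow; all function names and the simplified geometry are assumptions, not the applicant's implementation or anything disclosed by Jones or Magyari.

```python
import numpy as np

def eye_center(left_eye: np.ndarray, right_eye: np.ndarray) -> np.ndarray:
    """Claim 2's eye center: a center position between the user's eyes."""
    return (left_eye + right_eye) / 2.0

def yaw_rotation(angle_rad: float) -> np.ndarray:
    """Rotation matrix about the vertical (Y) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def user_specific_forward(device_forward: np.ndarray,
                          first_change_yaw_deg: float,
                          second_change_shift_m: float,
                          content_distance_m: float) -> np.ndarray:
    """Derive a user-specific forward direction from the two user
    corrections the claim recites: (1) rotating the content item until it
    appears to face the user, then (2) shifting it left/right until it
    appears centered. Yaw-only geometry, a deliberate simplification."""
    # First change: the applied rotation is read directly as a yaw offset.
    forward = yaw_rotation(np.radians(first_change_yaw_deg)) @ device_forward
    # Second change: a lateral shift at a known content distance implies a
    # further angular offset, theta = atan(shift / distance).
    theta = np.arctan2(second_change_shift_m, content_distance_m)
    forward = yaw_rotation(theta) @ forward
    return forward / np.linalg.norm(forward)

# Example: the device assumes forward is +Z; the user rotated the window
# 4 degrees and nudged it 0.05 m left at a 1.0 m viewing distance.
device_fwd = np.array([0.0, 0.0, 1.0])
user_fwd = user_specific_forward(device_fwd, 4.0, -0.05, 1.0)
offset_deg = np.degrees(np.arccos(np.clip(device_fwd @ user_fwd, -1.0, 1.0)))
print(f"user-specific forward: {user_fwd.round(3)}, "
      f"offset from device forward: {offset_deg:.1f} deg")
```

The claimed "characteristic" would then be something like the printed angular offset between the user-specific and device-forward directions, which downstream content placement can consume.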

Prosecution Timeline

Aug 05, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602846
GENERATING REALISTIC MACHINE LEARNING-BASED PRODUCT IMAGES FOR ONLINE CATALOGS
2y 5m to grant • Granted Apr 14, 2026
Patent 12602840
IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant • Granted Apr 14, 2026
Patent 12602871
MESH TOPOLOGY GENERATION USING PARALLEL PROCESSING
2y 5m to grant • Granted Apr 14, 2026
Patent 12592022
INTEGRATION CACHE FOR THREE-DIMENSIONAL (3D) RECONSTRUCTION
2y 5m to grant • Granted Mar 31, 2026
Patent 12586330
DISPLAYING A VIRTUAL OBJECT IN A REAL-LIFE SCENE
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 98% (+15.1%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 673 resolved cases by this examiner. Grant probability derived from career allow rate.
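
The projection figures are internally consistent with the examiner statistics above; a two-line check (the 0.99 cap is an added guard, not something the page states):

```python
# Consistency check for the projections above: grant probability is the
# career allow rate, and the with-interview figure adds the +15.1% lift.
base = 556 / 673                           # 0.826 -> displayed as 83%
with_interview = min(base + 0.151, 0.99)   # cap is an assumption, not from the page
print(f"base: {base:.0%}, with interview: {with_interview:.0%}")
# -> base: 83%, with interview: 98%
```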
