Prosecution Insights
Last updated: April 19, 2026
Application No. 18/683,692

AERIAL IMAGE DISPLAY DEVICE

Status: Non-Final OA (§103)
Filed: Feb 14, 2024
Examiner: LHYMN, SARAH
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Kyocera Corporation
OA Round: 1 (Non-Final)
Grant Probability: 65% (Favorable)
Predicted OA Rounds: 1-2
Est. Time to Grant: 2y 4m
Grant Probability with Interview: 81%

Examiner Intelligence

Career Allow Rate: 65% — above average (357 granted / 546 resolved; +3.4% vs TC avg)
Interview Lift: +15.2% — strong (allowance rate with vs. without an interview, across resolved cases)
Typical Timeline: 2y 4m average prosecution; 30 applications currently pending
Career History: 576 total applications across all art units
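As a quick consistency check, the headline figures in this card can be reproduced from the underlying counts shown above: the allow rate follows from the grant/resolution counts, and the resolved and pending counts sum to the career total. A minimal sketch (variable names are illustrative, not from any tool API):

```python
# Counts from the examiner card: 357 granted of 546 resolved, 30 pending.
granted = 357
resolved = 546
pending = 30

allow_rate = granted / resolved   # career allowance rate
total_apps = resolved + pending   # career application count

print(f"Career allow rate: {allow_rate:.1%}")  # 65.4%, shown rounded as 65%
print(f"Total applications: {total_apps}")     # 576, matching the card
```

The +15.2% interview lift is reported by the tool as a separate career statistic and is not derivable from these counts alone.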

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 63.2% (+23.2% vs TC avg)
§102: 5.9% (-34.1% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 546 resolved cases.
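Read literally, each "vs TC avg" delta is the examiner's per-statute rate minus the Tech Center average. Under that reading, the implied TC baseline works out to 40.0% for every statute listed, which suggests the tool applies a single Tech Center-wide estimate. A sketch, assuming delta = examiner rate − TC average (figures copied from the card above):

```python
# (rate, delta-vs-TC-avg) pairs from the statute chart, in percent.
stats = {"101": (5.4, -34.6), "103": (63.2, 23.2),
         "102": (5.9, -34.1), "112": (15.3, -24.7)}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # recover the implied Tech Center average
    print(f"\u00a7{statute}: implied TC avg = {tc_avg:.1f}%")  # 40.0% in each case
```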

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-4 are rejected under 35 U.S.C. 103 as being unpatentable over Waldron (U.S. Patent No. 11,092,821) in view of JP 04-057430 (“JP-430”); cited in IDS, all citations to translation provided with Applicant’s 02/14/2024 IDS.

Regarding claim 1: Waldron teaches: an aerial image display device (Fig. 2: 120, aerial display system/device), comprising: a display configured to display an image with traveling image light (see e.g. C6, L26-27, the display “uses laser light to create a three-dimensional (3D) image” from Fig. 2: 121, laser projection system, in combination with C6, third paragraph from the top, “The 3-D image 155 may be overlaid on a 2-D image 157 that is presented outside of housing 124, such as on a screen (not shown) outside of housing 124, to give the viewer the optical appearance of a floating 3-D image”.
Here, the “screen” corresponds to a display, configured to display an image (in this example a 3D image overlaid on a 2D image), with traveling light (laser light) from the laser projection system); an imaging optical system (C6, L35-37, “optical elements” which together comprise an optical system) including one or more optical elements, the imaging optical system being configured to receive the traveling image light as incident light (C6, third full paragraph, “The optical elements comprise a polarizer 126, a concave mirror 128, …and a beam diverter or beam splitter 130 positioned between concave mirror 128 and the polarizer 126.” Lens Fig. 3: 50 can also be an optical element); and a drive (Fig. 3: 51, an adjuster) configured to change a positional relationship between an object focal point of the imaging optical system and the display relative to each other, the drive being configured to switch between a first positioning and a second positioning (C9, first full paragraph, “An adjuster 51 is provided on lens 50 to change the focal point of the floating display position and/or size of the floating images, where lens 50 is configured to allow a change in focal point and/or size.” The ability of the adjuster to adjust or change the focal point of the floating display position corresponds to a teaching of changing a positional relationship between an object focal point and the display, and being able to switch between first and second positions).

Re: the first positioning being positioning with the display located closer to the imaging optical system than the object focal point of the imaging optical system to display a virtual image in air, the second positioning being positioning with the display located farther from the imaging optical system than the object focal point of the imaging optical system to display a real image in air, consider the following. In analogous art, JP-430 teaches that it is known to have a display switching switch (i.e. another teaching of the above claimed “drive”), “for switching a display image between a real image and a virtual image, wherein when the real image is selected based on a signal of the display switching switch, the screen is made semi-transparent, the display body is positioned farther than a focal point of the convex lens (this teaches Applicant’s claimed “second positioning” for a real image), and the display body is in an inverted display state Control means for making the screen transparent image is selected, making the screen transparent, positioning a display body closer than a focal point of the convex lens, and bringing the display body into an erect display state (Applicant’s claimed “first positioning” for a virtual image)” (quoting page 1, last six lines, to page 2).

Modifying the applied reference(-s), such to have included the display positioning of JP-430, in the system of Waldron, such to display both virtual and real images, is all of taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill. The prior art included each element recited in claim 1, although not necessarily in a single embodiment, with the only difference between the claimed element and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 2: Waldron teaches: the aerial image display device according to claim 1, wherein the drive moves at least one of the display or the one or more optical elements in a direction including a component parallel to an optical axis of the one or more optical elements closest to the display on an optical path of the imaging optical system (see e.g. Fig. 4(a): 216 and Fig. 4(b): 217. 216 and 217 are two different positions of the display, moved in a direction parallel to an optical axis 103L of one or more optical elements closest to the display on an optical path, such as lens 205). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of Waldron to have obtained the above, motivated to make use of known optical imaging architecture to achieve desired display results.

Regarding claim 3: Waldron teaches: the aerial image display device according to claim 1, wherein the imaging optical system is a reflective optical system or a catadioptric optical system (Waldron teaches at least two examples of optical system configurations that are reflective. As a first example, see claim 1, the device can have a beam diverter and a concave mirror placed to receive the reflected light from the beam diverter. This is one teaching of a reflective optical system. A second example is the configuration of claim 18, which includes a rotating mirror, a fourth concave mirror to receive light reflected by the rotating mirror, and a lens or series of lenses, to also receive and reflect light. These pieces teach a reflective optical system). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of Waldron to have obtained the above, motivated to make use of known optical imaging architecture to achieve desired display results.
Regarding claim 4: Waldron teaches: the aerial image display device according to claim 1, further comprising: a controller configured to change an image to be displayed on the display (C10, last partial paragraph, the device/system can include “a video processor 202 b for generating the video images corresponding to the image data input via the one or more video inputs 202 a, and a laser output 202 c that is operatively connected to video processor 202 b and configured to output the laser beam that include the video images corresponding to the input image data.”). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of Waldron to have obtained the above, motivated to include interactivity to change image display.

Claim(s) 5 is rejected under 35 U.S.C. 103 as being unpatentable over Waldron in view of JP-430, and further in view of Gilles (U.S. Patent App. Pub. No. 2020/0142356 A1).

Regarding claim 5: It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the aerial image display device according to claim 4, wherein the controller switches an up-down orientation of an image to be displayed on the display between an up-down orientation for the first positioning and an up-down orientation for the second positioning, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). JP-430 teaches that it is known, to switch between a real image and a virtual image (i.e. between the first positioning for virtual images, and the second positioning for real images), to invert the display (see e.g. page 7, L1-3, “Further, the display moving device 12 is driven via the drive circuit 18, and the display of the display 7 is switched between inverted and erect via the drive circuit 15”). The inverting (and reversing the inverting, or making erect) teaches switching an up-down orientation of an image to be displayed (by inverting the display).

Moreover, in the interest of compact prosecution, see also Gilles, which teaches that it is known for a device/system to perform image processing on the image itself, to perform the same inversion (switching of up-down orientation). See Gilles, para. 97, “the obtaining of the scene by image synthesis for a virtual scene or from a stereo or 2D+Z camera for a real scene, the obtaining of intensity and depth maps are achieved on the server equipment, then transmitted to the terminal equipment, which carries out the inverted projection, compensation and propagation steps.” Modifying the applied references, in view of JP-430 and Gilles, to have performed the above inversion of images, as it relates to display of real and virtual images, per the prior art, is all of taught and suggested by the applied references, and would have been obvious and predictable to one of ordinary skill. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Claim(s) 6 is rejected under 35 U.S.C. 103 as being unpatentable over Waldron in view of JP-430, and further in view of Zhang (CN 110975348B) (all citations to English language machine translation provided with this Office action).
Regarding claim 6: It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(-s), in view of same, to have obtained: the aerial image display device according to claim 4, wherein the controller switches a distortion correction table for an image to be displayed on the display between a distortion correction table for the first positioning and a distortion correction table for the second positioning, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). Zhang teaches that it is known in augmented reality image processing (see page 1), to obtain “corresponding relationship of distortion parameters of the real camera, where the corresponding relationship of distortion parameters is the corresponding relationship between different lens zoom parameters of the real camera and different distortion correction parameters” to allow for: “performing distortion correction on the real image or performing distortion adjustment on the image of the virtual element according to the lens zoom parameter of the real camera and the corresponding relationship between the distortion parameters, the image of the virtual element is fused and displayed to the real image on the image.” (pages 2-3, quoting in part). See also, another example, at pages 18-19: “Step S140: Determine a distortion correction parameter according to the corresponding relationship between the lens zoom parameter of the real camera and the distortion parameter, and perform distortion correction on the real image or perform distortion adjustment on the image of the virtual element according to the distortion correction parameter. 
In an example of step S140 in this embodiment, the corresponding distortion correction parameters may be queried in the distortion parameter correspondence according to the lens zoom parameters when the real image is captured by the real camera, and then the corresponding distortion correction parameters may be set according to the determined distortion correction parameters. The real image is subjected to distortion correction.”

Accordingly, as shown above, Zhang teaches that obtaining distortion correction parameters for real and virtual image display (corresponding to Applicant’s claimed first positioning and second positioning) is known. The examiner also takes official notice that a table data structure (i.e. “correction table”) is known in the art as a method for listing or holding data. Modifying the applied references, such to include distortion correction parameters, per Zhang, for real and virtual images (i.e. first and second positioning), in a table form, per official notice, is all of taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill in the art. The prior art included each element recited in claim 6, although not necessarily in a single embodiment, with the only difference between the claimed element and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Claim(s) 10 is rejected under 35 U.S.C. 103 as being unpatentable over Waldron in view of JP-430, and further in view of Gruen (U.S. Patent App. Pub. No. 2021/0375050 A1).
Regarding claim 10: The applied references to claim 4 do not proactively teach claim 10. Consider the following. In analogous art, Gruen teaches: the aerial image display device according to claim 4, wherein the controller performs control to cause a frame frequency of an image to be displayed on the display for displaying a virtual image in air to be higher than a frame frequency of an image to be displayed on the display for displaying a real image in air (see para. 42, “Virtual images may be updated with any suitable frame rate. In some examples, the frame rate at which virtual images are presented may match the frame rate at which images of the real-world environment are captured—e.g., 90 frames-per-second—although other frame rates may alternatively be used.” This teaches the features of claim 10, as an embodiment per Gruen whereby the frame frequency (frame rate) of a virtual image is higher than that of a real image. Gruen teaches this as at least one embodiment). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of Gruen to have obtained the above, motivated to make use of known manipulations of frame rate to achieve desired image display results.

Claim(s) 7-9 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Waldron in view of JP-430, and further in view of Crispin (U.S. Patent App. Pub. No. 2021/0349536 A1).
Regarding claim 7: It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(-s), in view of same, to have obtained: the aerial image display device according to claim 4, wherein the controller performs control to enlarge an image to be displayed on the display for displaying a virtual image in air relative to an image to be displayed on the display for displaying a real image in air, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). Crispin teaches that it is known to enlarge images (i.e. per para. 4, “adjust a visual characteristic (e.g., hue, saturation, size, shape, spatial frequency, motion, highlighting, etc.) associated with an object”, said “object” corresponding to a virtual image). This teaches the above enlargement of a virtual image relative to a real image. This can be done in response to a user’s physiological state, as per Crispin (paras. 4-5), though Applicant’s claim 7 does not require or claim any specific triggering event for the size enlargement. Moreover, Crispin further teaches that these image modifications of a virtual image can be done in response to similar features of the real world (see para. 58), which further teaches/suggests the modification (here, image enlargement) of the virtual image relative to the real image. Modifying the applied references, such to have included the teachings of Crispin for virtual image modification (e.g. relative to a real image), is all of taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately.
One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 8: It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the aerial image display device according to claim 4, wherein the controller performs control to cause luminance of an image to be displayed on the display for displaying a virtual image in air to be higher than luminance of an image to be displayed on the display for displaying a real image in air, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). Crispin teaches that it is known to change luminance of images corresponding to a virtual image (see para. 33). This teaches/suggests displaying a virtual image of higher luminance than a real image. This can be done in response to a user’s physiological state, as per Crispin (paras. 4-5), though Applicant’s claim 8 does not require or claim any specific triggering event for the image modification. Moreover, Crispin further teaches that these image modifications of a virtual image can be done in response to similar features of the real world (see para. 58), which further teaches/suggests the modification (here, luminance change) of the virtual image relative to the real image. Modifying the applied references, such to have included the teachings of Crispin for virtual image modification (e.g. relative to a real image), is all of taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately.
One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 9: It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the aerial image display device according to claim 4, wherein the controller performs control to cause a contrast of an image to be displayed on the display for displaying a virtual image in air to be higher than a contrast of an image to be displayed on the display for displaying a real image in air, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). Crispin teaches that it is known to change contrast of images corresponding to a virtual image (see para. 33). This teaches/suggests displaying a virtual image of higher contrast than a real image. This can be done in response to a user’s physiological state, as per Crispin (paras. 4-5), though Applicant’s claim 9 does not require or claim any specific triggering event for the image modification. Moreover, Crispin further teaches that these image modifications of a virtual image can be done in response to similar features of the real world (see para. 58), which further teaches/suggests the modification (here, contrast change) of the virtual image relative to the real image. Modifying the applied references, such to have included the teachings of Crispin for virtual image modification (e.g. relative to a real image), is all of taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately.
One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 11: The applied references to claim 1 do not proactively teach claim 11. Consider the following. In analogous art, Crispin teaches: the aerial image display device according to claim 1, further comprising: a camera configured to capture an image of a user (para. 27, “an eye tracking system may include one or more infrared (IR) light-emitting diodes (LEDs), an eye tracking camera (e.g., near-IR (NIR) camera), and an illumination source (e.g., an NIR light source) that emits light (e.g., NIR light) towards the eyes of the user 25. Moreover, the illumination source of the device 10 may emit NIR light to illuminate the eyes of the user 25 and the NIR camera may capture images of the eyes of the user 25. In some implementations, images captured by the eye tracking system may be analyzed to detect position and movements of the eyes of the user 25, or to detect other information about the eyes such as pupil dilation or pupil diameter.” See also para. 101), wherein the controller changes an image to be displayed on the display based on a position of an eye of the user (see e.g. Fig. 3 and para. 34: “FIG. 3, in accordance with some implementations, is a flowchart representation of a method 300 for adjusting visual characteristics associated with objects (e.g., object 20) to enhance a pupillary response of a user (e.g., user 25).” This teaches changing an image to be displayed (i.e. adjusting visual characteristics of objects) based on a position of an eye (to enhance a pupillary response, based on pupil tracking information, as mapped above). Another example teaching: see paras. 77-79 and/or claims 5, 6 and 10.
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of Crispin to have obtained the above, motivated to adjust image characteristics in response to user pupillary or eye movements, to enhance content delivery.

Regarding claim 12: Crispin teaches: the aerial image display device according to claim 11, wherein the camera captures an image of the user to obtain an image of a pupil of the eye of the user (para. 27, the images captured of the eye include the pupil, “to detect other information about the eyes such as pupil dilation or pupil diameter.” See also para. 101. Also, and alternatively, the pupil is part of an eye. Therefore, an image of the eye will include the pupil by definition), and the controller enlarges an image to be displayed on the display when the pupil enlarges (see e.g. para. 4, “Various implementations disclosed herein include devices, systems, and methods that adjust a visual characteristic (e.g., hue, saturation, size, shape, spatial frequency, motion, highlighting, etc.) associated with an object … to enhance pupillary responses of a user to the display of the object. The device …displays, on a display, the visual characteristic associated with the object to the user and obtains, with a sensor, physiological data (e.g., pupil dilation) associated with a response of the user to the visual characteristic. Based on the obtained physiological data, the device adjusts the visual characteristic to enhance the pupillary response of the user to the object”). See also paras. 6-7.
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of Crispin to have obtained the above, and the results further would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). Crispin teaches that pupil dilation (an enlarged pupil) can be used and measured to adjust visual characteristics, such as size. Applicant’s claim 12 is one embodiment of the teachings of Crispin. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 13: Crispin teaches: the aerial image display device according to claim 11, wherein the camera captures an image of the user to obtain an image of a pupil of the eye of the user (para. 27, the images captured of the eye include the pupil, “to detect other information about the eyes such as pupil dilation or pupil diameter.” Also, and alternatively, the pupil is part of an eye. Therefore, an image of the eye will include the pupil by definition), and the controller increases luminance of an image to be displayed on the display when the pupil enlarges (see para. 33, changes in luminance of an object are also taught in response to pupil size). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of Crispin to have obtained the above, and the results further would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention.
See MPEP §2143(A). Crispin teaches that pupil dilation (an enlarged pupil) can be used and measured to adjust visual characteristics, such as object luminance. Applicant’s claim 13 is one embodiment of the teachings of Crispin. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 14: Crispin teaches: the aerial image display device according to claim 11, wherein the camera captures an image of the user to obtain an image of a pupil of the eye of the user (para. 27, the images captured of the eye include the pupil, “to detect other information about the eyes such as pupil dilation or pupil diameter.” See also para. 101. Also, and alternatively, the pupil is part of an eye. Therefore, an image of the eye will include the pupil by definition), and the controller increases a contrast of an image to be displayed on the display when the pupil enlarges (see para. 33, changes in contrast of an object are also taught in response to pupil size). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of Crispin to have obtained the above, and the results further would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). Crispin teaches that pupil dilation (an enlarged pupil) can be used and measured to adjust visual characteristics, such as object contrast. Applicant’s claim 14 is one embodiment of the teachings of Crispin.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Claim(s) 15 is rejected under 35 U.S.C. 103 as being unpatentable over Waldron in view of JP-430 and Crispin, and further in view of Pacheco (U.S. Patent App. Pub. No. 2016/0334868).

Regarding claim 15: It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(-s), in view of same, to have obtained: the aerial image display device according to claim 11, wherein the camera captures an image of the user to obtain an image of a pupil of the eye of the user (Crispin, para. 27, the images captured of the eye include the pupil, “to detect other information about the eyes such as pupil dilation or pupil diameter.” See also para. 101. Also, and alternatively, the pupil is part of an eye. Therefore, an image of the eye will include the pupil by definition), and the controller increases a frame frequency of an image to be displayed on the display when the pupil enlarges (see Pacheco, e.g. paras. 31, 34 and/or 46. Quoting para. 34 in part: “After an image is captured, eye comfort monitor 206 may process the image to determine whether the user eye comfort is adequate….
Eye comfort monitor 206 may analyze the image of an eye of the user to obtain measurements used to identify and/or prevent any suitable symptom of CVS including whether the pupil is dilated or contracted…Based on the analysis of the image, eye comfort monitor 206 may generate a control message to send to display 204 to change any suitable setting of display 204 …, including the brightness, font size, zoom level, sharpness, contrast, refresh rate, or color scale of display 204.”), and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). The prior art included each element recited in claim 15, although not necessarily in a single embodiment, with the only difference between the claimed element and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure, relevant to image and light processing.

* * * * *

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sarah Lhymn whose telephone number is (571)270-0632. The examiner can normally be reached M-F, 9:00 AM to 6:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

Sarah Lhymn
Primary Examiner
Art Unit 2613

/Sarah Lhymn/
Primary Examiner, Art Unit 2613
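The mechanism recited in claim 15 — the controller increasing the display's frame frequency when the user's pupil enlarges — can be sketched as a minimal control step. This is an illustrative sketch only: the function and variable names, the threshold, and the two frequency values are assumptions for demonstration and do not come from the application or the cited references.

```python
# Illustrative sketch of the claim 15 control logic: when an eye image
# shows an enlarged pupil, select a higher frame frequency for the display.
# All names and numeric values below are hypothetical.

BASELINE_HZ = 60          # hypothetical default frame frequency
BOOSTED_HZ = 120          # hypothetical increased frame frequency
DILATION_THRESHOLD_MM = 4.5  # hypothetical pupil-diameter threshold


def select_frame_frequency(pupil_diameter_mm: float) -> int:
    """Return the frame frequency to apply, given a measured pupil diameter.

    A pupil diameter above the threshold is treated as "enlarged", and the
    controller responds by increasing the frame frequency.
    """
    if pupil_diameter_mm > DILATION_THRESHOLD_MM:
        return BOOSTED_HZ
    return BASELINE_HZ
```

In a real device the pupil diameter would come from analysis of the camera image (as in Crispin's pupil measurements), and the returned value would drive the display's refresh setting (as in Pacheco's control message to display 204).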

Prosecution Timeline

Feb 14, 2024
Application Filed
Nov 11, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602882 — AUGMENTED REALITY DISPLAY DEVICE AND AUGMENTED REALITY DISPLAY SYSTEM
2y 5m to grant · Granted Apr 14, 2026

Patent 12602764 — METHODS OF ARTIFICIAL INTELLIGENCE-ASSISTED INFRASTRUCTURE ASSESSMENT USING MIXED REALITY SYSTEMS
2y 5m to grant · Granted Apr 14, 2026

Patent 12602746 — SYSTEM AND METHOD FOR BACKGROUND MODELLING FOR A VIDEO STREAM
2y 5m to grant · Granted Apr 14, 2026

Patent 12585888 — AUTOMATICALLY GENERATING DESCRIPTIONS OF AUGMENTED REALITY EFFECTS
2y 5m to grant · Granted Mar 24, 2026

Patent 12586163 — INTERACTIVELY REFINING A DIGITAL IMAGE DEPTH MAP FOR NON DESTRUCTIVE SYNTHETIC LENS BLUR
2y 5m to grant · Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
65%
Grant Probability
81%
With Interview (+15.2%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 546 resolved cases by this examiner. Grant probability derived from career allow rate.
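The headline figures above can be reproduced from the examiner's career data shown earlier (357 granted of 546 resolved, +15.2% interview lift). This assumes the "with interview" probability is simply the career allow rate plus the interview lift in percentage points; the tool's exact model is not disclosed, so this is a back-of-the-envelope check, not the actual methodology.

```python
# Back-of-the-envelope check of the displayed projections, assuming simple
# addition of the interview lift (in percentage points) to the career allow rate.

granted, resolved = 357, 546
allow_rate = 100 * granted / resolved        # ~65.4%, displayed rounded as 65%
interview_lift = 15.2                        # percentage points, from examiner data
with_interview = allow_rate + interview_lift # ~80.6%, displayed rounded as 81%

print(round(allow_rate), round(with_interview))  # prints: 65 81
```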
