Prosecution Insights
Last updated: April 19, 2026
Application No. 18/638,553

AR MIRROR

Final Rejection (§103)
Filed: Apr 17, 2024
Examiner: GODDARD, TAMMY
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Snap Inc.
OA Round: 2 (Final)
Grant Probability: 30% (At Risk)
OA Rounds: 3-4
Time to Grant: 5y 4m
Grant Probability with Interview: 49%

Examiner Intelligence

Career Allow Rate: 30% (41 granted / 138 resolved; -32.3% vs TC avg)
Interview Lift: +19.5% (allowance rate with vs. without interview, among resolved cases with an interview)
Avg Prosecution: 5y 4m (typical timeline)
Total Applications: 148 across all art units (138 resolved, 10 currently pending)

Statute-Specific Performance

§101: 3.3% (-36.7% vs TC avg)
§103: 59.4% (+19.4% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 138 resolved cases.
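Note: the headline figures above are simple ratios over the examiner's resolved docket. A minimal sketch of how such metrics can be derived, assuming a hypothetical list of case records (the field names and helpers are illustrative, not the analytics provider's actual pipeline):

```python
from dataclasses import dataclass

@dataclass
class CaseRecord:
    granted: bool          # resolved as allowed vs. abandoned
    had_interview: bool    # at least one examiner interview of record

def allow_rate(cases: list[CaseRecord]) -> float:
    """Career allow rate: granted / resolved."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[CaseRecord]) -> float:
    """Allowance-rate difference, with vs. without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Toy check against the dashboard: 41 granted of 138 resolved is ~29.7%,
# which the dashboard rounds to 30%.
resolved = [CaseRecord(granted=i < 41, had_interview=False) for i in range(138)]
print(f"{allow_rate(resolved):.1%}")  # 29.7%
```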

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is responsive to the amendment received 03 March 2026. Claims 1, 8 and 15-20 are currently amended and claims 2-7 and 9-14 are as originally presented. In summary, claims 1-20 are pending in the application. The amendment of claims 15-20 has cured the basis for the rejection of those claims under 35 U.S.C. § 101 (the claimed invention was directed to non-statutory subject matter); accordingly, the 35 U.S.C. § 101 rejection of claims 15-20 is hereby withdrawn.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Vilcovsky et al. (U.S. Patent Application Publication 2014/0226000 A1, already of record, hereafter ‘6000) in view of Zhou et al. (U.S. Patent Application Publication 2021/0152734 A1, hereafter ‘734).

Regarding claim 1 (Currently Amended), Vilcovsky teaches a computer-implemented method (‘6000; Abstract; a method for operating a mirror-display) comprising: providing, by one or more processors (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…), an Augmented Reality (AR) user interface (‘6000; ¶ 0224; In step 1704 the user is given user control over the display. In one embodiment, the specific control preferences are saved for each particular user and are activated once the user has been recognized. Otherwise, general user interface is enabled, e.g., a hand gesture activated interface) of an AR system to a user (‘6000; ¶ 0066; FIG. 1 is a system block diagram for an augmented reality platform supporting a real-time or recorded video/image processing. The system can include one or a plurality (1:n) of input devices 101, including a video camera, a still camera, an IR camera, a 2D camera or a 3D camera. The input device 101 can be adapted to send information to one or more machine vision augmented reality modules 102, 103, 104, 105, 107, 108 and 109.
The one or more machine vision augmented reality modules 102, 103, 104, 105, 107, 108 and 109 can be adapted to send information to one or a plurality (1:m) of screens 106. The one or more machine vision augmented reality modules 102, 103, 104, 105, 107, 108 and 109 can be adapted to send/receive information to/from an interface or user interface module 110. The interface 110 can be adapted to send/receive information to/from one or more of a cloud, a web/store or a user device, e.g., smart phone or tablet.), the AR user interface displayed on a display screen (‘6000; ¶ 0210, When the user is identified, the user's account can be opened and the last recordings can be displayed, e.g., in one embodiment, a thumbnail configuration 1510 can be displayed, such as that depicted in FIG. 15. Alternatively, any other image control bar can be displayed. If the user is not identified, a user registration process can commence, then, after a few seconds, a new account can be opened, and the mirror can be configured to start recording automatically. For example, if the user is not identified, a code, such as a QR can be displayed on the monitor, such that the user can scan it with a mobile device to download the app. When the app is downloaded and the user completes the registration process, the app on the user's device would include a code that can be presented to the frame in future visits.¶ 0232, In one embodiment, the concept of a pre-defined number of thumbnails, as shown in FIGS. 15 and 16, can be replaced with a slide menu of thumbnails); capturing, by the one or more processors (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…), using one or more cameras (‘6000; ¶ 0040, Embodiments of the present invention utilize a camera, and a flat panel display to provide the user with the experience of looking at a mirror…), scene image data of a real-world scene (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 14) including the user (‘6000; ¶ 0206; In one embodiment, the presence of a user is done by continuously analyzing the images captured by the camera to detect changes in the image and identify a user; ¶ 0207, the system can be configured so that, when the user steps into the tracking or registration zone in front of the mirror, the video engine of the controller reacts and starts tracking the object. 
Based on the object location, the video engine can adjust the video transformation to mimic mirror behavior…); generating, by the one or more processors (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…), pre-processed scene image data using the scene image data (‘6000; ¶ 0206; In one embodiment, the presence of a user is done by continuously analyzing the images captured by the camera to detect changes in the image and identify a user; ¶ 0207, the system can be configured so that, when the user steps into the tracking or registration zone in front of the mirror, the video engine of the controller reacts and starts tracking the object. Based on the object location, the video engine can adjust the video transformation to mimic mirror behavior…) generating, by the one or more processors (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…), restored scene image data (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 14) using the pre-processed scene image data (‘6000; ¶ 0206; In one embodiment, the presence of a user is done by continuously analyzing the images captured by the camera to detect changes in the image and identify a user; ¶ 0207, the system can be configured so that, when the user steps into the tracking or registration zone in front of the mirror, the video engine of the controller reacts and starts tracking the object. Based on the object location, the video engine can adjust the video transformation to mimic mirror behavior…); generating, by the one or more processors (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…), mirror image data using the restored scene image data (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 14); generating, by the one or more processors (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…), AR mirror image data using the mirror image data (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. 
In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 14); and providing, by the one or more processors (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…), an AR mirror image to the user using the AR mirror image data (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 14) and the AR user interface (‘6000; ¶ 0210, When the user is identified, the user's account can be opened and the last recordings can be displayed, e.g., in one embodiment, a thumbnail configuration 1510 can be displayed, such as that depicted in FIG. 15. Alternatively, any other image control bar can be displayed. If the user is not identified, a user registration process can commence, then, after a few seconds, a new account can be opened, and the mirror can be configured to start recording automatically. For example, if the user is not identified, a code, such as a QR can be displayed on the monitor, such that the user can scan it with a mobile device to download the app. When the app is downloaded and the user completes the registration process, the app on the user's device would include a code that can be presented to the frame in future visits.¶ 0232, In one embodiment, the concept of a pre-defined number of thumbnails, as shown in FIGS. 15 and 16, can be replaced with a slide menu of thumbnails) and does not teach a model trained to restore the scene image data by correcting for one or more of a blur introduced by a point spread function of the display screen, a wiring effect introduced by display elements of the display screen, or backscatter from displayed content on the display screen. Zhou, working in the same field of endeavor, however, teaches a model trained to restore the scene image data (‘734; ¶ 0005, Another disclosed example provides a computing device comprising a display, a camera positioned behind the display, a logic subsystem, and a storage subsystem storing instructions executable by the logic subsystem to acquire an image through the display via the camera, input the image into a machine learning model trained on degraded and undegraded image pairs, and output, via the machine learning model, a restored image comprising generated information in a frequency region of the image that is degraded due to having acquiring the image through the display; fig. 12; ¶ 0017, FIG. 12 is a block diagram illustrating an example architecture of an image restoration machine learning model in the form of a convolutional neural network (CNN).) by correcting for one or more of a blur introduced by a point spread function of the display screen (‘734; ¶ 0047, FIG. 
4 shows example diffraction patterns obtained by illuminating an example tOLED display 402 and an example pOLED display 404 with coherent red light in a wavelength of 633 nanometers (nm), revealing the point spread function (PSF) of each through-display imaging system. The tOLED display 402 comprises a grating-like structure of vertical slits, and the resulting diffraction pattern 406 extends primarily in a horizontal direction. The pOLED display 404 comprises a pentile structure and the diffraction pattern 408 extends in vertical and horizontal directions.), a wiring effect introduced by display elements of the display screen (‘734; ¶ 0045, ….The light transmission rate for the example pOLED display 300, measured with a spectrophotometer and white light source, was ˜2.9%. This relatively low value may be attributed to various factors, such as transparent traces that may scatter and diffract light, external Fresnel reflections, a circular polarizer, and/or a substrate (e.g., various glasses or plastics, which may absorb in some regions of the visible spectrum, for example) of the display panel…), or backscatter from displayed content on the display screen (‘734; ¶ 0045,…The light transmission rate for the example pOLED display 300, measured with a spectrophotometer and white light source, was ˜2.9%. This relatively low value may be attributed to various factors, such as transparent traces that may scatter and diffract light, external Fresnel reflections, a circular polarizer, and/or a substrate (e.g., various glasses or plastics, which may absorb in some regions of the visible spectrum, for example) of the display panel. The low light transmission translates to less light reaching the camera lens, causing a relatively lower signal-to-noise ratio (SNR) compared to the tOLED display 200) for the benefit of providing a machine learning model to generate a restored image comprising generated information in a frequency region of the image that is degraded due to acquiring the image through the display. It would have been obvious to one of ordinary skill in the art prior to the filing of the invention to have combined the techniques for implementing a model trained to restore the scene image data as taught by Zhou with the AR mirror systems and methods taught by Vilcovsky for the benefit of providing a machine learning model to generate a restored image comprising generated information in a frequency region of the image that is degraded due to acquiring the image through the display. Regarding claim 2 (Original), Vilcovsky and Zhou teach the computer-implemented method of claim 1 and further teach the method as further comprising: capturing, by the one or more processors (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…), using one or more scene-reference cameras of the AR system (‘6000; ¶ 0040, Embodiments of the present invention utilize a camera, and a flat panel display to provide the user with the experience of looking at a mirror…), reference scene image data (‘6000; ¶ 0034; FIG. 14 depicts an example of transformation on a camera stream to create a mirror effect on the background environment). Regarding claim 3 (Original), Vilcovsky and Zhou teach the computer-implemented method of claim 2 and further teach wherein generating the pre-processed scene image data further uses the reference scene image data (‘6000; ¶ 0034; FIG. 
14 depicts an example of transformation on a camera stream to create a mirror effect on the background environment). Regarding claim 4 (Original), Vilcovsky and Zhou teach the computer-implemented method of claim 3 and further teach wherein generating the mirror image data further uses the reference scene image data (‘6000; ¶ 0035; FIG. 15 illustrates an example of a virtual mirror with user recognition and authentication and with user interface on the background (reference scene image data ) of fig. 14). Regarding claim 5 (Original), Vilcovsky and Zhou teach the computer-implemented method of claim 1 and further teach the method as further comprising: capturing, by the one or more processors, using one or more user-reference cameras, user image data (‘6000; fig. 15; ¶ 0223; In step 1702, as a user approaches the mirror and the user's presence is sensed, e.g., by motion sensor or by detecting a change in the image seen by the camera, the system initiates operation in mirror mode. That is, the controller perform transformation operation on the image of the user, such that the image presented on the monitor mimics the user's reflection in a mirror); determining, by the one or more processors (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…), an eye level of the user using the user image data (‘6000; ¶ 0069; The eyes-match transformation module 103 can be adapted to apply on the image the right mapping to match the camera point of view with theoretical mirror point of view (user eyes reflection) and fill the blind pixels if there are any after the mapping. The eyes-match transformation module 103 can be adapted to send information to the augmented reality module 104 and/or the video/still recording module 105. Also, the eyes-match transformation module 103 can be adapted to send/receive information to/from the control element module 108. Further, the eyes-match transformation module 103 can be adapted to send information to the one or plurality of screens 106, to display an image that mimics a reflection of a mirror); and positioning, by the one or more processors (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…), the one or more cameras at the eye level of the user using the eye level of the user (‘6000; ¶ 0069; The eyes-match transformation module 103 can be adapted to apply on the image the right mapping to match the camera point of view with theoretical mirror point of view (user eyes reflection) and fill the blind pixels if there are any after the mapping. The eyes-match transformation module 103 can be adapted to send information to the augmented reality module 104 and/or the video/still recording module 105. Also, the eyes-match transformation module 103 can be adapted to send/receive information to/from the control element module 108. Further, the eyes-match transformation module 103 can be adapted to send information to the one or plurality of screens 106, to display an image that mimics a reflection of a mirror). 
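Practitioner note on the claim 5 mapping: the cited eyes-match idea reduces to estimating the user's eye height from the captured image and moving the capture viewpoint to match it. A minimal geometric sketch, with an assumed linear calibration and hypothetical helper names (nothing below comes from '6000 itself):

```python
def eye_level_from_image(eye_row_px: int, frame_height_px: int,
                         top_of_view_m: float, bottom_of_view_m: float) -> float:
    """Map the detected eye row in the frame to a physical height in meters.

    Assumes a calibrated vertical field of view spanning top_of_view_m to
    bottom_of_view_m at the user's standing plane (illustrative only).
    """
    frac = eye_row_px / frame_height_px            # 0.0 at the top of the frame
    return top_of_view_m + frac * (bottom_of_view_m - top_of_view_m)

def pick_camera_at_eye_level(eye_level_m: float,
                             camera_heights_m: list[float]) -> int:
    """Choose the camera (or actuator stop) closest to the user's eye level."""
    return min(range(len(camera_heights_m)),
               key=lambda i: abs(camera_heights_m[i] - eye_level_m))

# Example: eyes detected a quarter of the way down a frame spanning 2.2 m to 0 m.
level = eye_level_from_image(270, 1080, 2.2, 0.0)             # ~1.65 m
print(pick_camera_at_eye_level(level, [0.8, 1.2, 1.6, 2.0]))  # index 2
```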
Regarding claim 6 (Original), Vilcovsky and Zhou teach the computer-implemented method of claim 1 and further teach wherein generating the AR mirror image data comprises: detecting a user image in the mirror image data (‘6000; ¶ 0206; In one embodiment, the presence of a user is done by continuously analyzing the images captured by the camera to detect changes in the image and identify a user; ¶ 0207, the system can be configured so that, when the user steps into the tracking or registration zone in front of the mirror, the video engine of the controller reacts and starts tracking the object. Based on the object location, the video engine can adjust the video transformation to mimic mirror behavior…); and overlaying a virtual object on the user image (‘6000; ¶ 0082; FIG. 2 depicts an example of an augmented reality module, which can correspond with the augmented reality module 104 described above. Specifically, the augmented reality module can have a function of allowing a user to virtually dress themselves, change appearances, such as color, accessories, etc. In this embodiment, the system obtains input image or video from, for example, the EyesMatch computerized method 201, or from any other image/video source, e.g., user smartphone, security camera, Google glass, mobile camera or stationary camera. Additional embodiments can include additional geometric information that will help to calculate proportion like user height, gaze and the like. If the user video or image is coming from the EyesMatch module (calibrated image/video), a more comprehensive model can be created that allows for body measurements, object pose, size, highly accurate orientation detection and the like. The additional information that can be calculated from the calibrated object or video can allow for object fitting, object replacement and insertion of new objects into the frame/video, since any distortion introduced by the location and field of view of the camera has been accounted for and corrected. These corrections enable highly accurate measurements of the user height, waist, etc., and fitting of the user's body to generally classified body types). Regarding claim 7 (Original), Vilcovsky and Zhou teach the computer-implemented method of claim 1 and further teach wherein the display screen is a see-through display screen (‘6000; ¶ 0042; In the disclosed embodiments the camera can be located anywhere. A best practice is to provide the camera above the screen facing the user. Additional locations can include the bottom of the screen, the sides of the screen or behind the screen if the screen is a bidirectional screen.), and wherein the one or more cameras are positioned behind the display screen to capture the scene image data (‘6000; ¶ 0042; In the disclosed embodiments the camera can be located anywhere. A best practice is to provide the camera above the screen facing the user. Additional locations can include the bottom of the screen, the sides of the screen or behind the screen if the screen is a bidirectional screen.). 
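Practitioner note on the claim 6 mapping (detect the user image, then overlay a virtual object): the overlay step is, at its simplest, an alpha blend of a rendered item onto the mirror frame. A minimal sketch assuming NumPy arrays and an RGBA asset (the function name and region handling are illustrative):

```python
import numpy as np

def overlay_virtual_object(mirror_frame: np.ndarray, obj_rgba: np.ndarray,
                           x0: int, y0: int) -> np.ndarray:
    """Alpha-blend an RGBA virtual object onto the mirror image at (x0, y0)."""
    h, w = obj_rgba.shape[:2]
    region = mirror_frame[y0:y0 + h, x0:x0 + w].astype(np.float32)
    rgb = obj_rgba[..., :3].astype(np.float32)
    alpha = obj_rgba[..., 3:4].astype(np.float32) / 255.0   # per-pixel opacity
    blended = alpha * rgb + (1.0 - alpha) * region
    mirror_frame[y0:y0 + h, x0:x0 + w] = blended.astype(np.uint8)
    return mirror_frame
```

In practice the (x0, y0) anchor would come from the detected user pose, e.g., shoulder keypoints for a garment try-on.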
Regarding claim 8 (Currently Amended), Vilcovsky teaches a machine (‘6000; ¶ 0015; In some embodiments, computer implemented method for operating a system having a monitor, a camera, and a processor is provided and configured so as to display a user's image on the monitor…) comprising: on a device having the processor and a memory storing a program for execution by the processor (‘6000; ¶ 0235; The embodiments may be implemented in a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, so as to display a mirror-mimicking image on the monitor, comprising: on a device having the processor and a memory storing a program for execution by the processor, the program including instructions for: sensing for a user; initiating a mirror-mimicking mode for displaying the mirror-mimicking image on the monitor; initiating an authentication process; and prompting the user to control the monitor), one or more processors (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…); and a memory storing instructions that (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…), when executed by the one or more processors (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…), cause the machine to perform operations (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…) comprising: providing an Augmented Reality (AR) user interface (‘6000; ¶ 0224; In step 1704 the user is given user control over the display. In one embodiment, the specific control preferences are saved for each particular user and are activated once the user has been recognized. Otherwise, general user interface is enabled, e.g., a hand gesture activated interface) of an AR system to a user (‘6000; ¶ 0066; FIG. 1 is a system block diagram for an augmented reality platform supporting a real-time or recorded video/image processing. The system can include one or a plurality (1:n) of input devices 101, including a video camera, a still camera, an IR camera, a 2D camera or a 3D camera. The input device 101 can be adapted to send information to one or more machine vision augmented reality modules 102, 103, 104, 105, 107, 108 and 109. The one or more machine vision augmented reality modules 102, 103, 104, 105, 107, 108 and 109 can be adapted to send information to one or a plurality (1:m) of screens 106. The one or more machine vision augmented reality modules 102, 103, 104, 105, 107, 108 and 109 can be adapted to send/receive information to/from an interface or user interface module 110.
The interface 110 can be adapted to send/receive information to/from one or more of a cloud, a web/store or a user device, e.g., smart phone or tablet.), the AR user interface displayed on a display screen (‘6000; ¶ 0210, When the user is identified, the user's account can be opened and the last recordings can be displayed, e.g., in one embodiment, a thumbnail configuration 1510 can be displayed, such as that depicted in FIG. 15. Alternatively, any other image control bar can be displayed. If the user is not identified, a user registration process can commence, then, after a few seconds, a new account can be opened, and the mirror can be configured to start recording automatically. For example, if the user is not identified, a code, such as a QR can be displayed on the monitor, such that the user can scan it with a mobile device to download the app. When the app is downloaded and the user completes the registration process, the app on the user's device would include a code that can be presented to the frame in future visits.¶ 0232, In one embodiment, the concept of a pre-defined number of thumbnails, as shown in FIGS. 15 and 16, can be replaced with a slide menu of thumbnails); capturing using one or more cameras (‘6000; ¶ 0040, Embodiments of the present invention utilize a camera, and a flat panel display to provide the user with the experience of looking at a mirror…), scene image data of a real-world scene (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 14) including the user (‘6000; ¶ 0206; In one embodiment, the presence of a user is done by continuously analyzing the images captured by the camera to detect changes in the image and identify a user; ¶ 0207, the system can be configured so that, when the user steps into the tracking or registration zone in front of the mirror, the video engine of the controller reacts and starts tracking the object. Based on the object location, the video engine can adjust the video transformation to mimic mirror behavior…); generating pre-processed scene image data using the scene image data (‘6000; ¶ 0206; In one embodiment, the presence of a user is done by continuously analyzing the images captured by the camera to detect changes in the image and identify a user; ¶ 0207, the system can be configured so that, when the user steps into the tracking or registration zone in front of the mirror, the video engine of the controller reacts and starts tracking the object. Based on the object location, the video engine can adjust the video transformation to mimic mirror behavior…); generating restored scene image data (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 
14) using the pre-processed scene image data (‘6000; ¶ 0206; In one embodiment, the presence of a user is done by continuously analyzing the images captured by the camera to detect changes in the image and identify a user; ¶ 0207, the system can be configured so that, when the user steps into the tracking or registration zone in front of the mirror, the video engine of the controller reacts and starts tracking the object. Based on the object location, the video engine can adjust the video transformation to mimic mirror behavior…); generating mirror image data using the restored scene image data (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 14); generating AR mirror image data using the mirror image data (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 14); and providing an AR mirror image to the user using the AR mirror image data (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 14) and the AR user interface (‘6000; ¶ 0210, When the user is identified, the user's account can be opened and the last recordings can be displayed, e.g., in one embodiment, a thumbnail configuration 1510 can be displayed, such as that depicted in FIG. 15. Alternatively, any other image control bar can be displayed. If the user is not identified, a user registration process can commence, then, after a few seconds, a new account can be opened, and the mirror can be configured to start recording automatically. For example, if the user is not identified, a code, such as a QR can be displayed on the monitor, such that the user can scan it with a mobile device to download the app. When the app is downloaded and the user completes the registration process, the app on the user's device would include a code that can be presented to the frame in future visits.¶ 0232, In one embodiment, the concept of a pre-defined number of thumbnails, as shown in FIGS. 15 and 16, can be replaced with a slide menu of thumbnails) and does not teach a model trained to restore the scene image data by correcting for one or more of a blur introduced by a point spread function of the display screen, a wiring effect introduced by display elements of the display screen, or backscatter from displayed content on the display screen.
Zhou, working in the same field of endeavor, however, teaches a model trained to restore the scene image data (‘734; ¶ 0005, Another disclosed example provides a computing device comprising a display, a camera positioned behind the display, a logic subsystem, and a storage subsystem storing instructions executable by the logic subsystem to acquire an image through the display via the camera, input the image into a machine learning model trained on degraded and undegraded image pairs, and output, via the machine learning model, a restored image comprising generated information in a frequency region of the image that is degraded due to having acquiring the image through the display; fig. 12; ¶ 0017, FIG. 12 is a block diagram illustrating an example architecture of an image restoration machine learning model in the form of a convolutional neural network (CNN).) by correcting for one or more of a blur introduced by a point spread function of the display screen (‘734; ¶ 0047, FIG. 4 shows example diffraction patterns obtained by illuminating an example tOLED display 402 and an example pOLED display 404 with coherent red light in a wavelength of 633 nanometers (nm), revealing the point spread function (PSF) of each through-display imaging system. The tOLED display 402 comprises a grating-like structure of vertical slits, and the resulting diffraction pattern 406 extends primarily in a horizontal direction. The pOLED display 404 comprises a pentile structure and the diffraction pattern 408 extends in vertical and horizontal directions.), a wiring effect introduced by display elements of the display screen (‘734; ¶ 0045, ….The light transmission rate for the example pOLED display 300, measured with a spectrophotometer and white light source, was ˜2.9%. This relatively low value may be attributed to various factors, such as transparent traces that may scatter and diffract light, external Fresnel reflections, a circular polarizer, and/or a substrate (e.g., various glasses or plastics, which may absorb in some regions of the visible spectrum, for example) of the display panel…), or backscatter from displayed content on the display screen (‘734; ¶ 0045,…The light transmission rate for the example pOLED display 300, measured with a spectrophotometer and white light source, was ˜2.9%. This relatively low value may be attributed to various factors, such as transparent traces that may scatter and diffract light, external Fresnel reflections, a circular polarizer, and/or a substrate (e.g., various glasses or plastics, which may absorb in some regions of the visible spectrum, for example) of the display panel. The low light transmission translates to less light reaching the camera lens, causing a relatively lower signal-to-noise ratio (SNR) compared to the tOLED display 200) for the benefit of providing a machine learning model to generate a restored image comprising generated information in a frequency region of the image that is degraded due to acquiring the image through the display. It would have been obvious to one of ordinary skill in the art prior to the filing of the invention to have combined the techniques for implementing a model trained to restore the scene image data as taught by Zhou with the AR mirror systems and methods taught by Vilcovsky for the benefit of providing a machine learning model to generate a restored image comprising generated information in a frequency region of the image that is degraded due to acquiring the image through the display. 
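Practitioner note: the Zhou mapping hinges on a model "trained on degraded and undegraded image pairs" ('734, ¶ 0005). As a rough illustration of that training setup only, a sketch in PyTorch with a placeholder network (Zhou's actual CNN of FIG. 12 is not reproduced here):

```python
import torch
import torch.nn as nn

class RestoreNet(nn.Module):
    """Placeholder conv stack standing in for an image-restoration CNN."""
    def __init__(self) -> None:
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)   # predict a residual correction

model = RestoreNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.L1Loss()

# Stand-in batch: through-display captures paired with clean references.
degraded = torch.rand(4, 3, 128, 128)
clean = torch.rand(4, 3, 128, 128)

for _ in range(10):                      # a few illustrative steps
    restored = model(degraded)
    loss = criterion(restored, clean)    # penalize deviation from the clean pair
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```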
Regarding claim 9 (Original), Vilcovsky and Zhou teach the machine of claim 8 and further teach wherein the operations further comprise: capturing, using one or more scene-reference cameras of the AR system (‘6000; ¶ 0040, Embodiments of the present invention utilize a camera, and a flat panel display to provide the user with the experience of looking at a mirror…), reference scene image data (‘6000; ¶ 0034; FIG. 14 depicts an example of transformation on a camera stream to create a mirror effect on the background environment). Regarding claim 10 (Original), Vilcovsky and Zhou teach the machine of claim 9 and further teach wherein generating the pre-processed scene image data further uses the reference scene image data (‘6000; ¶ 0034; FIG. 14 depicts an example of transformation on a camera stream to create a mirror effect on the background environment). Regarding claim 11 (Original), Vilcovsky and Zhou teach the machine of claim 10 and further teach wherein generating the mirror image data further uses the reference scene image data (‘6000; ¶ 0034; FIG. 14 depicts an example of transformation on a camera stream to create a mirror effect on the background environment). Regarding claim 12 (Original), Vilcovsky and Zhou teach the machine of claim 8 and further teach wherein the operations further comprise: capturing, using one or more user-reference cameras, user image data (‘6000; fig. 15; ¶ 0223; In step 1702, as a user approaches the mirror and the user's presence is sensed, e.g., by motion sensor or by detecting a change in the image seen by the camera, the system initiates operation in mirror mode. That is, the controller perform transformation operation on the image of the user, such that the image presented on the monitor mimics the user's reflection in a mirror); determining an eye level of the user using the user image data (‘6000; ¶ 0069; The eyes-match transformation module 103 can be adapted to apply on the image the right mapping to match the camera point of view with theoretical mirror point of view (user eyes reflection) and fill the blind pixels if there are any after the mapping. The eyes-match transformation module 103 can be adapted to send information to the augmented reality module 104 and/or the video/still recording module 105. Also, the eyes-match transformation module 103 can be adapted to send/receive information to/from the control element module 108. Further, the eyes-match transformation module 103 can be adapted to send information to the one or plurality of screens 106, to display an image that mimics a reflection of a mirror); and positioning the one or more cameras at the eye level of the user using the eye level of the user (‘6000; ¶ 0069; The eyes-match transformation module 103 can be adapted to apply on the image the right mapping to match the camera point of view with theoretical mirror point of view (user eyes reflection) and fill the blind pixels if there are any after the mapping. The eyes-match transformation module 103 can be adapted to send information to the augmented reality module 104 and/or the video/still recording module 105. Also, the eyes-match transformation module 103 can be adapted to send/receive information to/from the control element module 108. Further, the eyes-match transformation module 103 can be adapted to send information to the one or plurality of screens 106, to display an image that mimics a reflection of a mirror).
Regarding claim 13 (Original), Vilcovsky and Zhou teach the machine of claim 8 and further teach wherein generating the AR mirror image data comprises: detecting a user image in the mirror image data (‘6000; ¶ 0206; In one embodiment, the presence of a user is done by continuously analyzing the images captured by the camera to detect changes in the image and identify a user; ¶ 0207, the system can be configured so that, when the user steps into the tracking or registration zone in front of the mirror, the video engine of the controller reacts and starts tracking the object. Based on the object location, the video engine can adjust the video transformation to mimic mirror behavior…); and overlaying a virtual object on the user image (‘6000; ¶ 0082; FIG. 2 depicts an example of an augmented reality module, which can correspond with the augmented reality module 104 described above. Specifically, the augmented reality module can have a function of allowing a user to virtually dress themselves, change appearances, such as color, accessories, etc. In this embodiment, the system obtains input image or video from, for example, the EyesMatch computerized method 201, or from any other image/video source, e.g., user smartphone, security camera, Google glass, mobile camera or stationary camera. Additional embodiments can include additional geometric information that will help to calculate proportion like user height, gaze and the like. If the user video or image is coming from the EyesMatch module (calibrated image/video), a more comprehensive model can be created that allows for body measurements, object pose, size, highly accurate orientation detection and the like. The additional information that can be calculated from the calibrated object or video can allow for object fitting, object replacement and insertion of new objects into the frame/video, since any distortion introduced by the location and field of view of the camera has been accounted for and corrected. These corrections enable highly accurate measurements of the user height, waist, etc., and fitting of the user's body to generally classified body types). Regarding claim 14 (Original), Vilcovsky and Zhou teach the machine of claim 8 and further teach wherein the display screen is a see-through display screen (‘6000; ¶ 0042; In the disclosed embodiments the camera can be located anywhere. A best practice is to provide the camera above the screen facing the user. Additional locations can include the bottom of the screen, the sides of the screen or behind the screen if the screen is a bidirectional screen.), and wherein the one or more cameras are positioned behind the display screen to capture the scene image data (‘6000; ¶ 0042; In the disclosed embodiments the camera can be located anywhere. A best practice is to provide the camera above the screen facing the user. Additional locations can include the bottom of the screen, the sides of the screen or behind the screen if the screen is a bidirectional screen.).
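Practitioner note: the PSF limitation recited in claims 1, 8 and 15 is, classically, a deconvolution problem: if the display's point spread function is known (e.g., measured as in '734 FIG. 4), a Wiener filter inverts it in the frequency domain. A minimal single-channel sketch in NumPy, offered as background only; Zhou's learned model goes further by regenerating frequency content that a linear filter cannot recover:

```python
import numpy as np

def wiener_deconvolve(blurred: np.ndarray, psf: np.ndarray,
                      nsr: float = 1e-2) -> np.ndarray:
    """Invert a known display PSF with a regularized (Wiener) inverse filter.

    nsr is an assumed noise-to-signal ratio; larger values suppress noise
    amplification at frequencies the PSF nearly zeroes out.
    """
    H = np.fft.fft2(psf, s=blurred.shape)     # PSF transfer function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```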
Regarding claim 15 (Currently Amended), Vilcovsky teaches a non-transitory machine-readable storage medium (‘6000; ¶ 0014; …a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor…), the machine-readable storage medium including instructions that, when executed by a machine, cause the machine to perform operations (‘6000; ¶ 0014; In some embodiments, a non-transitory computer-readable storage medium for operating a monitor, a camera, and a processor, is provided and configured so as to display a mirror-mimicking image on the monitor, and comprising: on a device having the processor and a memory, storing a program for execution by the processor, the program including instructions for performing operations) comprising: providing an Augmented Reality (AR) user interface (‘6000; ¶ 0224; In step 1704 the user is given user control over the display. In one embodiment, the specific control preferences are saved for each particular user and are activated once the user has been recognized. Otherwise, general user interface is enabled, e.g., a hand gesture activated interface) of an AR system to a user (‘6000; ¶ 0066; FIG. 1 is a system block diagram for an augmented reality platform supporting a real-time or recorded video/image processing. The system can include one or a plurality (1:n) of input devices 101, including a video camera, a still camera, an IR camera, a 2D camera or a 3D camera. The input device 101 can be adapted to send information to one or more machine vision augmented reality modules 102, 103, 104, 105, 107, 108 and 109. The one or more machine vision augmented reality modules 102, 103, 104, 105, 107, 108 and 109 can be adapted to send information to one or a plurality (1:m) of screens 106. The one or more machine vision augmented reality modules 102, 103, 104, 105, 107, 108 and 109 can be adapted to send/receive information to/from an interface or user interface module 110. The interface 110 can be adapted to send/receive information to/from one or more of a cloud, a web/store or a user device, e.g., smart phone or tablet.), the AR user interface displayed on a display screen (‘6000; ¶ 0210, When the user is identified, the user's account can be opened and the last recordings can be displayed, e.g., in one embodiment, a thumbnail configuration 1510 can be displayed, such as that depicted in FIG. 15. Alternatively, any other image control bar can be displayed. If the user is not identified, a user registration process can commence, then, after a few seconds, a new account can be opened, and the mirror can be configured to start recording automatically. For example, if the user is not identified, a code, such as a QR can be displayed on the monitor, such that the user can scan it with a mobile device to download the app. When the app is downloaded and the user completes the registration process, the app on the user's device would include a code that can be presented to the frame in future visits.¶ 0232, In one embodiment, the concept of a pre-defined number of thumbnails, as shown in FIGS. 
15 and 16, can be replaced with a slide menu of thumbnails); capturing using one or more cameras (‘6000; ¶ 0040, Embodiments of the present invention utilize a camera, and a flat panel display to provide the user with the experience of looking at a mirror…), scene image data of a real-world scene (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 14) including the user (‘6000; ¶ 0206; In one embodiment, the presence of a user is done by continuously analyzing the images captured by the camera to detect changes in the image and identify a user; ¶ 0207, the system can be configured so that, when the user steps into the tracking or registration zone in front of the mirror, the video engine of the controller reacts and starts tracking the object. Based on the object location, the video engine can adjust the video transformation to mimic mirror behavior…); generating pre-processed scene image data using the scene image data (‘6000; ¶ 0206; In one embodiment, the presence of a user is done by continuously analyzing the images captured by the camera to detect changes in the image and identify a user; ¶ 0207, the system can be configured so that, when the user steps into the tracking or registration zone in front of the mirror, the video engine of the controller reacts and starts tracking the object. Based on the object location, the video engine can adjust the video transformation to mimic mirror behavior…); generating restored scene image data (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 14) using the pre-processed scene image data (‘6000; ¶ 0206; In one embodiment, the presence of a user is done by continuously analyzing the images captured by the camera to detect changes in the image and identify a user; ¶ 0207, the system can be configured so that, when the user steps into the tracking or registration zone in front of the mirror, the video engine of the controller reacts and starts tracking the object. Based on the object location, the video engine can adjust the video transformation to mimic mirror behavior…); generating mirror image data using the restored scene image data (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 
14); generating AR mirror image data using the mirror image data (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 14); and providing an AR mirror image to the user using the AR mirror image data (‘6000; ¶ 0205; During idle mode the monitor may also display the background image recorded at that particular instance by the camera. In order to properly display the background image, so that it appears like a mirror reflection, the video engine of the controller can take a default setting (e.g., 2 meter from the monitor) and apply the 2 meter transformation on the camera stream to create a mirror effect on the background environment as depicted, for example, in FIG. 14) and the AR user interface (‘6000; ¶ 0210, When the user is identified, the user's account can be opened and the last recordings can be displayed, e.g., in one embodiment, a thumbnail configuration 1510 can be displayed, such as that depicted in FIG. 15. Alternatively, any other image control bar can be displayed. If the user is not identified, a user registration process can commence, then, after a few seconds, a new account can be opened, and the mirror can be configured to start recording automatically. For example, if the user is not identified, a code, such as a QR can be displayed on the monitor, such that the user can scan it with a mobile device to download the app. When the app is downloaded and the user completes the registration process, the app on the user's device would include a code that can be presented to the frame in future visits.¶ 0232, In one embodiment, the concept of a pre-defined number of thumbnails, as shown in FIGS. 15 and 16, can be replaced with a slide menu of thumbnails) and does not teach a model trained to restore the scene image data by correcting for one or more of a blur introduced by a point spread function of the display screen, a wiring effect introduced by display elements of the display screen, or backscatter from displayed content on the display screen. Zhou, working in the same field of endeavor, however, teaches a model trained to restore the scene image data (‘734; ¶ 0005, Another disclosed example provides a computing device comprising a display, a camera positioned behind the display, a logic subsystem, and a storage subsystem storing instructions executable by the logic subsystem to acquire an image through the display via the camera, input the image into a machine learning model trained on degraded and undegraded image pairs, and output, via the machine learning model, a restored image comprising generated information in a frequency region of the image that is degraded due to having acquiring the image through the display; fig. 12; ¶ 0017, FIG. 12 is a block diagram illustrating an example architecture of an image restoration machine learning model in the form of a convolutional neural network (CNN).) by correcting for one or more of a blur introduced by a point spread function of the display screen (‘734; ¶ 0047, FIG.
4 shows example diffraction patterns obtained by illuminating an example tOLED display 402 and an example pOLED display 404 with coherent red light in a wavelength of 633 nanometers (nm), revealing the point spread function (PSF) of each through-display imaging system. The tOLED display 402 comprises a grating-like structure of vertical slits, and the resulting diffraction pattern 406 extends primarily in a horizontal direction. The pOLED display 404 comprises a pentile structure and the diffraction pattern 408 extends in vertical and horizontal directions.), a wiring effect introduced by display elements of the display screen (‘734; ¶ 0045, ….The light transmission rate for the example pOLED display 300, measured with a spectrophotometer and white light source, was ˜2.9%. This relatively low value may be attributed to various factors, such as transparent traces that may scatter and diffract light, external Fresnel reflections, a circular polarizer, and/or a substrate (e.g., various glasses or plastics, which may absorb in some regions of the visible spectrum, for example) of the display panel…), or backscatter from displayed content on the display screen (‘734; ¶ 0045,…The light transmission rate for the example pOLED display 300, measured with a spectrophotometer and white light source, was ˜2.9%. This relatively low value may be attributed to various factors, such as transparent traces that may scatter and diffract light, external Fresnel reflections, a circular polarizer, and/or a substrate (e.g., various glasses or plastics, which may absorb in some regions of the visible spectrum, for example) of the display panel. The low light transmission translates to less light reaching the camera lens, causing a relatively lower signal-to-noise ratio (SNR) compared to the tOLED display 200) for the benefit of providing a machine learning model to generate a restored image comprising generated information in a frequency region of the image that is degraded due to acquiring the image through the display. It would have been obvious to one of ordinary skill in the art prior to the filing of the invention to have combined the techniques for implementing a model trained to restore the scene image data as taught by Zhou with the AR mirror systems and methods taught by Vilcovsky for the benefit of providing a machine learning model to generate a restored image comprising generated information in a frequency region of the image that is degraded due to acquiring the image through the display. Regarding claim 16 (Currently Amended), Vilcovsky and Zhou teach the non-transitory machine-readable storage medium of claim 15 and further teach wherein the operations further comprise: capturing, using one or more scene-reference cameras of the AR system (‘6000; ¶ 0040, Embodiments of the present invention utilize a camera, and a flat panel display to provide the user with the experience of looking at a mirror…), reference scene image data (‘6000; ¶ 0034; FIG. 14 depicts an example of transformation on a camera stream to create a mirror effect on the background environment). Regarding claim 17 (Currently Amended), Vilcovsky and Zhou teach the non-transitory machine-readable storage medium of claim 16 and further teach wherein generating the pre-processed scene image data further uses the reference scene image data (‘6000; ¶ 0034; FIG. 14 depicts an example of transformation on a camera stream to create a mirror effect on the background environment).
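Practitioner note on claims 16-17 (pre-processing that "further uses" reference scene image data): one plausible reading is background differencing, where a reference capture of the empty scene lets the system isolate the user before the mirror transform. A minimal sketch with an assumed threshold (illustrative; neither reference spells out this exact step):

```python
import numpy as np

def preprocess_with_reference(scene: np.ndarray, reference: np.ndarray,
                              threshold: int = 25) -> np.ndarray:
    """Segment the user by differencing against a reference scene capture."""
    diff = np.abs(scene.astype(np.int16) - reference.astype(np.int16))
    mask = diff.sum(axis=-1) > threshold   # True where the scene changed
    return mask
```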
Regarding claim 19 (Currently Amended), Vilcovsky and Zhou teach the non-transitory machine-readable storage medium of claim 15 and further teach wherein the operations further comprise: capturing, using one or more user-reference cameras, user image data (‘6000; fig. 15; ¶ 0223; In step 1702, as a user approaches the mirror and the user's presence is sensed, e.g., by motion sensor or by detecting a change in the image seen by the camera, the system initiates operation in mirror mode. That is, the controller performs a transformation operation on the image of the user, such that the image presented on the monitor mimics the user's reflection in a mirror); determining an eye level of the user using the user image data (‘6000; ¶ 0069; The eyes-match transformation module 103 can be adapted to apply on the image the right mapping to match the camera point of view with theoretical mirror point of view (user eyes reflection) and fill the blind pixels if there are any after the mapping. The eyes-match transformation module 103 can be adapted to send information to the augmented reality module 104 and/or the video/still recording module 105. Also, the eyes-match transformation module 103 can be adapted to send/receive information to/from the control element module 108. Further, the eyes-match transformation module 103 can be adapted to send information to the one or plurality of screens 106, to display an image that mimics a reflection of a mirror); and positioning the one or more cameras at the eye level of the user using the eye level of the user (‘6000; ¶ 0069, as quoted above).

Regarding claim 20 (Currently Amended), Vilcovsky and Zhou teach the non-transitory machine-readable storage medium of claim 15 and further teach wherein generating the AR mirror image data comprises: detecting a user image in the mirror image data (‘6000; ¶ 0206; In one embodiment, the presence of a user is detected by continuously analyzing the images captured by the camera to detect changes in the image and identify a user; ¶ 0207, the system can be configured so that, when the user steps into the tracking or registration zone in front of the mirror, the video engine of the controller reacts and starts tracking the object. Based on the object location, the video engine can adjust the video transformation to mimic mirror behavior…); and overlaying a virtual object on the user image (‘6000; ¶ 0082; FIG. 2 depicts an example of an augmented reality module, which can correspond with the augmented reality module 104 described above. Specifically, the augmented reality module can have a function of allowing a user to virtually dress themselves, change appearances, such as color, accessories, etc. In this embodiment, the system obtains input image or video from, for example, the EyesMatch computerized method 201, or from any other image/video source, e.g., user smartphone, security camera, Google glass, mobile camera or stationary camera. Additional embodiments can include additional geometric information that will help to calculate proportion like user height, gaze and the like. If the user video or image is coming from the EyesMatch module (calibrated image/video), a more comprehensive model can be created that allows for body measurements, object pose, size, highly accurate orientation detection and the like. The additional information that can be calculated from the calibrated object or video can allow for object fitting, object replacement and insertion of new objects into the frame/video, since any distortion introduced by the location and field of view of the camera has been accounted for and corrected. These corrections enable highly accurate measurements of the user height, waist, etc., and fitting of the user's body to generally classified body types).
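[Editor's note] The claim 20 mapping pairs user detection with overlaying a virtual object on the user image. A minimal compositing sketch follows; the simple alpha-blend and all names are illustrative assumptions, and ‘6000's augmented reality module (fitting, pose, measurement) is considerably more involved.

# Hypothetical sketch: alpha-blend an RGBA virtual object (e.g., a garment)
# onto an HxWx3 mirror image; assumes the object fits within the frame.
import numpy as np

def overlay_virtual_object(mirror_frame, obj_rgba, top_left):
    y, x = top_left
    h, w = obj_rgba.shape[:2]
    region = mirror_frame[y:y + h, x:x + w].astype(np.float32)
    alpha = obj_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = alpha * obj_rgba[..., :3].astype(np.float32) + (1.0 - alpha) * region
    mirror_frame[y:y + h, x:x + w] = blended.astype(mirror_frame.dtype)
    return mirror_frame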
Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because they do not apply to the combination of references and citations relied upon in the current rejection, which Applicant's amendment necessitated. The arguments filed 03 March 2026 are directed primarily to the extensively amended features incorporated into independent claims 1, 8 and 15. The Examiner respectfully submits that Applicant was arguing limitations that had not previously been claimed and therefore had been neither examined nor addressed in the previous Office action. Applicant is directed to the rejections above, where the newly added limitations and new claims have now been examined and addressed, and where the newly cited Zhou reference is relied upon for the added features. Independent claims 1, 8 and 15 are rejected as shown in the claim rejection section above and addressed as discussed immediately above. Dependent claims 2-7, 9-14, and 16-20 are rejected as depending from a rejected base claim and for the additional features they add, as shown above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD MARTELLO, whose telephone number is (571) 270-1883. The examiner can normally be reached M-F from 9 AM to 5 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDWARD MARTELLO/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Apr 17, 2024
Application Filed
Nov 30, 2025
Non-Final Rejection — §103
Feb 10, 2026
Examiner Interview Summary
Feb 10, 2026
Applicant Interview (Telephonic)
Mar 03, 2026
Response Filed
Mar 16, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573004
GENERATIVE IMAGE FILLING USING A REFERENCE IMAGE
2y 5m to grant • Granted Mar 10, 2026
Patent 12548257
Systems and Methods for 3D Facial Modeling
2y 5m to grant • Granted Feb 10, 2026
Patent 12530839
RELIGHTABLE NEURAL RADIANCE FIELD MODEL
2y 5m to grant • Granted Jan 20, 2026
Patent 12462480
IMAGE PROCESSING METHOD
2y 5m to grant • Granted Nov 04, 2025
Patent 10140972
TEXT TO SPEECH PROCESSING SYSTEM AND METHOD, AND AN ACOUSTIC MODEL TRAINING SYSTEM AND METHOD
2y 5m to grant • Granted Nov 27, 2018
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
30%
Grant Probability
49%
With Interview (+19.5%)
5y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 138 resolved cases by this examiner. Grant probability derived from career allow rate.
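
[Editor's note] One way to read these headline figures, sketched below under the assumption that the "with interview" number is simply the career allow rate plus the additive interview lift; this formula is an inference from the displayed values, not the tool's documented methodology.

# Hypothetical reconstruction of the projection arithmetic from the
# examiner career totals reported elsewhere on this page.
granted, resolved = 41, 138            # career grants / resolved cases
base = granted / resolved              # ~0.297 -> displayed as 30%
lift = 0.195                           # reported interview lift (+19.5 points)
print(f"base {base:.0%}, with interview {base + lift:.0%}")  # base 30%, with interview 49%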
