Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This Office action is in response to Applicant’s amendments/remarks received on October 2, 2025.
3. Claims 28-47 are pending in this application. Claims 28, 32-34, 38 and 42-44 have been amended.
Response to Arguments
4. Applicant's arguments filed October 2, 2025 have been fully considered but they are not persuasive.
5. Applicant contends that “MIT, considered alone or in combination with Leister, does not teach or suggest the subject matter that claim 28 requires. Claim 28 requires “determining an image wave front at each of the plurality of layers based on a propagation of an image wave front from the first layer through the second layer to the result layer to form a propagated image wave front at the result layer representing a hologram of the 3D scene” (claim 28). And, claim 28 requires “for each of the first layer and the second layer, applying a respective one of the plurality of phase increment distributions associated with a layer to the image wave front at the layer”…”.
Examiner respectfully disagrees. The Leister reference (US 2015/0036199 A1) discloses the propagation of different layers of a 3D scene, with different depths and of different size/magnification due to the limitation by the frustum, to a viewing window and a hologram plane using Fresnel propagation, and, therefore, different depth-dependent quadratic phase factors [See Leister: at least Figs. 5A-5E and par. 13, 50, 116, 119-157, 165-187].
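For context, the depth-dependent quadratic phase factor of Fresnel propagation referenced above takes the standard Fourier-optics form below; this expression is illustrative and is not quoted from Leister, and D_m is used consistently with Leister's notation for the distance of section layer L_m to the reference layer:

```latex
% Fresnel propagation kernel for a section layer at distance D_m
% (standard Fourier-optics form; illustrative, not quoted from the reference)
\exp\!\left(\frac{i\pi}{\lambda D_m}\,\bigl(x^{2} + y^{2}\bigr)\right)
```

Because this factor depends on D_m, each section layer acquires a different quadratic phase, which is the depth dependence relied upon in the mapping above.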
Further on, the MIT or Matusik reference (US 2023/0205133 A1) discloses a method for tensor holography using a neural network. More specifically, the neural network has been additionally trained to cause the holographic representation to be focused on any desired focal plane within the subject three-dimensional scene so as to exhibit a desired depth of field. Further, the neural network has received additional training in two stages to directly optimize the phase-only hologram (with anti-aliasing processing) by incorporating a complex to phase-only conversion into the training, wherein in a first stage the neural network is trained to predict a midpoint hologram propagated to a center of the subject three-dimensional scene and to minimize a difference between a target focal stack and a predicted focal stack, and in a second stage a phase-only target hologram is generated from the predicted midpoint hologram and refined by calculating a dynamic focal stack loss, between a post-encoding focal stack and the target focal stack, and a regularization loss associated therewith. The midpoint hologram is an application of the wavefront recording plane. It propagates the target hologram to the centre of the view frustum to optimally minimize the distance to any scene point, thus reducing the effective W. Further, the double phase method encodes an amplitude-normalized complex hologram Ae^(iφ) ∈ ℂ^(M×N) (0 ≤ A ≤ 1) into a sum of two phase-only holograms at half of the normalized maximum amplitude. [See Matusik: at least Figs. 1-2G, 4, and par. 14-15, 18, 57-63, 65-68, 91-93]. Accordingly, in each iteration, the result layer representing a hologram of the scene is generated by the propagation of two layers based also on phase increment distributions.
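For context, the double phase encoding summarized above is conventionally written as follows (standard formulation, not quoted from Matusik):

```latex
% Double phase decomposition of an amplitude-normalized complex hologram
% (standard formulation; illustrative, not quoted from the reference)
A e^{i\phi} \;=\; \tfrac{1}{2}\Bigl(e^{i(\phi + \arccos A)} \;+\; e^{i(\phi - \arccos A)}\Bigr),
\qquad 0 \le A \le 1,
```

since (1/2)(e^{iθ} + e^{−iθ}) = cos θ and cos(arccos A) = A; each summand is a phase-only hologram at half of the maximum amplitude, as the quoted passage states.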
Accordingly, the cited prior art meets the contended limitations, and the Office respectfully maintains its position.
All remaining arguments that are dependent on the aforementioned arguments are therefore deemed unpersuasive.
Claim Rejections - 35 USC § 103
6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
9. Claims 28-47 are rejected under 35 U.S.C. 103 as being unpatentable over Leister et al. (US 2015/0036199 A1) (hereinafter Leister) in view of Matusik et al. (US 2023/0205133 A1) (hereinafter Matusik).
Regarding claim 28, Leister discloses a method [See Leister: at least Figs. 5A-5E regarding method and device for generating a holographic reconstruction of an object] comprising:
obtaining image data associated with a plurality of layers of a 3D scene [See Leister: at least Figs. 5A-5E and par. 165-169 regarding The computation of video holograms with a hologram processor is based on original object information of a real or virtual three-dimensional scene, including values for spatial distribution of the light amplitudes in an RGB or RGB-compatible format. These values are available in a known file format and can be called up from a data memory by a hologram processor.] including a first layer at a first distance from a result layer and a second layer at a second distance from the result layer, wherein the first distance is greater than the second distance [See Leister: at least Figs. 5A-5E and par. 165-169 regarding FIG. 5A shows a preferred embodiment and illustrates how the scene is divided into a number M of virtual section layers L1 . . . LM for computation by a slicer shown in FIG. 5B. The slicer analyses in a known manner the depth information z of the original object information stored in the data memory MEM, assigns each object point of the scene with a matrix point Pmn, and enters according matrix point values in an object data set OSm corresponding with the section layer Lm. For the indices, 0 ≤ m ≤ M, and 1 ≤ n ≤ N, where N is the number of matrix points P in each layer and the number of matrix point values in a data set. Further definitions are necessary to be able to perform the computations: each section layer Lm is situated at a distance Dm to a reference layer RL which has an observer window OW near which there are the viewer's eye(s) EL/ER…]; and
determining a plurality of phase increment distributions, wherein each of the plurality of phase increment distributions is associated with a respective one of the plurality of layers for modifying, at the respective one of the plurality of layers [See Leister: at least Figs. 5A-5E and par. 168-187 regarding Transformation of the object data sets OS1 . . . OSM of the section layers L1 . . . LM in the reference layer RL so as to determine the wave field which would generate the complex amplitudes A11 . . . AMN of the object points of each section layer Lm as a contribution to the aggregated wave field in the reference layer RL, if the scene was existent there. Addition of the transformed object data sets DS1 . . . DSM with the components n to form a reference data set RS that defines an aggregated wave field which is to appear in the observer window OW when the scene is reconstructed. Back-transformation of the reference data set RS from the reference layer RL to form a hologram data set HS in the hologram layer HL situated at a distance of DH to get matrix point values H1 . . . Hn . . . HN for encoding the video hologram. The N pixel values for the video hologram are derived from the typically complex values of the hologram data set. In the video hologram, these values represent amplitude values and wave phases for modulating the light during scene reconstruction.], an image size associated with the 3D scene [See Leister: at least Figs. 5A-5E and par. 133, 168-187 regarding each object data set of the section layers is based on a virtual area size which depends on its distance to the reference layer…(Thus, for each section layer, a phase increment is determined)].
Leister does not explicitly disclose determining an image wave front at each of the plurality of layers based on a propagation of an image wave front from the first layer through the second layer to the result layer to form a propagated image wave front at the result layer representing a hologram of the 3D scene, wherein the propagation includes, for each of the first layer and the second layer, applying a respective one of the plurality of phase increment distributions associated with a layer to the image wave front at the layer.
However, determining the image wave front for each of the plurality of layers based on the propagation of an image wave front from the first layer through a second layer to form a propagated image wave front at the result layer representing a hologram of the scene was well known in the art before the effective filing date of the claimed invention, as evident from the teaching of Matusik [See Matusik: at least Figs. 1-2G, 4, and par. 14-15, 18, 57-63, 65-68 regarding tensor holography method using a neural network. More specifically, the neural network has been additionally trained to cause the holographic representation to be focused on any desired focal plane within the subject three-dimensional scene so as to exhibit a desired depth of field. Further, the neural network has received additional training in two stages to directly optimize the phase-only hologram (with anti-aliasing processing) by incorporating a complex to phase-only conversion into the training, wherein in a first stage the neural network is trained to predict a midpoint hologram propagated to a center of the subject three-dimensional scene and to minimize a difference between a target focal stack and a predicted focal stack, and in a second stage a phase-only target hologram is generated from the predicted midpoint hologram and refined by calculating a dynamic focal stack loss, between a post-encoding focal stack and the target focal stack, and a regularization loss associated therewith… The midpoint hologram is an application of the wavefront recording plane. It propagates the target hologram to the centre of the view frustum to optimally minimize the distance to any scene point, thus reducing the effective W…(Thus, in each iteration, the result layer representing a hologram of the scene is generated by the propagation of two layers)].
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Leister with Matusik teachings by including “determining an image wave front at each of the plurality of layers based on a propagation of an image wave front from the first layer through the second layer to the result layer to form a propagated image wave front at the result layer representing a hologram of the 3D scene, wherein the propagation includes, for each of the first layer and the second layer, applying a respective one of the plurality of phase increment distributions associated with a layer to the image wave front at the layer” because this combination has the benefit of providing an alternate configuration of the image wave front determining operation to generate a result layer representing the hologram of the scene.
Regarding claim 38, Leister teaches a wireless transmit/receive unit (WTRU) comprising: a memory [See Leister: at least par. 55, 66-70, 136, 155, 165, 368 regarding the data sets transformed in the reference layer are buffered in buffer memory means… The data memory MEM also provides depth information zo of the three-dimensional scene.]; and a processor [See Leister: at least par. 55, 66-70, 136, 155, 165, 368 regarding computing device to distribute data over a network and received at a display device. Further, such hardware includes at least one dedicated graphics processor with known modules for slicing and other video processing steps, such as image rendering, and at least one specific processor module for performing the Fresnel transformations with the help of fast Fourier transformation routines. Such processors in the form of digital signal processors (DSP) with the required FFT routines can be made inexpensively using known methods. Recent advantages in common graphics processors enable operations such as Fourier transforming the data of the section layers into the reference layer using so called shading algorithms.] configured to:
obtain image data associated with a plurality of layers of a 3D scene [See Leister: at least Figs. 5A-5E and par. 165-169 regarding The computation of video holograms with a hologram processor is based on original object information of a real or virtual three-dimensional scene, including values for spatial distribution of the light amplitudes in an RGB or RGB-compatible format. These values are available in a known file format and can be called up from a data memory by a hologram processor.] including a first layer at a first distance from a result layer and a second layer at a second distance from the result layer, wherein the first distance is greater than the second distance [See Leister: at least Figs. 5A-5E and par. 165-169 regarding FIG. 5A shows a preferred embodiment and illustrates how the scene is divided into a number M of virtual section layers L1 . . . LM for computation by a slicer shown in FIG. 5B. The slicer analyses in a known manner the depth information z of the original object information stored in the data memory MEM, assigns each object point of the scene with a matrix point Pmn, and enters according matrix point values in an object data set OSm corresponding with the section layer Lm. For the indices, 0 ≤ m ≤ M, and 1 ≤ n ≤ N, where N is the number of matrix points P in each layer and the number of matrix point values in a data set. Further definitions are necessary to be able to perform the computations: each section layer Lm is situated at a distance Dm to a reference layer RL which has an observer window OW near which there are the viewer's eye(s) EL/ER…]; and
determine a plurality of phase increment distributions, wherein each of the plurality of phase increment distributions is associated with a respective one of the plurality of layers for modifying, at the respective one of the plurality of layers [See Leister: at least Figs. 5A-5E and par. 168-187 regarding Transformation of the object data sets OS1 . . . OSM of the section layers L1 . . . LM in the reference layer RL so as to determine the wave field which would generate the complex amplitudes A11 . . . AMN of the object points of each section layer Lm as a contribution to the aggregated wave field in the reference layer RL, if the scene was existent there. Addition of the transformed object data sets DS1 . . . DSM with the components n to form a reference data set RS that defines an aggregated wave field which is to appear in the observer window OW when the scene is reconstructed. Back-transformation of the reference data set RS from the reference layer RL to form a hologram data set HS in the hologram layer HL situated at a distance of DH to get matrix point values H1 . . . Hn . . . HN for encoding the video hologram. The N pixel values for the video hologram are derived from the typically complex values of the hologram data set. In the video hologram, these values represent amplitude values and wave phases for modulating the light during scene reconstruction.], an image size associated with the 3D scene [See Leister: at least Figs. 5A-5E and par. 133, 168-187 regarding each object data set of the section layers is based on a virtual area size which depends on its distance to the reference layer…(Thus, for each section layer, a phase increment is determined)].
Leister does not explicitly disclose determine an image wave front at each of the plurality of layers based on a propagation of an image wave front from the first layer through the second layer to the result layer to form a propagated image wave front at the result layer representing a hologram of the 3D scene, wherein the propagation includes, for each of the first layer and the second layer, applying a respective one of the plurality of phase increment distributions associated with a layer to the image wave front at the layer.
However, determining the image wave front for each of the plurality of layers based on the propagation of an image wave front from the first layer through a second layer to form a propagated image wave front at the result layer representing a hologram of the scene was well known in the art before the effective filing date of the claimed invention, as evident from the teaching of Matusik [See Matusik: at least Figs. 1-2G, 4, and par. 14-15, 18, 57-63, 65-68 regarding tensor holography method using a neural network. More specifically, the neural network has been additionally trained to cause the holographic representation to be focused on any desired focal plane within the subject three-dimensional scene so as to exhibit a desired depth of field. Further, the neural network has received additional training in two stages to directly optimize the phase-only hologram (with anti-aliasing processing) by incorporating a complex to phase-only conversion into the training, wherein in a first stage the neural network is trained to predict a midpoint hologram propagated to a center of the subject three-dimensional scene and to minimize a difference between a target focal stack and a predicted focal stack, and in a second stage a phase-only target hologram is generated from the predicted midpoint hologram and refined by calculating a dynamic focal stack loss, between a post-encoding focal stack and the target focal stack, and a regularization loss associated therewith… The midpoint hologram is an application of the wavefront recording plane. It propagates the target hologram to the centre of the view frustum to optimally minimize the distance to any scene point, thus reducing the effective W…(Thus, in each iteration, the result layer representing a hologram of the scene is generated by the propagation of two layers)].
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Leister with Matusik teachings by including “determine an image wave front at each of the plurality of layers based on a propagation of an image wave front from the first layer through the second layer to the result layer to form a propagated image wave front at the result layer representing a hologram of the 3D scene, wherein the propagation includes, for each of the first layer and the second layer, applying a respective one of the plurality of phase increment distributions associated with a layer to the image wave front at the layer” because this combination has the benefit of providing an alternate configuration of the image wave front determining operation to generate a result layer representing the hologram of the scene.
Regarding claims 29 and 39, Leister and Matusik teach all of the limitations of claims 28 and 38, and are analyzed as previously discussed with respect to those claims. Further on, Leister and Matusik teach wherein each of the plurality of layers represents a corresponding one of a plurality of perspective view images at different depths in the 3D scene [See Leister: at least Figs. 5A-5E and par. 165-169 regarding FIG. 5A shows a preferred embodiment and illustrates how the scene is divided into a number M of virtual section layers L1 . . . LM for computation by a slicer shown in FIG. 5B. The slicer analyses in a known manner the depth information z of the original object information stored in the data memory MEM, assigns each object point of the scene with a matrix point Pmn, and enters according matrix point values in an object data set OSm corresponding with the section layer Lm…See Matusik: at least Figs. 1-2G, 4, par. 7 regarding the training data are configured to follow a probability density function in such a manner as to have a uniform pixel distribution across a range of depths…].
Regarding claims 30 and 40, Leister and Matusik teach all of the limitations of claims 28 and 38, and are analyzed as previously discussed with respect to those claims. Further on, Leister teaches or suggests wherein each of the plurality of perspective view images has a constant resolution [See Leister: at least Figs. 5A-5E and par. 26, 32, 119 regarding in a specific embodiment, the hologram surface and the image surface are separated by an adjustable distance. The image surface may be a variable depth and/or resolution… The object of this embodiment is to provide a method for speeding up computation of computer-generated video holograms, said video holograms allowing simultaneous reconstruction of a three-dimensional scene while maintaining the spatial resolution and reconstruction quality…(Thus, each of the plurality of view images at each depth is configured to have a constant resolution)].
Regarding claims 31 and 41, Leister and Matusik teach all of the limitations of claims 28 and 38, and are analyzed as previously discussed with respect to those claims. Further on, Leister and Matusik teach wherein the first layer corresponds to a background layer of the 3D scene and the second layer corresponds to an intermediate layer of the 3D scene between the background layer and the result layer [See Leister: at least Figs. 5A-5E and par. 165-169 regarding FIG. 5A shows a preferred embodiment and illustrates how the scene is divided into a number M of virtual section layers L1 . . . LM for computation by a slicer shown in FIG. 5B. The slicer analyses in a known manner the depth information z of the original object information stored in the data memory MEM, assigns each object point of the scene with a matrix point Pmn, and enters according matrix point values in an object data set OSm corresponding with the section layer Lm…(Different layers comprising the background and intermediate layers are shown in Figs. 5A and 5C) See Matusik: at least Figs. 1-2G, 4, and par. 14-15, 18, 57-63, 65-68 regarding the neural network has been additionally trained to cause the holographic representation to be focused on any desired focal plane within the subject three-dimensional scene so as to exhibit a desired depth of field.
Further, the neural network has received additional training in two stages to directly optimize the phase-only hologram (with anti-aliasing processing) by incorporating a complex to phase-only conversion into the training, wherein in a first stage the neural network is trained to predict a midpoint hologram propagated to a center of the subject three-dimensional scene and to minimize a difference between a target focal stack and a predicted focal stack, and in a second stage a phase-only target hologram is generated from the predicted midpoint hologram and refined by calculating a dynamic focal stack loss, between a post-encoding focal stack and the target focal stack, and a regularization loss associated therewith…].
Regarding claims 32 and 42, Leister and Matusik teach all of the limitations of claims 28 and 38, and are analyzed as previously discussed with respect to those claims. Further on, Matusik teaches or suggests further comprising adapting / wherein the processor is further configured to adapt image information corresponding to the propagated image wave front at the result layer to represent a hologram of the 3D scene [See Matusik: at least Figs. 1-2G, 4, and par. 14-15, 18, 57-63, 65-68 regarding tensor holography method using a neural network. More specifically, the neural network has been additionally trained to cause the holographic representation to be focused on any desired focal plane within the subject three-dimensional scene so as to exhibit a desired depth of field. Further, the neural network has received additional training in two stages to directly optimize the phase-only hologram (with anti-aliasing processing) by incorporating a complex to phase-only conversion into the training, wherein in a first stage the neural network is trained to predict a midpoint hologram propagated to a center of the subject three-dimensional scene and to minimize a difference between a target focal stack and a predicted focal stack, and in a second stage a phase-only target hologram is generated from the predicted midpoint hologram and refined by calculating a dynamic focal stack loss, between a post-encoding focal stack and the target focal stack, and a regularization loss associated therewith… The midpoint hologram is an application of the wavefront recording plane. It propagates the target hologram to the centre of the view frustum to optimally minimize the distance to any scene point, thus reducing the effective W…(Thus, in each iteration, the result layer representing a hologram of the scene is generated by the propagation of two layers)].
Regarding claims 33 and 43, Leister and Matusik teach all of the limitations of claims 28 and 38, and are analyzed as previously discussed with respect to those claims. Further on, Matusik teaches or suggests further comprising combining / wherein the processor is further configured to combine the propagation of each of the plurality of image wave fronts associated with respective ones of the plurality of layers to form the propagated image wave front at the result layer representing the hologram of the 3D scene [See Matusik: at least Figs. 1-2G, 4, and par. 14-15, 18, 57-63, 65-68 regarding tensor holography method using a neural network. More specifically, the neural network has been additionally trained to cause the holographic representation to be focused on any desired focal plane within the subject three-dimensional scene so as to exhibit a desired depth of field. Further, the neural network has received additional training in two stages to directly optimize the phase-only hologram (with anti-aliasing processing) by incorporating a complex to phase-only conversion into the training, wherein in a first stage the neural network is trained to predict a midpoint hologram propagated to a center of the subject three-dimensional scene and to minimize a difference between a target focal stack and a predicted focal stack, and in a second stage a phase-only target hologram is generated from the predicted midpoint hologram and refined by calculating a dynamic focal stack loss, between a post-encoding focal stack and the target focal stack, and a regularization loss associated therewith… The midpoint hologram is an application of the wavefront recording plane. It propagates the target hologram to the centre of the view frustum to optimally minimize the distance to any scene point, thus reducing the effective W…(Thus, in each iteration, the result layer representing a hologram of the scene is generated by the propagation of two layers)].
Regarding claims 34 and 44, Leister and Matusik teach all of the limitations of claims 28 and 38, and are analyzed as previously discussed with respect to those claims. Further on, Leister teaches or suggests further comprising restructuring / wherein the processor is further configured to restructure a field of view associated with the 3D scene to modify the image size [See Leister: at least Figs. 5A-5E and par. 133, 168-187 regarding Transformation of the object data sets OS1 . . . OSM of the section layers L1 . . . LM in the reference layer RL so as to determine the wave field which would generate the complex amplitudes A11 . . . AMN of the object points of each section layer Lm as a contribution to the aggregated wave field in the reference layer RL, if the scene was existent there. Addition of the transformed object data sets DS1 . . . DSM with the components n to form a reference data set RS that defines an aggregated wave field which is to appear in the observer window OW when the scene is reconstructed. Back-transformation of the reference data set RS from the reference layer RL to form a hologram data set HS in the hologram layer HL situated at a distance of DH to get matrix point values H1 . . . Hn . . . HN for encoding the video hologram. The N pixel values for the video hologram are derived from the typically complex values of the hologram data set. In the video hologram, these values represent amplitude values and wave phases for modulating the light during scene reconstruction. Further on, each object data set of the section layers is based on a virtual area size which depends on its distance to the reference layer…(Thus, each field of view associated with the scene has an associated image size, therefore, when the field of view is restructured, then the image size is also modified)].
Regarding claims 35 and 45, Leister and Matusik teach all of the limitations of claims 28 and 38, and are analyzed as previously discussed with respect to those claims. Further on, Matusik teaches or suggests further comprising applying / wherein the processor is further configured to apply to the image wave front at each layer non-binary information associated with the image data of a layer to determine the propagation to form the propagated image wave front at the result layer [See Matusik: at least Fig. 7 and par. 109-115 regarding Using LDI with OA-LBM, as discussed above, is simple and straightforward. Any non-zero pixels in an LDI defines a valid point before depth quantization. When the number of depth layers N is determined, each point is projected to its nearest plane and a silhouette is set at the same spatial location. We use the angular spectrum method to propagate the N-th layer to the (N−1)-th layer to obtain C_(N−1). The C_(N−1) is multiplied by the binary silhouette mask at the (N−1)-th layer… By iterating this process until reaching the first layer, the final hologram is obtained by propagating from the updated first layer to the hologram plane…].
Regarding claims 36 and 46, Leister and Matusik teach all of the limitations of claims 28 and 38, and are analyzed as previously discussed with respect to those claims. Further on, Matusik teaches or suggests further comprising applying / wherein the processor is further configured to apply at least one of an angular spectrum model or a Fresnel diffraction to the image information to determine the propagation [See Matusik: at least Fig. 7 and par. 109-115 regarding Using LDI with OA-LBM, as discussed above, is simple and straightforward. Any non-zero pixels in an LDI defines a valid point before depth quantization. When the number of depth layers N is determined, each point is projected to its nearest plane and a silhouette is set at the same spatial location. We use the angular spectrum method to propagate the N-th layer to the (N−1)-th layer to obtain C_(N−1). The C_(N−1) is multiplied by the binary silhouette mask at the (N−1)-th layer… By iterating this process until reaching the first layer, the final hologram is obtained by propagating from the updated first layer to the hologram plane…].
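For purposes of illustration only, the iterative layer-to-layer angular spectrum propagation with silhouette masking described in the cited Matusik passages may be sketched as follows. The function names, parameter choices, and exact masking order below are illustrative assumptions, not quoted from either reference:

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pitch):
    """Propagate a complex field by distance dz using the angular spectrum method.

    field: 2D complex array; pitch: sampling interval in meters.
    Evanescent frequency components are suppressed (set to zero).
    """
    M, N = field.shape
    fx = np.fft.fftfreq(N, d=pitch)           # spatial frequencies along x (1/m)
    fy = np.fft.fftfreq(M, d=pitch)           # spatial frequencies along y (1/m)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Transfer function of free-space propagation; zero out evanescent waves.
    H = np.exp(1j * 2 * np.pi * dz / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

def layered_hologram(layers, masks, layer_spacing, dist_to_hologram,
                     wavelength, pitch):
    """Iterate from the farthest layer toward the hologram plane.

    At each step the accumulated field is propagated one layer forward,
    occluded by that layer's binary silhouette mask, and the nearer
    layer's content is added (masking order is an assumption here).
    """
    field = np.asarray(layers[0], dtype=complex)   # farthest layer first
    for layer, mask in zip(layers[1:], masks[1:]):
        field = angular_spectrum_propagate(field, layer_spacing,
                                           wavelength, pitch)
        field = field * mask + layer               # occlude, then add layer
    # Final propagation from the updated nearest layer to the hologram plane.
    return angular_spectrum_propagate(field, dist_to_hologram,
                                      wavelength, pitch)
```

A uniform (plane-wave) input with fully transparent masks remains uniform in magnitude after propagation, which provides a quick sanity check of the transfer function.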
Regarding claims 37 and 47, Leister and Matusik teach all of the limitations of claims 28 and 38, and are analyzed as previously discussed with respect to those claims. Further on, Matusik teaches or suggests further comprising determining / wherein the processor is further configured to determine a plurality of Fresnel Zone Plates (FZP), each of the plurality of FZPs providing a phase shift corresponding to one of the plurality of phase increment distributions, to determine the plurality of phase increment distributions [See Matusik: at least Figs. 1-2G, 4, and par. 14-15, 18, 50, 57-69, 89 regarding CNN model is a fully convolutional residual network. It receives a four-channel RGB-D image and predicts a colour hologram as a six-channel image (RGB amplitude and RGB phase), which can be used to drive three optically combined SLMs or one SLM in a time-multiplexed manner to achieve full-colour holography. The network has a skip connection that creates a direct feed of the input RGB-D image to the penultimate residual block and has no pooling layer for preserving high-frequency details (see FIG. 1 section c for a scheme of the network architecture). Let W be the width of the maximum subhologram (Fresnel zone plate) produced by the farthest object points to the hologram… FIGS. 4A-4C are a schematic of the midpoint hologram calculation. FIG. 4A shows a holographic display magnified through a diverging point light source. FIG. 4B shows a holographic display unmagnified through the thin-lens formula. In FIG. 4C, the target hologram in this example is propagated to the center of the unmagnified view frustum to produce the midpoint hologram. The width of the maximum subhologram is substantially reduced… The midpoint hologram is an application of the wavefront recording plane. It propagates the target hologram to the centre of the view frustum to optimally minimize the distance to any scene point, thus reducing the effective W].
Conclusion
10. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANA J PICON-FELICIANO whose telephone number is (571)272-5252. The examiner can normally be reached Monday-Friday 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Kelley, can be reached at 571-272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Ana Picon-Feliciano/Examiner, Art Unit 2482
/CHRISTOPHER S KELLEY/Supervisory Patent Examiner, Art Unit 2482