DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments to Claims 5, 7, 8, 17, 18, 20, 23, and 24 in the submission filed 9/29/2023 are acknowledged and accepted.
Cancellation of Claims 15, 16, and 22 is acknowledged and accepted.
Amendments to the Specification are acknowledged and accepted.
Pending claims are 1-14, 17-21, 23, and 24.
Drawings
The drawings with 7 sheets of Figs. 1-7 received on 9/29/2023 are acknowledged and accepted.
Claim Objections
Claims 3, 4, 10-14, and 17-21 are objected to because of the following informalities:
Claim 3 recites “representing spatial coordinates of a point along a line at an intersection of the line and” on line 3. A comma appears to be missing. It is recommended that this be replaced with --representing spatial coordinates of a point along a line, at an intersection of the line and--.
Claim 4 recites “at a point along the line at an intersection of the line and” on line 4. A comma appears to be missing. It is recommended that this be replaced with --at a point along the line, at an intersection of the line and--.
Claim 10 recites “processing the layered representation of a three-dimension image” in lines 1-2. There is sufficient antecedent basis for this limitation. It is suggested that this be replaced with --processing the layered representation of the three-dimensional image--.
Claims 11-13 are dependent on claim 10 and hence inherit its deficiencies.
Claim 14 recites “parameters the one or more neural network”. There appears to be a typographical error. It is suggested that this be replaced with --parameters of the one or more neural network--.
Claims 17-21 are dependent on claim 14 and hence inherit its deficiencies.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-14, 17-21, 23, and 24, as best understood, are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1, 23, and 24 recite “wherein each of the image layers comprises varying depth data across the image”. It is not clear whether the image refers to the three-dimensional image or the image layers. From the current specification (page 2), it appears that the image layers represent varying depth data across the three-dimensional image. For the purpose of examination, “the image” is interpreted to be the three-dimensional image.
Claims 2-13 are dependent on claim 1 and hence inherit its deficiencies.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3, 5-7, 23, and 24, as best understood, are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hao et al (Computer-generated hologram with occlusion effect using layer-based processing, Applied Optics, Vol. 56, No. 13, pages F138-F143, March 23, 2017, of record).
Regarding Claim 1, Hao teaches (fig 1, 4a, b) a method for generating a digital hologram (layer-based algorithm with single viewpoint rendering geometry is proposed to calculate a full parallax CGH with occlusion effect, L col, last para, page F139) comprising:
accepting a layered representation (slicing geometry of the 3D scene, number of layers in the figure is set to N, which locate at the depth range of the 3D scene, R col, 1st para, page F140) of a three-dimensional image (3D scene, fig 1), wherein the layered representation of the three-dimensional image comprises a plurality of image layers (N layers, fig 4), and wherein each of the image layers comprises varying depth data across the image (number of layers in the figure is set to N, which locate at the depth range of the 3D scene, R col, 1st para, page F140); and
forming the digital hologram (CGH1, page F141, R col, 3rd para) from the layered representation (layer-based algorithm, fig 5).
Regarding Claim 3, Hao teaches the method of claim 1,
wherein the layered representation (slicing geometry of the 3D scene, number of layers in the figure is set to N, which locate at the depth range of the 3D scene, R col, 1st para, page F140) of the three-dimensional image (3D scene, fig 1) comprises, for each location of the digital hologram (CGH1, page F141, R col, 3rd para), a plurality of depth values each representing spatial coordinates of a point along a line at an intersection of the line and a surface of an object (3D object, fig 1) in the three-dimensional image (3D scene, fig 1) (depth values of layer L1 to Ln are along a line perpendicular to the hologram plane as in fig 4 and each depth value is the intersection of this line with a surface of the 3D object in the 3D scene, fig 4, see eq 1, page F140).
Regarding Claim 5, Hao teaches the method of claim 1,
further comprising determining (“During the rendering procedure, shading and depth images can be fetched from the corresponding viewpoint”, “According to the depth image, the 3D scene can be sliced into multiple parallel layers”, page F139, R col, Sec 2) the layered representation (slicing geometry of the 3D scene, number of layers in the figure is set to N, which locate at the depth range of the 3D scene, R col, 1st para, page F140) of the three-dimensional image (3D scene, fig 1).
Regarding Claim 6, Hao teaches the method of claim 5,
further comprising determining a direction of view (“During the rendering procedure, shading and depth images can be fetched from the corresponding viewpoint”, page F139, R col, Sec 2, viewpoint indicates a direction of view), and wherein determining (“During the rendering procedure, shading and depth images can be fetched from the corresponding viewpoint”, “According to the depth image, the 3D scene can be sliced into multiple parallel layers”, page F139, R col, Sec 2) the layered representation (slicing geometry of the 3D scene, number of layers in the figure is set to N, which locate at the depth range of the 3D scene, R col, 1st para, page F140) depends on said direction of view (“layer-based algorithm with single-viewpoint rendering geometry”, page F139, L col, last para, this indicates the dependence on the viewpoint).
Regarding Claim 7, Hao teaches the method of claim 1,
wherein forming the digital hologram (layer-based algorithm with single viewpoint rendering geometry is proposed to calculate a full parallax CGH with occlusion effect, L col, last para, page F139) comprises iterating through a sequence of present planes with successive depths (“After propagation calculations from Nth layer to the first layer of the 3D scene”, Eq. (1)-Eq. (5), pages F140-F141), each iteration including one or more operations of:
(i) for points in the layered representation for which a depth of the point maps to a depth of the present plane, generating a contribution to complex amplitude distribution based on an intensity of the point in the layered representation (The complex amplitude distribution of the Nth layer is Eq (1), “An(x,y) is the amplitude distribution of the layer”, page F140, R col);
(ii) propagating a previously generated complex amplitude distribution to the present plane (“After propagation from the Nth layer to the N-1th layer, the complex amplitude distribution on the N-1th layer”, pages F140-F141);
(iii) masking a contribution from a prior plane according to a mask based on points that map to the present plane (“The complex amplitude distribution on the N-1th plane is then multiplied by the silhouette mask Mn-1”, page F141, L col), and
(iv) combining contributions of points whose depths map to the depth of the present plane and masked propagation of contributions from prior layers (“By mask filtering of all the sliced layers, the contribution areas of all the hidden primitives on the hologram plane are calculated”, page F141, R col).
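For context, the iterative masked-propagation procedure recited in steps (i)-(iv) can be sketched as follows. This is only an illustrative sketch of the general layer-based technique described in Hao, not code from any reference of record; the function names, the angular-spectrum propagator, and all parameter choices are assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pitch):
    # Propagate a complex field by distance dz using the angular
    # spectrum method; evanescent components are suppressed.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = np.sqrt(np.maximum(arg, 0.0))
    transfer = np.where(arg > 0, np.exp(2j * np.pi * dz * kz), 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def layered_cgh(layer_amplitudes, silhouette_masks, layer_spacing,
                wavelength, pitch):
    # Iterate from the farthest layer toward the hologram plane:
    # propagate the accumulated field to the present layer (step ii),
    # zero out occluded regions with that layer's silhouette mask
    # (step iii), then add the present layer's own contribution
    # (steps i and iv). Layers are ordered farthest-first.
    field = np.zeros_like(layer_amplitudes[0], dtype=complex)
    for amplitude, mask in zip(layer_amplitudes, silhouette_masks):
        field = angular_spectrum_propagate(field, layer_spacing,
                                           wavelength, pitch)
        field = field * mask + amplitude
    return field
```

The silhouette mask is zero where the present layer's content occludes light from farther layers, so multiplying before adding the layer's contribution implements the occlusion effect.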
Regarding Claim 23, Hao teaches (fig 1, 4a, b) a digital processor (algorithm, page F139, this indicates a digital processor) configured to generate a digital hologram (layer-based algorithm with single viewpoint rendering geometry is proposed to calculate a full parallax CGH with occlusion effect, L col, last para, page F139) by:
accepting a layered representation (slicing geometry of the 3D scene, number of layers in the figure is set to N, which locate at the depth range of the 3D scene, R col, 1st para, page F140) of a three-dimensional image (3D scene, fig 1), wherein the layered representation of the three-dimensional image comprises a plurality of image layers (N layers, fig 4), and wherein each of the image layers comprises varying depth data across the image (number of layers in the figure is set to N, which locate at the depth range of the 3D scene, R col, 1st para, page F140); and
forming the digital hologram (CGH1, page F141, R col, 3rd para) from the layered representation (layer-based algorithm, fig 5).
Regarding Claim 24, Hao teaches (fig 1, 4a, b) a non-transitory machine-readable medium comprising instructions stored thereon, execution of said instructions by a digital processor (algorithm, page F139, this indicates a computer with a digital processor) causing said processor to generate a digital hologram (layer-based algorithm with single viewpoint rendering geometry is proposed to calculate a full parallax CGH with occlusion effect, L col, last para, page F139) by:
accepting a layered representation (slicing geometry of the 3D scene, number of layers in the figure is set to N, which locate at the depth range of the 3D scene, R col, 1st para, page F140) of a three-dimensional image (3D scene, fig 1), wherein the layered representation of the three-dimensional image comprises a plurality of image layers (N layers, fig 4), and wherein each of the image layers comprises varying depth data across the image (number of layers in the figure is set to N, which locate at the depth range of the 3D scene, R col, 1st para, page F140); and
forming the digital hologram (CGH1, page F141, R col, 3rd para) from the layered representation (layer-based algorithm, fig 5).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 4, as best understood, are rejected under 35 U.S.C. 103 as being unpatentable over Hao et al (Computer-generated hologram with occlusion effect using layer-based processing, Applied Optics, Vol. 56, No. 13, pages F138-F143, March 23, 2017, of record) in view of Yoon et al (A Framework for Multi-view Video Coding Using Layered Depth Images, PCM 2005, Part I, Springer, LNCS 3767, pp. 431-442, November 13, 2005, of record).
Regarding Claim 2, Hao teaches the method of claim 1.
However, Hao does not teach
wherein each image layer comprises an image comprising at least one color channel and a depth channel.
Hao and Yoon are related as depth layers.
Yoon teaches (fig 1)
wherein each image layer (layers of layered depth image LDI, page 433, sec 3) comprises an image comprising at least one color channel and a depth channel (“each layered depth pixel has different number of depth pixels, which contain color and depth information”, page 433, sec 3, last para).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the image layer of Hao to include the color and depth image of Yoon for the purpose of utilizing techniques in multi-view video coding for a variety of applications such as 3D TV and home entertainment (page 431, sec 1, 1st para).
Regarding Claim 4, Hao teaches the method of claim 1.
However, Hao does not teach
wherein the layered representation of the three-dimensional image further comprises for each location of the digital hologram a plurality of image values each representing intensity of one or more color channels at a point along the line at an intersection of the line and the surface of the object in the three-dimensional image.
Hao and Yoon are related as depth layers.
Yoon teaches (fig 1)
wherein the layered representation (layered depth image LDI, page 433) of the three-dimensional image (3-D scene or object, fig 1, page 433) further comprises a plurality of image values each representing intensity of one or more color channels at a point (intersecting points, page 433, 2nd para) along the line at an intersection of the line and the surface of the object in the three-dimensional image (3-D scene or object, fig 1, page 433).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the layered representation of Hao to include the plurality of image values of Yoon for the purpose of utilizing techniques in multi-view video coding for a variety of applications such as 3D TV and home entertainment (page 431, sec 1, 1st para).
Claims 8, 10, and 11, as best understood, are rejected under 35 U.S.C. 103 as being unpatentable over Hao et al (Computer-generated hologram with occlusion effect using layer-based processing, Applied Optics, Vol. 56, No. 13, pages F138-F143, March 23, 2017, of record) in view of Chakravarthula et al (US 2020/0192287 A1).
Regarding Claim 8, Hao teaches the method of claim 1.
However, Hao does not teach
wherein forming the digital hologram from the layered representation comprises processing said layered representation using at least one neural network to generate the digital hologram.
Hao and Chakravarthula are related as digital hologram and layered representations.
Chakravarthula teaches (fig 4)
wherein forming the digital hologram from the layered representation (“our optimization framework can be extended to 3D volumetric scenes. One can slice the 3D scene into multiple depth planes and superpose all complex holograms corresponding to each depth plane, thereby forming a true 3D hologram”, “One option is to generate 2D holograms approximating spatially variant focus by fast depth switching”, para 132, “The proposed Wirtinger holography framework can be extended to computing 3D holograms in a multi-layered or multi-focal approach. Given a 3D model or the scene data, one can voxelize and remap it into multiple depth planes. The hologram corresponding to each depth plane can be computed separately and all holograms can be later superposed to generate a complex hologram of the 3D scene”, para 160) comprises processing said layered representation using at least one neural network to generate the digital hologram (“We obtain the gradient for the loss function component (i.e., Part I of Equation 9) from backpropagation in TensorFlow, especially for losses parameterized by convolutional neural networks”, para 118).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the processing of layered representation of Hao to include neural network of Chakravarthula for the purpose of utilizing common optimization techniques in generating holograms and for parametrizing learned losses (para 79).
Regarding Claim 10, Hao-Chakravarthula teaches the method of claim 8.
However, Hao does not teach
wherein processing the layered representation of a three-dimension image using the at least one neural network comprises applying a first convolutional neural network (CNN) to an input based on the layered representation.
Hao and Chakravarthula are related as digital hologram and layered representations.
Chakravarthula teaches (fig 4)
wherein processing the layered representation of a three-dimension image (“our optimization framework can be extended to 3D volumetric scenes. One can slice the 3D scene into multiple depth planes and superpose all complex holograms corresponding to each depth plane, thereby forming a true 3D hologram”, “One option is to generate 2D holograms approximating spatially variant focus by fast depth switching”, para 132, “The proposed Wirtinger holography framework can be extended to computing 3D holograms in a multi-layered or multi-focal approach. Given a 3D model or the scene data, one can voxelize and remap it into multiple depth planes. The hologram corresponding to each depth plane can be computed separately and all holograms can be later superposed to generate a complex hologram of the 3D scene”, para 160) using the at least one neural network comprises applying a first convolutional neural network (CNN) (“We obtain the gradient for the loss function component (i.e., Part I of Equation 9) from backpropagation in TensorFlow, especially for losses parameterized by convolutional neural networks”, para 118) to an input based on the layered representation.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the processing of layered representation of Hao to include CNN of Chakravarthula for the purpose of utilizing common optimization techniques in generating holograms and for parametrizing learned losses (para 79).
Regarding Claim 11, Hao-Chakravarthula teaches the method of claim 10.
However, Hao does not teach
wherein applying said first CNN comprises applying said first CNN to an input comprising at least one of the layered representation and a function of said layered representation, and producing an output comprising at least one of said digital hologram and data from which said digital hologram is computed.
Hao and Chakravarthula are related as digital hologram and layered representations.
Chakravarthula teaches (fig 4)
wherein applying said first CNN (“We obtain the gradient for the loss function component (i.e., Part I of Equation 9) from backpropagation in TensorFlow, especially for losses parameterized by convolutional neural networks”, para 118) comprises
applying said first CNN to an input comprising at least one of the layered representation and a function of said layered representation (“our optimization framework can be extended to 3D volumetric scenes. One can slice the 3D scene into multiple depth planes and superpose all complex holograms corresponding to each depth plane, thereby forming a true 3D hologram”, “One option is to generate 2D holograms approximating spatially variant focus by fast depth switching”, para 132, “The proposed Wirtinger holography framework can be extended to computing 3D holograms in a multi-layered or multi-focal approach. Given a 3D model or the scene data, one can voxelize and remap it into multiple depth planes. The hologram corresponding to each depth plane can be computed separately and all holograms can be later superposed to generate a complex hologram of the 3D scene”, para 160), and
producing an output comprising at least one of said digital hologram and data from which said digital hologram is computed (“superpose all complex holograms corresponding to each depth plane, thereby forming a true 3D hologram”, para 160).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the processing of layered representation of Hao to include steps of applying said first CNN to an input and producing an output of a hologram of Chakravarthula for the purpose of utilizing common optimization techniques in generating holograms and for parametrizing learned losses (para 79).
Claim 9, as best understood, is rejected under 35 U.S.C. 103 as being unpatentable over Hao et al (Computer-generated hologram with occlusion effect using layer-based processing, Applied Optics, Vol. 56, No. 13, pages F138-F143, March 23, 2017, of record) in view of Chakravarthula et al (US 2020/0192287 A1) and further in view of Maimone et al (Holographic Near-eye Displays for Virtual and Augmented Reality, ACM Transactions on Graphics, Vol. 36, No. 4, Article 85, pages 1-16, July 2017).
Regarding Claim 9, Hao-Chakravarthula teaches the method of claim 8.
However, Hao-Chakravarthula does not teach
wherein the digital hologram comprises a double-phase representation of said hologram.
Hao-Chakravarthula and Maimone are related as digital hologram.
Maimone teaches
wherein the digital hologram (phase only hologram, page 5, R col, 2nd para) comprises a double-phase representation of said hologram (“double phase”, page 5, R col, 2nd para).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the digital hologram of Hao-Chakravarthula to include a double phase hologram of Maimone for the purpose of utilizing common non-iterative direct encoding methods of computing holograms for improving contrast (page 6, L col, 3rd para).
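For context, the double-phase representation referenced from Maimone expresses each complex hologram pixel A·e^{iφ} (with A normalized to [0, 1]) as a sum of two phase-only components, since 0.5·e^{i(φ+arccos A)} + 0.5·e^{i(φ−arccos A)} = A·e^{iφ}. A minimal illustrative sketch of this general encoding (the helper names are hypothetical and not taken from any reference of record):

```python
import numpy as np

def double_phase_encode(complex_field):
    # Double-phase decomposition: a complex pixel A*exp(i*phi), with A
    # normalized to [0, 1], equals the sum of two phase-only terms,
    # 0.5*exp(i*(phi + arccos A)) + 0.5*exp(i*(phi - arccos A)).
    amplitude = np.abs(complex_field)
    amplitude = amplitude / amplitude.max()  # normalize so arccos is defined
    phase = np.angle(complex_field)
    offset = np.arccos(np.clip(amplitude, 0.0, 1.0))
    return phase + offset, phase - offset

def double_phase_decode(theta1, theta2):
    # Recombining the two phase maps recovers the normalized field.
    return 0.5 * (np.exp(1j * theta1) + np.exp(1j * theta2))
```

The two phase maps can then be driven on a phase-only spatial light modulator, which is the motivation for this encoding in near-eye holographic displays.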
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Hao et al (Computer-generated hologram with occlusion effect using layer-based processing, Applied Optics, Vol. 56, No. 13, pages F138-F143, March 23, 2017, of record) in view of Chakravarthula et al (US 2020/0192287 A1) and further in view of Peng et al (Neural holography with camera-in-the-loop training, ACM Trans. Graph., Vol. 39, No. 6, Article 185, pages 1-14, of record).
Regarding Claim 12, Hao-Chakravarthula teaches the method of claim 11.
However, Hao-Chakravarthula does not teach
further comprising producing a first complex hologram with the first CNN, and applying a second CNN to an input comprising at least one of the first complex hologram and a function of said first complex hologram, and producing an output of said second CNN comprising at least one of said digital hologram and data from which said digital hologram is computed.
Hao-Chakravarthula and Peng are related as generating holograms.
Peng teaches (fig 7)
further comprising producing a first complex hologram (complex-valued wave-field, page 9, sec 6) with the first CNN (target phase generator subnetwork, page 9, L col, sec 6), and applying a second CNN (phase encoder subnetwork, page 9, L col, sec 6) to an input comprising at least one of the first complex hologram (complex-valued wave-field, page 9, sec 6) and a function of said first complex hologram (complex-valued wave-field, page 9, sec 6), and producing an output of said second CNN (phase encoder subnetwork, page 9, L col, sec 6, un-diffracted light subnetwork, page 9, sec 6, R col) comprising at least one of said digital hologram and data from which said digital hologram is computed (phase only representation, sec 6).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generation of digital hologram of Hao-Chakravarthula to include a first and second CNNs of Peng for the purpose of high image quality (page 10, L col, 2nd para).
Claims 14 and 17-19, as best understood, are rejected under 35 U.S.C. 103 as being unpatentable over Chakravarthula et al (US 2020/0192287 A1) in view of Maimone et al (Holographic Near-eye Displays for Virtual and Augmented Reality, ACM Transactions on Graphics, Vol. 36, No. 4, Article 85, pages 1-16, July 2017).
Regarding Claim 14, Chakravarthula teaches (fig 4) a method for determining values of configurable parameters of the one or more neural network (“We obtain the gradient for the loss function component (i.e., Part I of Equation 9) from backpropagation in TensorFlow, especially for losses parameterized by convolutional neural networks”, para 118) for generating a digital hologram (complex hologram of the scene, para 160) comprising
using training data comprising a plurality of training items (“We validate the flexibility of the proposed phase retrieval method by modifying the objective with a learned perceptual loss”, para 83, learned indicates a training data set),
each training item comprising at least one of a layered representation (“our optimization framework can be extended to 3D volumetric scenes. One can slice the 3D scene into multiple depth planes and superpose all complex holograms corresponding to each depth plane, thereby forming a true 3D hologram”, “One option is to generate 2D holograms approximating spatially variant focus by fast depth switching”, para 132, “The proposed Wirtinger holography framework can be extended to computing 3D holograms in a multi-layered or multi-focal approach. Given a 3D model or the scene data, one can voxelize and remap it into multiple depth planes. The hologram corresponding to each depth plane can be computed separately and all holograms can be later superposed to generate a complex hologram of the 3D scene”, para 160) of a three-dimensional image (3D scene) and a function of said layered representation and
a corresponding function of encoding of a target hologram (complex hologram of the 3D scene) determined based on said layered representation, and determining said values of the configurable parameters to match predictions of said functions of the encodings (phase encoding of hologram) determined using said one or more neural networks (“The proposed Wirtinger Holography is flexible and facilitates the use of different loss functions, including learned perceptual losses parametrized by deep neural networks”, para 80, 114, loss function, para 11, loss functions act as predictions of functions).
However, Chakravarthula does not teach
determining a corresponding function of a double-phase encoding of a target hologram
Chakravarthula and Maimone are related as digital hologram.
Maimone teaches
determining a corresponding function of a double-phase encoding (“double phase”, page 5, R col, 2nd para) of a target hologram (phase only hologram, page 5, R col, 2nd para).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the digital hologram of Chakravarthula to include a double phase hologram of Maimone for the purpose of utilizing common non-iterative direct encoding methods of computing holograms for improving contrast (page 6, L col, 3rd para).
Regarding Claim 17, Chakravarthula-Maimone teach the method of claim 14.
However, Chakravarthula does not teach
wherein determining the target hologram comprises determining said hologram to incorporate correction of a vision or lens characteristic.
Chakravarthula and Maimone are related as digital hologram.
Maimone teaches
wherein determining the target hologram (phase only hologram, page 5, R col, 2nd para) comprises determining said hologram to incorporate correction of a vision or lens characteristic (aberration correction, page 10, R col, 3rd para).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify determination of the digital hologram of Chakravarthula to include a vision or lens characteristic correction of Maimone for the purpose of utilizing common non-iterative direct encoding methods of computing holograms (para 28) for reducing aberrations (page 10, R col, 3rd para).
Regarding Claim 18, Chakravarthula-Maimone teach the method of claim 14.
However, Chakravarthula does not teach
wherein the function of a double-phase encoding comprises a focal stack of images.
Chakravarthula and Maimone are related as digital hologram.
Maimone teaches
wherein the function of a double-phase encoding (“double phase”, page 5, R col, 2nd para) comprises a focal stack of images (“A high speed option is to approximate spatially variant focus and aberration control by providing the correct lens function where the user is looking rather than computing or approximating the full spatially variant solution”, page 8, sec 3.1.7, 4th para).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify determination of the digital hologram of Chakravarthula to include a focal stack of images of Maimone for the purpose of utilizing common non-iterative direct encoding methods of computing holograms (para 28) for reducing aberrations (page 10, R col, 3rd para).
Regarding Claim 19, Chakravarthula-Maimone teach the method of claim 18.
However, Chakravarthula does not teach
comprising computing the focal stack to include a set of images determined at different focal lengths derived from the double-phase encoding.
Chakravarthula and Maimone are related as digital hologram.
Maimone teaches
comprising computing the focal stack (“A high speed option is to approximate spatially variant focus and aberration control by providing the correct lens function where the user is looking rather than computing or approximating the full spatially variant solution”, page 8, sec 3.1.7, 4th para) to include a set of images determined at different focal lengths derived from the double-phase encoding (“double phase”, page 5, R col, 2nd para).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify determination of the digital hologram of Chakravarthula to computing a focal stack of images of Maimone for the purpose of utilizing common non-iterative direct encoding methods of computing holograms (para 28) for reducing aberrations (page 10, R col, 3rd para).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Chakravarthula et al (US 2020/0192287 A1) in view of Maimone et al (Holographic Near-eye Displays for Virtual and Augmented Reality, ACM Transactions on Graphics, Vol. 36, No. 4, Article 85, pages 1-16, July 2017) and further in view of Peng et al (Neural holography with camera-in-the-loop training, ACM Trans. Graph., Vol. 39, No. 6, Article 185, pages 1-14, of record).
Regarding Claim 20, Chakravarthula-Maimone teach the method of claim 14.
However, Chakravarthula-Maimone does not teach
wherein the one or more neural networks include a first CNN and a second CNN, and wherein the method comprises determining values of configurable parameters of the first CNN based on a matching of target holograms with outputs of said first CNN.
Chakravarthula-Maimone and Peng are related as generating holograms.
Peng teaches (fig 7)
wherein the one or more neural networks include a first CNN (target phase generator subnetwork, page 9, L col, sec 6) and a second CNN (phase encoder subnetwork, page 9, L col, sec 6), and wherein the method comprises determining values of configurable parameters of the first CNN (target phase generator subnetwork, page 9, L col, sec 6) based on a matching of target holograms (complex-valued wave-field, page 9, sec 6) with outputs of said first CNN (target phase generator subnetwork, page 9, L col, sec 6).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generation of digital hologram of Chakravarthula-Maimone to include a first and second CNNs of Peng for the purpose of high image quality (page 10, L col, 2nd para).
Allowable Subject Matter
Claims 13 and 21 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Claim 13 is allowable for at least the reason:
“wherein producing an output for the second CNN comprises producing a second complex hologram, and wherein processing the layered representation further comprises encoding the second complex hologram as a double-phase representation of the digital hologram.”
Claim 21 is allowable for at least the reason:
“further comprising determining values of configurable parameters of the second CNN using inputs produced by the first CNN and based on matching a function of a double-phase encoding determined from an output for the second CNN with a corresponding function of the target hologram.”
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JYOTSNA V DABBI whose telephone number is (571)270-3270. The examiner can normally be reached Mon-Fri, 9:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, STEPHONE ALLEN can be reached at 571-272-2434. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JYOTSNA V DABBI/Examiner, Art Unit 2872 11/28/2025