DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 12 February 2026 have been fully considered but they are not fully persuasive. An applicant interview is recommended in this case.
Claims 1-8, 10-21 and 23 are pending in this application and have been considered below. Claims 9 and 22 have been canceled by the applicant.
Argument:
The applicant argues that the Elmalem et al. reference teaches joint end-to-end training of a phase mask and back-end machine learning (e.g., a CNN) to restore an in-focus image. Applicant argues that the Elmalem et al. method discloses ([0134]) that both the optics (i.e., the phase mask) and the back-end machine learning (i.e., the computational layers in the CNN) are trained together for a holistic design. Applicant then states that the claimed optical neural networks and methods do not require any back-end CNN because the plurality of optically transmissive or reflective layers generate an output optical signal that is substantially invariant to object or signal transformations, including one or more of lateral translation, rotation, or scaling. Applicant has also amended the claims to further clarify that the plurality of optically transmissive or reflective layers are arranged in an optical path and are separated from one another.
Response:
Examiner agrees with applicant that Elmalem et al. should not be relied upon to teach the amended language “substrate layers arranged in an optical path and separated from one another.” However, Ozcan et al. teaches "in FIG. 29, two different classifiers were optimized to recognize (1) hand-written digits, 0 through 9, using the MNIST (Mixed National Institute of Standards and Technology) image dataset, and (2) various fashion products, including t-shirts, trousers, pullovers, dresses, coats, sandals, shirts, sneakers, bags, and ankle boots (using the Fashion MNIST image dataset)," paragraph [0178] where each substrate layer is a classifier, labeled 10, 16 or 22 in the Figure.
Argument:
The applicant argues that Ozcan et al. does not disclose training the plurality of optically transmissive or reflective features to have physical features therein such that the input object image or signal is invariant or partially invariant to object or signal transformations such as lateral translation, rotation, or scaling.
Response:
Concerning the applicant’s arguments: the claimed limitation “that the one or more output optical signal(s) are substantially invariant to object or signal transformations comprising one or more of lateral translation, rotation, or scaling” is argued to have unique advantages not realized or suggested in Elmalem et al. or Ozcan et al. In response to applicant's arguments that the references fail to show certain features of applicant’s invention, it is noted that the features upon which the applicant relies (i.e., invariance to object or signal transformations) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Specifically, applicant is attempting to broaden the scope of the claims through including “substantially” and “one or more of.” The issue is that this would allow the broadest reasonable interpretation of the claims to cover a system that permits minor lateral translation, which the systems of Elmalem et al. and Ozcan et al. do. Specifically, during patent prosecution, the pending claims must be “given their broadest reasonable interpretation consistent with the specification.” The Examiner has interpreted the claim language in reference to the specification. Because applicant has the opportunity to amend the claims during prosecution, giving a claim its broadest reasonable interpretation will reduce the possibility that the claim, once issued, will be interpreted more broadly than is justified.
In this case, Examiner is not arguing that Applicant has not invented an improvement over their previously allowed patents and patent applications. The requirement is that applicant needs to include in the claims those features that differentiate this application from the prior art and from applicant's similar applications.
Priority
Receipt is acknowledged that this application is a National Stage application of PCT/US2021/056161. Priority to US Provisional 63/105,138, with a priority date of 23 October 2020, is acknowledged under 35 USC 119(e) and 37 CFR 1.78.
Information Disclosure Statement
The IDSs dated 19 April 2023, 12 May 2023 and 3 April 2025, which have been previously considered, remain placed in the application file.
Claim Rejections - 35 USC § 112
Claims 1-8, 10-21 and 23 have been amended. The rejection under 35 USC 112 is withdrawn.
Claim Interpretation
Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and the ordinary meaning of the terms used in a claim, as understood by one having ordinary skill in the art, will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int’l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).
Claims 1, 10, 11 and 23 recite “one or more” and “at least one.” Since “one or more” and “at least one” are disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation appears to enjoy the most specification support, and for that reason the disjunctive interpretation (one of A, B, or C) is adopted for the purposes of this Office Action. Applicant’s comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-8, 10-21 and 23 (all pending claims) are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2021/0073959 A1 (Elmalem et al.) in view of International Patent Publication WO 2019/200289 A1 (Ozcan et al.). The references are listed in a PTO-892 from the Office Action in which they are first used.
Claim 1
[AltContent: textbox (Elmalem et al. Fig. 3, showing an optical device inline with an optical system.)]
Regarding Claim 1, Elmalem et al. teach an optical neural network for processing an input object image or signal ("Some embodiments of the present invention relate to a technique for co-designing of a hardware element for manipulating a wave and an image processing technique," paragraph [0003]) that is invariant or partially invariant to object or signal transformations comprising:
each of the plurality of optically transmissive or reflective substrate layers comprising a plurality of physical features formed on or within the plurality of optically transmissive or reflective substrate layers ("In some embodiments of the present invention, each of the channels is characterized by a different depth dependence of a spatial frequency response of the imaging device used for captured the image," paragraph [0079] where a physical feature is the depth dependence of a frequency response) and having different transmission or reflection coefficients as a function of the lateral coordinates across each substrate layer ("its design imposes two limitations: (i) its production requires custom and non-standard optical design; and (ii) by enhancing axial chromatic aberrations, lateral chromatic aberrations are usually also enhanced," paragraph [0130]), wherein the plurality of optically transmissive or reflective substrate layers and the plurality of physical features thereon collectively define a trained mapping function ("The mask is composed of a ring/s pattern, whereby each ring introduces a different phase-shift to the wavefront emerging from the scene; the resultant image is aperture coded," paragraph [0133] where the function is aperture coding) between the input object image or signal to the plurality of optically transmissive or reflective substrate layers and one or more output optical signal(s) created by optical diffraction through or optical reflection from the plurality of optically transmissive or reflective substrate layers ("As used herein "manipulation" refers to one or more of: refraction, diffraction, reflection, redirection, focusing, absorption and transmission," paragraph [0074]);
a plurality of optical sensors configured to capture the one or more output optical signal(s) resulting from the plurality of optically transmissive or reflective substrate layers, with each optical sensor of the plurality associated with a particular object or signal class that is inferred or decided by the optical neural network and the output inference or decision is made based on a maximum signal among the plurality of optical sensors, which corresponds to a particular object class or signal class ("the imaging device includes an array of image sensors. In these embodiments, one or more of the image sensors can include or be operatively associated with the optical element to be designed, and the method can optionally and preferably be executed for designing each of the optical elements of the imaging device," paragraph [0078]);
wherein the plurality of optically transmissive or reflective substrate layers are designed during a training phase to define the plurality of physical features formed on or within the plurality of optically transmissive or reflective substrate layers ("the machine learning procedure is trained on the training imaging data. Preferably, but not necessarily, the machine learning procedure is trained using backpropagation, so as to obtain, at 14, values for the weight parameters that describe the hardware ( e.g., optical) element," paragraph [0095]) such that the one or more output optical signal(s) are substantially invariant to object or signal transformations comprising one or more of lateral translation, rotation, or scaling ("data augmentation by rotations of 90°, 180° and 270° was used, to achieve rotation-invariance in the CNN operation," paragraph [0149]).
[AltContent: textbox (Elmalem et al. Fig. 29, showing substrate layers arranged in an optical path.)]
Elmalem et al. is not relied upon to explicitly teach substrate layers arranged in an optical path and separated from one another.
However, Ozcan et al. teach a plurality of optically transmissive or reflective substrate layers arranged in an optical path and separated from one another ("in FIG. 29, two different classifiers were optimized to recognize (1) hand-written digits, 0 through 9, using the MNIST (Mixed National Institute of Standards and Technology) image dataset, and (2) various fashion products, including t-shirts, trousers, pullovers, dresses, coats, sandals, shirts, sneakers, bags, and ankle boots (using the Fashion MNIST image dataset)," paragraph [0178] where each substrate layer is a classifier, labeled 10, 16 or 22 in the figure).
Therefore, taking the teachings of Elmalem et al. and Ozcan et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify “Method and System for Imaging and Image Processing” as taught by Elmalem et al. to use “Devices and Methods Employing Optical Based Machine Learning Using Diffractive Deep Neural Networks” as taught by Ozcan et al. The suggestion/motivation for doing so would have been that “Optics in machine learning has been widely explored due to its unique advantages, encompassing power efficiency, speed and scalability,” as noted by the Ozcan et al. disclosure in paragraph [0004]. The combination is further motivated because it would predictably be more productive, as there is a reasonable expectation that different optical materials would be useful in different situations, and/or because it merely combines prior art elements according to known methods to yield predictable results.
The rejection of system claim 1 above applies mutatis mutandis to the corresponding limitations of apparatus claim 10, method claim 11 and method claim 23 while noting that the rejection above cites to both device and method disclosures. Claims 10, 11 and 23 are mapped below for clarity of the record and to specify any new limitations not included in claim 1.
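For clarity of the record, the inference rule recited in claim 1 — an output decision taken as the maximum signal among the plurality of optical sensors, each sensor associated with an object class — amounts to an argmax over detector readings. The following sketch is illustrative only; the detector values and class labels are hypothetical and are not drawn from Elmalem et al. or Ozcan et al.

```python
# Illustrative only: class inference as the maximum signal among detectors.
# Detector readings and class labels below are hypothetical values.

def infer_class(sensor_signals, class_labels):
    """Return the class whose optical sensor captured the strongest signal."""
    best_index = max(range(len(sensor_signals)), key=lambda i: sensor_signals[i])
    return class_labels[best_index]

# Ten detectors, one per handwritten-digit class (hypothetical readings).
signals = [0.02, 0.01, 0.71, 0.05, 0.03, 0.04, 0.02, 0.06, 0.03, 0.03]
labels = list(range(10))
print(infer_class(signals, labels))  # prints 2: detector 2 is brightest
```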
Claim 2
Regarding claim 2, Elmalem et al. teach the optical neural network of claim 1, wherein the plurality of physical features of the plurality of optically transmissive or reflective substrate layers comprise regions of varied thicknesses ("The radius parameters can include the inner and outer radii of the ring pattern, and the phase-related parameter can include the phase acquired by the light passing through the mask, or a depth of the groove or the relief," paragraph [0082] where a depth of a groove teaches varied thickness).
Claim 3
Regarding claim 3, Elmalem et al. teach the optical neural network of claim 1, wherein the plurality of physical features of the plurality of optically transmissive or reflective substrate layers comprise regions having different optical properties ("The radius parameters can include the inner and outer radii of the ring pattern, and the phase-related parameter can include the phase acquired by the light passing through the mask, or a depth of the groove or the relief," paragraph [0082] where a depth of a groove teaches varied physical features having different optical properties).
Claim 4
Regarding claim 4, Elmalem et al. teach the optical neural network of claim 1, wherein the plurality of physical features of the plurality of optically transmissive or reflective substrate layers comprise regions having different refractive index or absorption or spectral features ("A is the illumination wavelength, n is the refractive index, and h is the ring height. Notice that the performance of such a mask is sensitive to the illumination wavelength. Taking advantage of the nature of the diffractive optical element structure, such a mask can be designed for a significantly different response for each band in the illumination spectrum," paragraph [0138]).
Claim 5
Regarding claim 5, Elmalem et al. teach the optical neural network of claim 1, as noted above.
Elmalem et al. is not relied upon to explicitly teach metamaterials.
However, Ozcan et al. teach wherein the plurality of physical features of the plurality of optically transmissive or reflective substrate layers comprise metamaterials or metasurfaces ("Metamaterials or plasmonic structures may also be incorporated into the substrate layer," paragraph [0020]).
Elmalem et al. and Ozcan et al. are combined as per claim 1.
Claim 6
Regarding claim 6, Elmalem et al. teach the optical neural network of claim 1, as noted above.
Elmalem et al. is not relied upon to explicitly teach that the substrate layers are surrounded as claimed.
However, Ozcan et al. teach wherein the plurality of optically transmissive or reflective substrate layers are positioned within or surrounded by vacuum, air, a gas, a liquid or a solid material ("The results also revealed that the printed D2NN 10 can resolve a line-width of 1.8 mm at 0.4 THz (corresponding to a wavelength of 0.75 mm in air)," paragraph [0130]).
Elmalem et al. and Ozcan et al. are combined as per claim 1.
Claim 7
Regarding claim 7, Elmalem et al. teach the optical neural network of claim 1, as noted above.
Elmalem et al. is not relied upon to explicitly teach a nonlinear optical material.
However, Ozcan et al. teach wherein the plurality of optically transmissive or reflective substrate layers comprise at least one nonlinear optical material ("employ a physical gain (e.g., through optical or electrical pumping, or nonlinear optical phenomena, including but not limited to plasmonics and metamaterials)," paragraph [0130]).
Elmalem et al. and Ozcan et al. are combined as per claim 1.
Claim 8
Regarding claim 8, Elmalem et al. teach the optical neural network of claim 1, as noted above.
Elmalem et al. is not relied upon to explicitly teach reconfigurable physical features.
However, Ozcan et al. teach wherein the plurality of optically transmissive or reflective substrate layers comprises one or more physical substrate layers that comprise reconfigurable physical features that can change as a function of time ("Using the same D2NN 10 design, this time with both the phase and the amplitude of each neuron's transmission as learnable parameters in a complex-valued D2NN 10 design, the inference performance was increased," paragraph [0178]).
Elmalem et al. and Ozcan et al. are combined as per claim 1.
Claim 10
Regarding claim 10, Elmalem et al. teach an optical neural network for processing an input object image or signal ("Some embodiments of the present invention relate to a technique for co-designing of a hardware element for manipulating a wave and an image processing technique," paragraph [0003]) that is invariant or partially invariant to object or signal transformations comprising:
each of the plurality of optically transmissive or reflective substrate layers comprising a plurality of physical features formed on or within the plurality of optically transmissive or reflective substrate layers ("In some embodiments of the present invention, each of the channels is characterized by a different depth dependence of a spatial frequency response of the imaging device used for captured the image," paragraph [0079] where a physical feature is the depth dependence of a frequency response) and having different transmission or reflection coefficients as a function of the lateral coordinates across each substrate layer ("its design imposes two limitations: (i) its production requires custom and non-standard optical design; and (ii) by enhancing axial chromatic aberrations, lateral chromatic aberrations are usually also enhanced," paragraph [0130]), wherein the plurality of optically transmissive or reflective substrate layers and the plurality of physical features thereon collectively define a trained mapping function ("The mask is composed of a ring/s pattern, whereby each ring introduces a different phase-shift to the wavefront emerging from the scene; the resultant image is aperture coded," paragraph [0133] where the function is aperture coding) between the input object image or signal to the plurality of optically transmissive or reflective substrate layers and one or more output optical signal(s) created by optical diffraction through or optical reflection from the plurality of optically transmissive or reflective substrate layers ("As used herein "manipulation" refers to one or more of: refraction, diffraction, reflection, redirection, focusing, absorption and transmission," paragraph [0074]);
a plurality of optical sensors configured to capture the one or more output optical signal(s) resulting from the plurality of optically transmissive or reflective substrate layers wherein pairs of optical sensors of the plurality are associated with a particular object class or signal class that is inferred or decided by the optical neural network and the output inference or decision is made based on a maximum signal calculated using the optical sensor pairs, which corresponds to a particular object class or signal class ("the imaging device includes an array of image sensors. In these embodiments, one or more of the image sensors can include or be operatively associated with the optical element to be designed, and the method can optionally and preferably be executed for designing each of the optical elements of the imaging device," paragraph [0078]); and
wherein the plurality of optically transmissive or reflective substrate layers are designed during a training phase to define the plurality of physical features formed on or within the plurality of optically transmissive or reflective substrate layers ("the machine learning procedure is trained on the training imaging data. Preferably, but not necessarily, the machine learning procedure is trained using backpropagation, so as to obtain, at 14, values for the weight parameters that describe the hardware ( e.g., optical) element," paragraph [0095]) such that the one or more output optical signal(s) are substantially invariant to object or signal transformations comprising one or more of lateral translation, rotation, or scaling ("data augmentation by rotations of 90°, 180° and 270° was used, to achieve rotation-invariance in the CNN operation," paragraph [0149]).
Elmalem et al. is not relied upon to explicitly teach layers arranged in an optical path and separated from one another.
However, Ozcan et al. teach a plurality of optically transmissive or reflective substrate layers arranged in an optical path and separated from one another ("in FIG. 29, two different classifiers were optimized to recognize (1) hand-written digits, 0 through 9, using the MNIST (Mixed National Institute of Standards and Technology) image dataset, and (2) various fashion products, including t-shirts, trousers, pullovers, dresses, coats, sandals, shirts, sneakers, bags, and ankle boots (using the Fashion MNIST image dataset)," paragraph [0178] where each substrate layer is a classifier, labeled 10, 16 or 22 in the figure).
Elmalem et al. and Ozcan et al. are combined as per claim 1.
Claim 11
Regarding claim 11, Elmalem et al. teach a method of forming a multi-layer optical neural network for processing an input object image or input optical signal ("Some embodiments of the present invention relate to a technique for co-designing of a hardware element for manipulating a wave and an image processing technique," paragraph [0003]) that is invariant or partially invariant to object transformations comprising:
training a software-based neural network model to perform one or more specific optical functions for a multi-layer transmissive or reflective network having a plurality of optically diffractive physical features located in different locations in each of the layers of the transmissive or reflective network ("Therefore, in the FCN training stage, the optical imaging simulation is done as a pre-processing step with the best phase mask achieved in the inner net training stage," paragraph [0195]), wherein the training comprises feeding a plurality of different input object images or input optical signals that have random transformations or shifts to the software-based neural network model and computing at least one optical output of optical transmission or reflection through the multi-layer transmissive or reflective network using an optical wave propagation model and iteratively adjusting transmission/reflection coefficients for each layer of the multi-layer transmissive or reflective network until optimized transmission/reflection coefficients are obtained or a certain time or epochs have elapsed ("Data augmentation of four rotations is used to increase the dataset size and achieve rotation invariance," paragraph [0191]); and
having physical features that match the optimized transmission/reflection coefficients obtained by the trained neural network model ("To test the depth estimation method of the present embodiments, several experiments were carried. The experimental setup included an f=16 mm F/7 lens (LM16JCM-V by Kawa) with the phase coded aperture incorporated in the aperture stop plane (see FIG. 21A). The lens was mounted on a UI3590LE camera made by IDS Imaging," paragraph [0205] where experiments show manufacture of at least one physical embodiment); and
providing a plurality of optical sensors with each optical sensor of the plurality associated with a particular object class or signal class that is inferred or decided by the physical embodiment of the multi-layer transmissive or reflective network ("the imaging device includes an array of image sensors. In these embodiments, one or more of the image sensors can include or be operatively associated with the optical element to be designed, and the method can optionally and preferably be executed for designing each of the optical elements of the imaging device," paragraph [0078]) and the output inference or decision is made based on a maximum signal among the plurality of optical sensors, which corresponds to a particular object class or signal class ("Many types of activation functions that are known in the art, can be used in the artificial neural network of the present embodiments, including, without limitation, Binary step, Soft step, TanH, Arc Tan, Softsign, Inverse square root unit (ISRU), Rectified linear unit (ReLU), Leaky rectified linear unit, Parameteric rectified linear unit (PReLU), Randomized leaky rectified linear unit (RReLU), Exponential linear unit (ELU), Scaled exponential linear unit (SELU), S-shaped rectified linear activation unit (SReLU), Inverse square root linear unit (ISRLU), Adaptive piecewise linear (APL), SoftPlus, Bent identity, SoftExponential, Sinusoid, Sine, Gaussian, Softmax and Maxout," paragraph [0088]).
Elmalem et al. is not relied upon to explicitly teach substrate layers arranged along an optical path and separated from one another.
However, Ozcan et al. teach manufacturing or having manufactured a physical embodiment of the multi-layer transmissive or reflective network comprising a plurality of substrate layers arranged along an optical path and separated from one another ("in FIG. 29, two different classifiers were optimized to recognize (1) hand-written digits, 0 through 9, using the MNIST (Mixed National Institute of Standards and Technology) image dataset, and (2) various fashion products, including t-shirts, trousers, pullovers, dresses, coats, sandals, shirts, sneakers, bags, and ankle boots (using the Fashion MNIST image dataset)," paragraph [0178] where each substrate layer is a classifier, labeled 10, 16 or 22 in the figure).
Elmalem et al. and Ozcan et al. are combined as per claim 1.
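For clarity of the record, the training recited in claim 11 — feeding randomly transformed inputs through a multi-layer wave-propagation model and iteratively adjusting per-layer transmission coefficients until optimized — can be sketched numerically. This is a minimal illustration under stated assumptions: random unitary matrices stand in for diffraction kernels, numerical finite differences stand in for backpropagation, and the two-class shifted-impulse data are hypothetical; none of these choices is drawn from Elmalem et al. or Ozcan et al.

```python
import numpy as np

# Minimal sketch: train per-layer phase coefficients so that translated
# (shifted) inputs still concentrate light on their class detector.
rng = np.random.default_rng(0)
N, LAYERS, STEPS, LR, EPS = 16, 2, 80, 0.1, 1e-4

def random_unitary(n):
    # Stand-in for an inter-layer free-space propagation kernel.
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

props = [random_unitary(N) for _ in range(LAYERS + 1)]            # fixed optics
phases = np.stack([rng.uniform(0, 2 * np.pi, N) for _ in range(LAYERS)])

def forward(field, ph):
    field = props[0] @ field
    for k in range(LAYERS):
        field = props[k + 1] @ (np.exp(1j * ph[k]) * field)       # layer k mask
    return np.abs(field) ** 2                                      # intensities

# Fixed augmented batch: class-0 impulses shifted within the left half,
# class-1 within the right half (a stand-in for translation augmentation).
batch = []
for cls in (0, 1):
    for shift in range(N // 4):
        x = np.zeros(N)
        x[cls * (N // 2) + shift] = 1.0
        batch.append((x, 0 if cls == 0 else N - 1))  # target detector pixel

def loss(ph):
    total = 0.0
    for x, target in batch:
        intens = forward(x, ph)
        total -= intens[target] / intens.sum()       # maximize target fraction
    return total

start = loss(phases)
for _ in range(STEPS):                               # plain gradient descent
    grad = np.zeros_like(phases)
    for i in range(LAYERS):
        for j in range(N):
            phases[i, j] += EPS
            up = loss(phases)
            phases[i, j] -= 2 * EPS
            down = loss(phases)
            phases[i, j] += EPS
            grad[i, j] = (up - down) / (2 * EPS)
    phases -= LR * grad
final = loss(phases)
print(round(start, 3), round(final, 3))              # loss should decrease
```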
Claim 12
Regarding claim 12, Elmalem et al. teach the method of claim 11, wherein the optimized transmission/reflective coefficients are obtained by error back-propagation ("the machine learning procedure is trained on the training imaging data. Preferably, but not necessarily, the machine learning procedure is trained using backpropagation, so as to obtain, at 14, values for the weight parameters that describe the hardware ( e.g., optical) element," paragraph [0095]).
Claim 13
Regarding claim 13, Elmalem et al. teach the method of claim 11, wherein the plurality of physical features of the plurality of optically transmissive or reflective substrate layers comprise regions having different optical properties ("The radius parameters can include the inner and outer radii of the ring pattern, and the phase-related parameter can include the phase acquired by the light passing through the mask, or a depth of the groove or the relief," paragraph [0082] where a depth of a groove teaches varied physical features having different optical properties).
Claim 14
Regarding claim 14, Elmalem et al. teach the method of claim 11, wherein the plurality of physical features of the plurality of optically transmissive or reflective substrate layers comprise regions having different refractive index or absorption or spectral features ("A is the illumination wavelength, n is the refractive index, and h is the ring height. Notice that the performance of such a mask is sensitive to the illumination wavelength. Taking advantage of the nature of the diffractive optical element structure, such a mask can be designed for a significantly different response for each band in the illumination spectrum," paragraph [0138]).
Claim 15
Regarding claim 15, Elmalem et al. teach the method of claim 11, as noted above.
Elmalem et al. is not relied upon to explicitly teach additive manufacturing.
However, Ozcan et al. teach wherein the physical embodiment of the multi-layer transmissive or reflective network is manufactured by additive manufacturing ("In one particular embodiment, the physical features are created by additive manufacturing techniques such as 3D printing but it should be appreciated that other techniques such as lithography or the like may be used to generate the "neurons" in the different layers," paragraph [0006]).
Elmalem et al. and Ozcan et al. are combined as per claim 1.
Claim 16
Regarding claim 16, Elmalem et al. teach the method of claim 11, as noted above.
Elmalem et al. is not relied upon to explicitly teach lithography.
However, Ozcan et al. teach wherein the physical embodiment of the multi-layer transmissive or reflective network is manufactured by lithography ("In one particular embodiment, the physical features are created by additive manufacturing techniques such as 3D printing but it should be appreciated that other techniques such as lithography or the like may be used to generate the "neurons" in the different layers," paragraph [0006]).
Elmalem et al. and Ozcan et al. are combined as per claim 1.
Claim 17
Regarding claim 17, Elmalem et al. teach the method of claim 11, as noted above.
Elmalem et al. are not relied upon to explicitly teach substrate layers surrounded by air.
However, Ozcan et al. teach wherein the plurality of optically transmissive or reflective substrate layers are positioned within or surrounded by vacuum, air, a gas, a liquid or a solid material ("The results also revealed that the printed D2NN 10 can resolve a line-width of 1.8 mm at 0.4 THz (corresponding to a wavelength of 0.75 mm in air)," paragraph [0130]).
Elmalem et al. and Ozcan et al. are combined as per claim 1.
Claim 18
Regarding claim 18, Elmalem et al. teach the method of claim 11, as noted above.
Elmalem et al. are not relied upon to explicitly teach a nonlinear optical material.
However, Ozcan et al. teach wherein the physical embodiment of the multi-layer transmissive or reflective network comprises one or more physical substrate layers that comprise a nonlinear optical material ("employ a physical gain (e.g., through optical or electrical pumping, or nonlinear optical phenomena, including but not limited to plasmonics and metamaterials)," paragraph [0130]).
Elmalem et al. and Ozcan et al. are combined as per claim 1.
Claim 19
Regarding claim 19, Elmalem et al. teach the method of claim 11, as noted above.
Elmalem et al. are not relied upon to explicitly teach reconfigurable physical features.
However, Ozcan et al. teach wherein the physical embodiment of the multi-layer transmissive or reflective network comprises one or more physical substrate layers that comprise reconfigurable physical features that can change as a function of time ("Using the same D2NN 10 design, this time with both the phase and the amplitude of each neuron's transmission as learnable parameters in a complex-valued D2NN 10 design, the inference performance was increased," paragraph [0178]).
Elmalem et al. and Ozcan et al. are combined as per claim 1.
Claim 20
Regarding claim 20, Elmalem et al. teach the method of claim 11, wherein the random transformations or shifts comprise one or more of lateral translation, rotation, or scaling ("data augmentation by rotations of 90°, 180° and 270° was used, to achieve rotation-invariance in the CNN operation," paragraph [0149]).
Claim 21
Regarding claim 21, Elmalem et al. teach the method of claim 11, wherein the training comprises feeding a plurality of different input object images or input optical signals that have random affine transformations or warping or aberrations to the software-based neural network ("Data augmentation of four rotations is used to increase the dataset size and achieve rotation invariance," paragraph [0191]).
Claim 23
Regarding claim 23, Elmalem et al. teach a method of forming a multi-layer optical neural network for processing an input object image or input optical signal ("Some embodiments of the present invention relate to a technique for co-designing of a hardware element for manipulating a wave and an image processing technique," paragraph [0003]) that is invariant or partially invariant to object transformations comprising:
training a software-based neural network model to perform one or more specific optical functions for a multi-layer transmissive or reflective network having a plurality of optically diffractive physical features located in different locations in each of the layers of the transmissive or reflective network ("Therefore, in the FCN training stage, the optical imaging simulation is done as a pre-processing step with the best phase mask achieved in the inner net training stage," paragraph [0195]), wherein the training comprises feeding a plurality of different input object images or input optical signals that have random transformations or shifts to the software-based neural network model and computing at least one optical output of optical transmission or reflection through the multi-layer transmissive or reflective network using an optical wave propagation model and iteratively adjusting transmission/reflection coefficients for each layer of the multi-layer transmissive or reflective network until optimized transmission/reflection coefficients are obtained or a certain time or epochs have elapsed ("Data augmentation of four rotations is used to increase the dataset size and achieve rotation invariance," paragraph [0191]); and
having physical features that match the optimized transmission/reflection coefficients obtained by the trained neural network model ("To test the depth estimation method of the present embodiments, several experiments were carried. The experimental setup included an f=16 mm F/7 lens (LM16JCM-V by Kawa) with the phase coded aperture incorporated in the aperture stop plane (see FIG. 21A). The lens was mounted on a UI3590LE camera made by IDS Imaging," paragraph [0205] where experiments show manufacture of at least one physical embodiment); and
providing a plurality of optical sensors wherein pairs of optical sensors of the plurality are associated with a particular object class or signal class that is inferred or decided by the physical embodiment of the multi-layer transmissive or reflective network ("the imaging device includes an array of image sensors. In these embodiments, one or more of the image sensors can include or be operatively associated with the optical element to be designed, and the method can optionally and preferably be executed for designing each of the optical elements of the imaging device," paragraph [0078]) and the output inference or decision is made based on a maximum signal calculated using the optical sensor pairs, which corresponds to a particular object class or signal class ("Many types of activation functions that are known in the art, can be used in the artificial neural network of the present embodiments, including, without limitation, Binary step, Soft step, TanH, Arc Tan, Softsign, Inverse square root unit (ISRU), Rectified linear unit (ReLU), Leaky rectified linear unit, Parameteric rectified linear unit (PReLU), Randomized leaky rectified linear unit (RReLU), Exponential linear unit (ELU), Scaled exponential linear unit (SELU), S-shaped rectified linear activation unit (SReLU), Inverse square root linear unit (ISRLU), Adaptive piecewise linear (APL), SoftPlus, Bent identity, SoftExponential, Sinusoid, Sine, Gaussian, Softmax and Maxout," paragraph [0088]).
Elmalem et al. are not relied upon to explicitly teach substrate layers arranged along an optical path and separated from one another.
However, Ozcan et al. teach manufacturing or having manufactured a physical embodiment of the multi-layer transmissive or reflective network comprising a plurality of substrate layers arranged along an optical path and separated from one another ("in FIG. 29, two different classifiers were optimized to recognize (1) hand-written digits, 0 through 9, using the MNIST (Mixed National Institute of Standards and Technology) image dataset, and (2) various fashion products, including t-shirts, trousers, pullovers, dresses, coats, sandals, shirts, sneakers, bags, and ankle boots (using the Fashion MNIST image dataset)," paragraph [0178] where each substrate layer is a classifier, labeled 10, 16 or 22 in the figure).
Elmalem et al. and Ozcan et al. are combined as per claim 1.
References Cited
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
US Patent Publication 2021/0287078 A1 to Liu et al. discloses an Optical Hardware Accelerator (OHA) for an Artificial Neural Network (ANN) that includes a communication bus interface, a memory, a controller, and an optical computing engine (OCE). The OCE is configured to execute an ANN model with ANN weights. Each ANN weight includes a quantized phase shift value θ and a phase shift value φ. The OCE includes a digital-to-optical (D/O) converter configured to generate input optical signals based on the input data, an optical neural network (ONN) configured to generate output optical signals based on the input optical signals, and an optical-to-digital (O/D) converter configured to generate the output data based on the output optical signals.
US Patent Publication 2021/0142170 A1 to Ozcan et al. discloses an all-optical Diffractive Deep Neural Network (D2NN) architecture that learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten digit classification and a lens function at the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS whose telephone number is (703) 756-4696. The examiner can normally be reached Monday-Friday, 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.E.W/Examiner, Art Unit 2664
Date: 6 April 2026
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664