DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after allowance or after an Office action under Ex Parte Quayle, 25 USPQ 74, 453 O.G. 213 (Comm'r Pat. 1935). Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, prosecution in this application has been reopened pursuant to 37 CFR 1.114. Applicant's submission filed on 11/10/2025 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 11 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Moll et al. (US Pub. 2025/0218143 A1) in view of Ozcan et al. (US Pub. 2021/0142170 A1).
Regarding claim 1; Moll teaches an apparatus (an AR system 100, Fig.1), comprising:
a pair of glasses (a pair of glasses 100, Fig.1); and
a computing unit (a computer 120, Fig.1), having:
a neural network configured to process image lights representative of a scene in a view of the pair of glasses to generate a light pattern (Figs. 4 and 5, para. [0052-0057 and 0071], the AR system comprises an object detection system (426/510) including one or more artificial neural networks to analyze camera data to identify objects within a real-world scene. It is noted that the camera would capture image lights reflected from the real-world scene and generate camera data corresponding to the real-world scene. The object detection system 426 may analyze the camera data to determine at least one of a number of edges, contours, or shapes that may individually or in combination correspond to an object).
Moll does not teach a passive neural network configured to process image lights; and an array of light sensing pixels configured to convert the light pattern into data representative of outputs of a first set of artificial neurons.
Ozcan teaches a passive neural network (a passive diffractive optical neural network D2NN 10, Fig.1. Para. [0133, 0135, 0138, 0171, and 0203], the neural network 10 comprises passive optical components) configured to process image lights (Fig.1, para. [0007-0010 and 0013], an output optical signal 22 is created by optical diffraction through a plurality of optically transmissive/reflective substrate layers); and an array of light sensing pixels (an optical sensor 26, Fig.1) configured to convert the light pattern into data representative of outputs of a first set of artificial neurons (Fig.1, para. [0100, 0115, and 0179], the optical sensor 26 comprising a plurality of optical detectors is configured to capture the output optical signal 22 and to digitize the output optical signal. A computing device 27 is coupled to the optical sensor 26 for acquiring, storing, processing, and manipulating the optical signal 22. In other words, the optical sensor 26 is configured to convert the light intensity obtained from the diffractive neural network 10 into data representative of outputs of a first set of artificial neurons of the diffractive neural network).
(Fig.1 of Ozcan reproduced)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the neural network in the AR system of Moll to include the teaching of Ozcan of providing a (passive) diffractive neural network to analyze the input light, and providing optical sensors for sensing the optical signal and outputting data resulting from the plurality of optically transmissive and/or reflective substrate layers for each of the plurality of diffractive optical neural network devices. The motivation would have been to increase computation speed and to reduce power consumption (Ozcan, para. [0133 and 0135]).
Regarding claim 11; Moll in view of Ozcan teaches a method, comprising: implementing, via a passive neural network in a device, a first set of artificial neurons of an artificial neural network; and generating a light pattern via the passive neural network processing image lights representative of a scene (similar to the analysis of claim 1).
Regarding claim 17; Moll in view of Ozcan teaches a computing device, comprising: a passive neural network configured according to a first set of artificial neurons of an artificial neural network to generate a light pattern from image lights; and an image sensor configured to convert the light pattern into data representative (similar to the analysis of claim 1).
Claims 2-4 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Moll et al. (US Pub. 2025/0218143 A1) in view of Ozcan et al. (US Pub. 2021/0142170 A1) as applied to claims 1, 11, and 17 above, and further in view of Kim et al. (US Pub. 2025/0036915 A1).
Regarding claim 2; Moll in view of Ozcan teaches the apparatus of claim 1 as discussed above. Moll further teaches a processor (processors 302, Fig.3) configured via instructions (para. [0036-0038]) to perform computations (para. [0055], the object detection system is configured to analyze the camera data to identify one or more objects included in the camera data); and a memory (a storage 318, Fig.3) configured to store the instructions (para. [0038]).
Moll does not explicitly teach to perform computations on a second set of artificial neurons responsive to the outputs of the first set of artificial neurons; and a memory configured to store weight matrices of the second set of artificial neurons.
Ozcan teaches a processor (a processor 102, Fig.7) configured via instructions (software 114, Fig.9) to perform computations on a second set of artificial neurons (Figs. 9 and 33A, a digital neural network 44) responsive to the outputs of the first set of artificial neurons (Figs. 9 and 33A, para. [0109], the input of the digital neural network 44 is the output optical image 22 which is an output of the diffractive neural network 10).
(Fig.33A of Ozcan reproduced)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the AR system of Moll to include the digital neural network of Ozcan. The motivation would have been to generate a final output (e.g., identify a face in the image; Ozcan, para. [0106]).
Moll in view of Ozcan does not explicitly teach a memory configured to store the instructions and weight matrices of the second set of artificial neurons.
Kim teaches a memory (a memory 200, Fig.8) configured to store the instructions and weight matrices of the second set of artificial neurons (Fig.1, para. [0011-0015, 0020, and 0075], Kim discloses an artificial neural network 110a for detecting an object in an image. Fig.8, para. [0221], information about the artificial neural network model stored in the memory 200 may include a weight matrix used for each channel in each layer).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the digital neural network in the AR system of Moll in view of Ozcan to include the teaching of Kim of providing a memory for storing a weight matrix. The motivation would have been to store values acting as learnable parameters that determine the influence of inputs on outputs, thereby allowing the neural network to recognize patterns.
Regarding claim 3; Moll in view of Ozcan and Kim teaches the apparatus of claim 2 as discussed above. Moll does not teach an artificial neural network containing the first set of artificial neurons and the second set of artificial neurons is configured to recognize an object in the scene; and the apparatus is configured to present information about the object in response to recognition of the object.
Ozcan teaches an artificial neural network containing the first set of artificial neurons (the diffractive optical neural network 10/42; Figs. 1, 9, and 33A) and the second set of artificial neurons (the digital neural network 44; Figs. 9 and 33A) is configured to recognize an object in the scene (Fig.10, para. [0087, 0109], the final output 46 includes a recognition of an object (e.g., a tree) in the scene); and the apparatus is configured to present information about the object in response to recognition of the object (e.g., Fig.10, the object (e.g., Tree) is displayed in the final output 46).
The motivation is the same as the rejection of claim 2.
Regarding claim 4; Moll in view of Ozcan and Kim teaches the apparatus of claim 3 as discussed above. Moll does not teach the passive neural network includes cells of photonic crystals or metamaterials configured to interact with the image lights in accordance with the first set of artificial neurons.
Ozcan teaches the passive neural network includes cells of photonic crystals or metamaterials configured to interact with the image lights in accordance with the first set of artificial neurons (para. [0020, 0104, 0134, and 0175], the substrate layer 16 may be formed by metamaterials or plasmonic structures).
The motivation is the same as the rejection of claim 2.
Regarding claim 12; Moll in view of Ozcan teaches the method of claim 11 as discussed above. The limitation of claim 12 is substantially similar to claims 2 and 3. Accordingly, claim 12 is rejected based on the same analysis as claims 2 and 3.
Regarding claim 13; Moll in view of Ozcan teaches the method of claim 12 as discussed above. The limitation of claim 13 is substantially similar to claim 4. Accordingly, claim 13 is rejected based on the same analysis as claim 4.
Claims 5-7 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Moll et al. (US Pub. 2025/0218143 A1) in view of Ozcan et al. (US Pub. 2021/0142170 A1) and Kim et al. (US Pub. 2025/0036915 A1) as applied to claims 4 and 13 above, and further in view of Demaj et al. (US Pub. 2023/0131067 A1).
Regarding claim 5; Moll in view of Ozcan and Kim teaches the apparatus of claim 4 as discussed above. Moll in view of Ozcan does not teach a battery pack configured to power the processor in response to an output of the first set of artificial neurons exceeding a threshold.
Demaj teaches a battery pack (Fig.1, a microcontroller MCU is powered by a battery) configured to power the processor in response to an output of the first set of artificial neurons exceeding a threshold (para. [0003, 0010, and 0017], Demaj discloses a method of detecting events or elements in physical signals by implementing an artificial neural network. The neural network may be operated in two modes: a nominal mode, in which the neural network is executed by taking as input a physical signal having a first resolution, called the nominal resolution, or a low power mode, in which the neural network is executed by taking as input a physical signal having a second resolution, called the reduced resolution, lower than the first resolution. The method includes determining a probability of presence of the event or the element by an implementation of the neural network; operating the neural network according to the nominal mode when the probability of presence of the event or the element is greater than a threshold; and operating the neural network according to the low power mode when the probability of presence of the event or the element is below the threshold. More specifically, the probability of the presence of an event (or object) may be assessed from a number of detections obtained over a given period (para. [0040]). The physical signal may be an image of a scene acquired by a camera (para. [0023]). In other words, Demaj discloses a method of detecting a number of events (e.g., objects) in a scene acquired by a camera; operating the neural network in a nominal mode (i.e., full power mode) if the number of events exceeds a threshold; and operating the neural network in a low power mode if the number of events is lower than the threshold).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the AR system of Moll in view of Ozcan to include the teaching of Demaj of operating a neural network in a nominal mode or a low power mode based on detecting a number of events (e.g., objects) in a scene obtained by a camera. The motivation would have been to reduce power consumption (Demaj, para. [0006-0009]).
Regarding claim 6; Moll in view of Ozcan, Kim, and Demaj teaches the apparatus of claim 5 as discussed above. Moll in view of Ozcan does not teach the processor and a portion of the light sensing pixels are configured in a low power mode before the output exceeding the threshold.
Demaj teaches the microcontroller is configured in a low power mode before the output exceeding the threshold (see the analysis of claim 5 above, when the number of events (e.g., objects) in a scene is below the threshold, the microcontroller operates in a low power mode).
Therefore, the combination of Moll, Ozcan, and Demaj further teaches "the processor and a portion of the light sensing pixels are configured in a low power mode before the output exceeding the threshold". The motivation is the same as the rejection of claim 5.
Regarding claim 7; Moll in view of Ozcan, Kim, and Demaj teaches the apparatus of claim 6 as discussed above. Moll does not teach the outputs of the first set of artificial neurons are configured to be representative of features extracted from an image representative of the scene.
Ozcan teaches the outputs of the first set of artificial neurons are configured to be representative of features extracted from an image representative of the scene (refer to the analysis of claim 2 above; Figs. 29D, 29E, 29I, 29J, para. [0002, 0111, 0125, 0141, and 0192], the intensity patterns at the output plane 22 of the optical neural network correspond to features from an image of a scene).
The motivation is the same as the rejection of claim 2.
Regarding claim 14; Moll in view of Ozcan teaches the method of claim 13 as discussed above. The limitation of claim 14 is substantially similar to claim 5. Accordingly, claim 14 is rejected based on the same analysis as claim 5.
Regarding claim 15; Moll in view of Ozcan and Demaj teaches the method of claim 14 as discussed above. The limitation of claim 15 is substantially similar to claim 6. Accordingly, claim 15 is rejected based on the same analysis as claim 6.
Claims 8-10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Moll et al. (US Pub. 2025/0218143 A1) in view of Ozcan et al. (US Pub. 2021/0142170 A1), Kim et al. (US Pub. 2025/0036915 A1), and Demaj et al. (US Pub. 2023/0131067 A1) as applied to claims 7 and 15 above, and further in view of Maizels et al. (US Pub. 2024/0079012 A1).
Regarding claim 8; Moll in view of Ozcan, Kim, and Demaj teaches the apparatus of claim 7 as discussed above. Moll does not teach the image lights are a plane wave rebounded from objects in the scene.
Ozcan teaches the image lights are a plane wave rebounded from objects in the scene (Figs. 1 and 2, the image lights are a plane wave rebounded from the object 14 in the scene to the optical neural network 10).
The motivation is the same as the rejection of claim 2.
Moll in view of Ozcan, Kim, and Demaj does not teach monochromatic light.
Maizels teaches monochromatic light (para. [0007 and 0233], Maizels discloses a method of facial detection. The method comprises projecting coherent light toward a user's face and detecting light reflecting from the user's face. The light may be a monochromatic wave).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the AR system of Moll in view of Ozcan and Demaj to include the teaching of Maizels of using monochromatic light for facial detection. The motivation would have been to improve accuracy by reducing background noise (scattering/reflection) and to enhance contrast between features.
Regarding claim 9; Moll in view of Ozcan, Kim, Demaj, and Maizels teaches the apparatus of claim 8 as discussed above. Moll further teaches a display device integrated with the pair of glasses and configured to present the information (Figs.1 and 2, para. [0024], a right optical element 110 and a left optical element 108 can be a lens, a display, a display assembly, or a combination of the foregoing. For example, Fig.2 shows near eye display 206).
Regarding claim 10; Moll in view of Ozcan, Kim, Demaj, and Maizels teaches the apparatus of claim 9 as discussed above. Moll further teaches a wireless transceiver (a communication 342, Fig.3) configured to communicate with a computing device to retrieve the information (para. [0041]).
Regarding claim 16; Moll in view of Ozcan and Demaj teaches the method of claim 15 as discussed above. The limitation of claim 16 is substantially similar to claims 4, 7, and 8. Accordingly, claim 16 is rejected based on the same analysis as claims 4, 7, and 8.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Moll et al. (US Pub. 2025/0218143 A1) in view of Ozcan et al. (US Pub. 2021/0142170 A1) as applied to claim 17 above, and further in view of Demaj et al. (US Pub. 2023/0131067 A1).
Regarding claim 18; Moll in view of Ozcan teaches the computing device of claim 17 as discussed above.
Moll further teaches logic circuits (processors 302, Fig.3) configured via instructions (para. [0036-0038]) to perform computations (para. [0055], the object detection system is configured to analyze the camera data to identify one or more objects included in the camera data).
Moll does not teach that the logic circuits configured to perform computations on a second set of artificial neurons of the artificial neural network, responsive to the features as inputs, wherein the logic circuits include a digital accelerator configured to accelerate multiplication and accumulation operations applied on weight matrices of the second set of artificial neurons; and wherein the image sensor is further configured to convert the light pattern into data representative of features extracted by the first set of artificial neurons from the image lights.
Ozcan teaches logic circuits configured to perform computations on a second set of artificial neurons of the artificial neural network (Figs. 9 and 33A, a digital neural network 44), and wherein the image sensor is further configured to convert the light pattern into data representative of features extracted by the first set of artificial neurons from the image lights (Fig.1, para. [0100, 0115, and 0179], the optical sensor 26 comprising a plurality of optical detectors is configured to capture the output optical signal 22 and to digitize the output optical signal. A computing device 27 is coupled to the optical sensor 26 for acquiring, storing, processing, and manipulating the optical signal 22. In other words, the optical sensor 26 is configured to convert the light intensity obtained from the diffractive neural network 10 into data representative of outputs of a first set of artificial neurons of the diffractive neural network).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the neural network in the AR system of Moll to include the teaching of Ozcan of providing a (passive) diffractive neural network to analyze the input light, and providing optical sensors for sensing the optical signal and outputting data resulting from the plurality of optically transmissive and/or reflective substrate layers for each of the plurality of diffractive optical neural network devices. The motivation would have been to increase computation speed and to reduce power consumption (Ozcan, para. [0133 and 0135]).
Moll in view of Ozcan does not teach responsive to the features as inputs, wherein the logic circuits include a digital accelerator configured to accelerate multiplication and accumulation operations applied on weight matrices of the second set of artificial neurons.
Demaj teaches responsive to the features as inputs, wherein the logic circuits include a digital accelerator configured to accelerate multiplication and accumulation operations applied on weight matrices of the second set of artificial neurons (para. [0003, 0010, 0017, and 0031], Demaj discloses a method of detecting events or elements in physical signals by implementing an artificial neural network. The neural network may be operated in two modes: a nominal mode, in which the neural network is executed by taking as input a physical signal having a first resolution, called the nominal resolution, or a low power mode, in which the neural network is executed by taking as input a physical signal having a second resolution, called the reduced resolution, lower than the first resolution. The method includes determining a probability of presence of the event or the element by an implementation of the neural network; operating the neural network according to the nominal mode when the probability of presence of the event or the element is greater than a threshold; and operating the neural network according to the low power mode when the probability of presence of the event or the element is below the threshold. More specifically, the probability of the presence of an event (or object) may be assessed from a number of detections obtained over a given period (para. [0040]). The physical signal may be an image of a scene acquired by a camera (para. [0023]). In other words, Demaj discloses a method of detecting a number of events (e.g., objects) in a scene acquired by a camera; operating the neural network in a nominal mode (i.e., full power mode) if the number of events exceeds a threshold; and operating the neural network in a low power mode if the number of events is lower than the threshold).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the AR system of Moll in view of Ozcan to include the teaching of Demaj of operating a neural network in a nominal mode or a low power mode based on detecting a number of events (e.g., objects) in a scene obtained by a camera. The motivation would have been to reduce power consumption (Demaj, para. [0006-0009]).
Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Moll et al. (US Pub. 2025/0218143 A1) in view of Ozcan et al. (US Pub. 2021/0142170 A1) and Demaj et al. (US Pub. 2023/0131067 A1) as applied to claim 18 above, and further in view of Dahlgren et al. (US Pub. 2026/0006304 A1).
Regarding claim 19; Moll in view of Ozcan and Demaj teaches the computing device of claim 18 as discussed above. Moll in view of Ozcan and Demaj does not teach the image sensor includes a first portion configured to provide an interest level indicator; and the logic circuits are configured in a low power mode when the interest level indicator is below a threshold.
Dahlgren teaches the image sensor (an image sensor 200, Fig.2a) includes a first portion (change detectors 231 and 232; Fig.2a) configured to provide an interest level indicator (para. [0039 and 0262], the change detectors are configured to detect events in a scene (e.g., motion, shape, or optical characteristics such as changes in light conditions)); and the logic circuits are configured in a low power mode when the interest level indicator is below a threshold (para. [0039], the image sensor may be partially inactive or operated in a low frame rate until the output from the change detectors triggers the image sensor (e.g., camera module) based on a detected motion).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the AR system of Moll in view of Ozcan and Demaj to include the teaching of Dahlgren of switching an operation mode of an image sensor based on an output from change detectors included in the image sensor. The motivation would have been to reduce power consumption (Dahlgren, para. [0071]).
Regarding claim 20; Moll in view of Ozcan, Demaj, and Dahlgren teaches the computing device of claim 19 as discussed above. Moll in view of Ozcan and Demaj does not teach the image sensor includes a second portion configured to be inactive in generating outputs when the interest level indicator is below a threshold.
Dahlgren teaches the image sensor includes a second portion configured to be inactive in generating outputs when the interest level indicator is below a threshold (para. [0039 and 0262], the image sensor may be at least partially inactive based on an output from change detectors (e.g., motionless)).
The motivation is the same as the rejection of claim 19.
Inquiries
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NGUYEN H TRUONG whose telephone number is (571)270-1630. The examiner can normally be reached M-F: 10-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chanh Nguyen can be reached at 571-272-7772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NGUYEN H TRUONG/Examiner, Art Unit 2623
/CHANH D NGUYEN/Supervisory Patent Examiner, Art Unit 2623