Prosecution Insights
Last updated: April 19, 2026
Application No. 18/213,201

ALIGNMENT-BASED FAULT DETECTION FOR PHYSICAL COMPONENTS

Non-Final OA: §103, §112, Double Patenting
Filed: Jun 22, 2023
Examiner: ZAK, JACQUELINE ROSE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)

Grant Probability: 67% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 55%

Examiner Intelligence

Career Allow Rate: 67% (above average; 8 granted / 12 resolved; +4.7% vs TC avg)
Interview Lift: -11.4% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 10m (typical timeline)
Currently Pending: 46
Total Applications: 58 (across all art units)
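The career allow rate above follows directly from the resolved-case counts, which can be sanity-checked in a few lines (the +4.7-point delta is taken from the panel, not independently recomputed):

```python
# Career allow rate from the resolved-case counts shown above.
granted, resolved = 8, 12
allow_rate = granted / resolved * 100       # 66.7%, displayed as 67%
print(f"Career allow rate: {allow_rate:.1f}%")

# Tech Center average implied by the +4.7% delta reported in the panel.
tc_average = allow_rate - 4.7
print(f"Implied TC average: {tc_average:.1f}%")
```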

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Tech Center average values are estimates • Based on career data from 12 resolved cases

Office Action

Grounds: §103, §112, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-18 are pending for examination in the application filed 06/22/2023.

Priority

Acknowledgement is made of Applicant's claim to priority of provisional applications 63409474, 63409490, 63409485, 63409482, 63409496, 63409487, 63409480, and 63409478, each having a filing date of 09/23/2022.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/26/2024 has been considered by the examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-2, 4, 13, and 17-18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3, 6, 8, 10, and 15-16 of copending Application No. 18/213,203 (reference application). This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented. Although the claims at issue are not identical, they are not patentably distinct from each other, as will be described in reference to the claim comparison below (presented in the Office action as a two-column table with identical elements emphasized in bold).

Reference Application 18/213,203, claim 1: A method, comprising: in accordance with a determination to determine whether a component has a fault, causing, via an emitter, output of an emission; after causing output of the emission, receiving, via a first sensor, data with respect to a physical environment; and in response to receiving the data: in accordance with a determination that a first set of one or more criteria is met, determining that the component has a fault; and in accordance with a determination that a second set of one or more criteria is met, performing a first operation, wherein the second set of one or more criteria includes a criterion that is based on one or more characteristics of an artifact corresponding to the emission, and wherein the second set of one or more criteria is different from the first set of one or more criteria.

Reference Application 18/213,203, claim 3: The method of claim 1, wherein the first set of one or more criteria includes a criterion that is met when the artifact corresponding to the emission is not detected.

Current Application 18/213,201, claim 1: A method, comprising: in accordance with a determination to determine whether a component has a fault, causing, via an emitter, output of an emission; after causing output of the emission, receiving, via a sensor, data with respect to a physical environment; and in response to receiving the data: in accordance with a determination that a first set of one or more criteria is met, determining that the component has a fault, wherein the first set of one or more criteria includes a first criterion that is met when a predicted artifact corresponding to the emission is not detected; and in accordance with a determination that a second set of one or more criteria is met, performing a first operation, wherein the second set of one or more criteria includes a second criterion that is met when the predicted artifact corresponding to the emission is detected, and wherein the second set of one or more criteria is different from the first set of one or more criteria.

Reference claim 6: The method of claim 1, wherein the emission is light output via a light source.

Current claim 2: The method of claim 1, wherein the emission is light output via a light source.

Reference claim 8: The method of claim 1, wherein the first sensor is a camera, and wherein the data includes an image captured by the camera.

Current claim 4: The method of claim 1, wherein the sensor is a camera, and wherein the data includes an image captured by the camera.

Reference claim 10: The method of claim 1, further comprising: in response to determining that the component has a fault, performing a corrective operation.

Current claim 13: The method of claim 1, further comprising: in response to determining that the component has a fault, performing a corrective action.

Reference claim 15: A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device that is in communication with an emitter and a first sensor, the one or more programs including instructions for: in accordance with a determination to determine whether a component has a fault, causing, via the emitter, output of an emission; after causing output of the emission, receiving, via the first sensor, data with respect to a physical environment; and in response to receiving the data: in accordance with a determination that a first set of one or more criteria is met, determining that the component has a fault; and in accordance with a determination that a second set of one or more criteria is met, performing a first operation, wherein the second set of one or more criteria includes a criterion that is based on one or more characteristics of an artifact corresponding to the emission, and wherein the second set of one or more criteria is different from the first set of one or more criteria.

Current claim 17: A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device that is in communication with an emitter and a sensor, the one or more programs including instructions for: in accordance with a determination to determine whether a component has a fault, causing, via the emitter, output of an emission; after causing output of the emission, receiving, via the sensor, data with respect to a physical environment; and in response to receiving the data: in accordance with a determination that a first set of one or more criteria is met, determining that the component has a fault, wherein the first set of one or more criteria includes a first criterion that is met when a predicted artifact corresponding to the emission is not detected; and in accordance with a determination that a second set of one or more criteria is met, performing a first operation, wherein the second set of one or more criteria includes a second criterion that is met when the predicted artifact corresponding to the emission is detected, and wherein the second set of one or more criteria is different from the first set of one or more criteria.

Reference claim 16: An electronic device, comprising: an emitter; a first sensor; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: in accordance with a determination to determine whether a component has a fault, causing, via the emitter, output of an emission; after causing output of the emission, receiving, via the first sensor, data with respect to a physical environment; and in response to receiving the data: in accordance with a determination that a first set of one or more criteria is met, determining that the component has a fault; and in accordance with a determination that a second set of one or more criteria is met, performing a first operation, wherein the second set of one or more criteria includes a criterion that is based on one or more characteristics of an artifact corresponding to the emission, and wherein the second set of one or more criteria is different from the first set of one or more criteria.

Current claim 18: An electronic device, comprising: an emitter; a sensor; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: in accordance with a determination to determine whether a component has a fault, causing, via the emitter, output of an emission; after causing output of the emission, receiving, via the sensor, data with respect to a physical environment; and in response to receiving the data: in accordance with a determination that a first set of one or more criteria is met, determining that the component has a fault, wherein the first set of one or more criteria includes a first criterion that is met when a predicted artifact corresponding to the emission is not detected; and in accordance with a determination that a second set of one or more criteria is met, performing a first operation, wherein the second set of one or more criteria includes a second criterion that is met when the predicted artifact corresponding to the emission is detected, and wherein the second set of one or more criteria is different from the first set of one or more criteria.

Independent claims 1, 17, and 18 of Current Application 18/213,201 are identical to claims 1, 15, and 16 of Reference Application 18/213,203, respectively, other than the limitation in the current application: wherein the first set of one or more criteria includes a first criterion that is met when a predicted artifact corresponding to the emission is not detected. This limitation is identical to claim 3 of reference application ’203, as shown in the comparison above. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to include a first criterion that is met when a predicted artifact corresponding to the emission is not detected, which would produce known results with a reasonable expectation of success.
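For readers parsing the claim language, the branch structure recited in claim 1 of the current application reduces to ordinary control flow. The sketch below is illustrative only; the emitter, sensor, and detection callables are hypothetical placeholders, not anything disclosed in the application:

```python
def check_component(emitter, sensor, artifact_detected, first_operation):
    """Illustrative sketch of the decision logic in current claim 1.

    artifact_detected(data) -> bool: whether the predicted artifact
    corresponding to the emission appears in the sensor data.
    """
    # In accordance with a determination to check for a fault,
    # cause output of an emission via the emitter.
    emitter.emit()

    # After causing the emission, receive data about the environment.
    data = sensor.read()

    # First criterion: predicted artifact NOT detected -> fault.
    if not artifact_detected(data):
        return "fault"

    # Second criterion: predicted artifact detected -> first operation.
    first_operation()
    return "no fault"
```

With stubbed emitter and sensor objects, a missing artifact yields "fault", while a detected artifact triggers the first operation instead; the two criteria are mutually exclusive here, which is one reading of "different from the first set".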
Dependent claims 2 and 4 of Current Application 18/213,201 are identical to claims 6 and 8 of Reference Application 18/213,203, respectively. Dependent claim 13 of Current Application 18/213,201 is identical to claim 10 of Reference Application 18/213,203 other than the corrective action. The reference application ’203 describes a corrective operation. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to substitute a corrective operation for a corrective action, which would produce known results with a reasonable expectation of success.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claims 1, 17, and 18, the limitations of "in accordance with a determination to determine whether a component has a fault" followed by "determining that the component has a fault" are unclear. It is unclear whether this determination step is the same as the determination that has already occurred. Please clarify. Claims 2-16 are rejected because of their dependency on claim 1. Similarly, please clarify whether the fault described in claims 7-9, 11, and 13 is understood to be the same fault. For the purpose of compact prosecution, these faults will be understood to be the same fault.

Claim 9 describes "a second optical element", which is unclear because no first optical element was claimed. It is also unclear whether the terms "optical element" and "optical component" refer to the same part. Please clarify. For the purpose of compact prosecution, "optical element" and "optical component" are understood to be the same.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-6, 8-9, 11-14, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Beck (US9245333B1) in view of Takahashi (JP2020038288A).

Regarding claim 1, Beck teaches a method, comprising: in accordance with a determination to determine whether a component has a fault, causing, via an emitter, output of an emission ([col. 6 ln. 22-26] In the example of FIG. 2, system 10 may include light source 36 for determining if there is a near-field obstruction such as near-field obstruction 34 present within field-of-view 30. As shown in FIG. 2, light source 36 may emit light such as light 38); after causing output of the emission, receiving, via a sensor, data with respect to a physical environment ([col. 5 ln. 59-61] Image sensor 14 may have a field-of-view 30. Image sensor 14 may generate image data in response to light received from field-of-view 30); and in response to receiving the data: in accordance with a determination that a first set of one or more criteria is met, determining that the component has a fault ([col. 9 ln. 6-12] If circuitry 16 determines that the image data includes portions generated in response to reflected light 40 (e.g., if circuitry 16 determines that the image data includes portions having the predetermined pattern as emitted by source 36) over the period of time, circuitry 16 may determine that near-field obstruction 34 is present and processing may proceed to step 50), and in accordance with a determination that a second set of one or more criteria is met (obstruction prevents imaging sensor from functioning properly), performing a first operation, wherein the second set of one or more criteria includes a second criterion that is met when the predicted artifact corresponding to the emission is detected, and wherein the second set of one or more criteria is different from the first set of one or more criteria ([col. 9 ln. 17-36] At step 50, control and processing circuitry 16 may take appropriate action based on the captured image data. For example, control and processing circuitry 16 may disable the imaging system if the obstruction is determined to prevent the image sensor from obtaining a requisite minimum amount of image data from the surroundings of vehicle 100…As another example, control and processing circuitry 16 may indicate to a user that the imaging system needs inspection and/or repair (e.g., may display an alert to the user, may issue an audible alert, etc.)).

Beck does not teach wherein the first set of one or more criteria includes a first criterion that is met when a predicted artifact corresponding to the emission is not detected. Takahashi, in the same field of endeavor of fault detection, teaches wherein the first set of one or more criteria includes a first criterion that is met when a predicted artifact corresponding to the emission is not detected ([pg. 7 para. 3-4] In FIG. 12C, the center of the spot Sp is outside the light receiving range A1. In this case, the output intensity of the light receiving unit 11 becomes an insufficient value as shown in FIG. In FIG. 14, the peak value P2 of the output value of the light receiving unit 11 is smaller than the threshold value Pt. The cause of the deviation of the spot Sp from the light receiving range A1 is the positional deviation of the screen 9. FIG. 15 shows the positional relationship between the scanning region R1 and the light receiving unit 11 when the position of the screen 9 is appropriate, and FIG. 16 shows the scanning region when the position of the screen 9 is misaligned. The positional relationship between R1 and the light receiving unit 11 is shown). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Beck with the teachings of Takahashi to determine that the component has a fault when the emission is not detected because "As a cause of the output intensity of the light receiving unit 11 becoming abnormal, a spot of the laser beam may come off from the light receiving unit 11" [Takahashi pg. 7 para 1].

Regarding claim 2, Beck and Takahashi teach the method of claim 1. Beck further teaches wherein the emission is light output via a light source ([col. 6 ln. 22-26] In the example of FIG. 2, system 10 may include light source 36 for determining if there is a near-field obstruction such as near-field obstruction 34 present within field-of-view 30. As shown in FIG. 2, light source 36 may emit light such as light 38).

Regarding claim 3, Beck and Takahashi teach the method of claim 2. Takahashi teaches wherein the light is collimated light of a single wavelength ([pg. 4 para. 5-6] The red laser diode 71, the green laser diode 72, and the blue laser diode 73 are laser elements that generate laser light in different wavelength bands. Each of the laser diodes 71, 72, and 73 emits a laser beam having an output corresponding to the supplied current value. The red laser diode 71 generates laser light in a red wavelength band. The laser light output from the red laser diode 71 passes through the collimator lens 79A (see FIG. 4) and is irradiated on the dichroic mirror 74). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Beck with the teachings of Takahashi to use collimated light of a single wavelength because "The laser light of each color reflected by the mirror 76 passes through the emission hole 70a of the housing 70 and enters the MEMS mirror 8" [Takahashi pg. 4 para. 7] and "The MEMS mirror 8 generates an image on the screen 9 by reflecting the laser light toward the screen 9 while rotating and oscillating the mirror 82" [Takahashi pg. 5 para. 2].

Regarding claim 4, Beck and Takahashi teach the method of claim 1. Beck further teaches wherein the sensor is a camera, and wherein the data includes an image captured by the camera ([col. 3 ln. 56-63] FIG. 1 is a diagram of an illustrative system having an imaging system that uses an image sensor to capture images and a corresponding host subsystem. System 100 of FIG. 1 may, for example, be a vehicle safety system (e.g., an active braking system or other vehicle safety system), a surveillance system, an electronic device such as a camera, a cellular telephone, a video camera, or other electronic device that captures digital image data).

Regarding claim 5, Beck and Takahashi teach the method of claim 4. Beck further teaches wherein the component includes an optical component in the optical path of the camera ([col. 5 ln. 43-50] In scenarios where image sensor 14 is mounted within the interior of a vehicle, protective layer 32 may be used in a vehicle to separate the image sensor from the exterior of the vehicle. Protective layer 32 may be made of a transparent material to allow image sensor 14 to capture accurate images of the surroundings of the vehicle. Protective layer 32 may be formed from glass, plastic, plexiglass, or any other desired material, for example). Beck does not teach wherein the optical component includes an embedded component, and wherein the first criterion is met when the predicted artifact corresponding to the emission is not detected at a location corresponding to the embedded component. Takahashi teaches wherein the optical component includes an embedded component ([pg. 3 para. 5] As shown in FIGS. 4 and 5, the screen 9 is fixed to a wall 61 of the housing 6. The wall 61 has a rectangular opening 62. The screen 9 is fixed to the wall 61 from the inside, and closes the opening 62. The screen 9 of the present embodiment is a microlens array having many microlenses 9a. The microlenses 9a are arranged along the horizontal and vertical directions of the image on the screen 9. The screen 9 is arranged with the convex surface of the micro lens 9a facing the MEMS mirror 8 side. Laser light transmitted through the micro lens 9a is diffused by the micro lens 9a), and wherein the first criterion is met when the predicted artifact corresponding to the emission is not detected at a location corresponding to the embedded component ([pg. 7 para. 3-4] In FIG. 14, the peak value P2 of the output value of the light receiving unit 11 is smaller than the threshold value Pt. The cause of the deviation of the spot Sp from the light receiving range A1 is the positional deviation of the screen 9. FIG. 15 shows the positional relationship between the scanning region R1 and the light receiving unit 11 when the position of the screen 9 is appropriate, and FIG. 16 shows the scanning region when the position of the screen 9 is misaligned. The positional relationship between R1 and the light receiving unit 11 is shown).
Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Beck with the teachings of Takahashi to determine that the component has a fault when the emission is not detected at a location corresponding to the embedded component because "the screen 9 diffuses the laser light…and emits the laser light" [Takahashi pg. 3 para. 5] and "As a cause of the output intensity of the light receiving unit 11 becoming abnormal, a spot of the laser beam may come off from the light receiving unit 11" [Takahashi pg. 7 para 1]. Regarding claim 6, Beck and Takahashi teach the method of claim 5. Beck further teaches a field of view of a camera ([col. 3 ln. 34-37] An image sensor may be formed as part of a camera module that includes a verification system to ensure that the field-of-view of the image sensor is not obstructed). Beck does not teach wherein the optical component includes a plurality of embedded components, and wherein the plurality of embedded components are located proximate to an edge of a field of view. Takahashi teaches wherein the optical component includes a plurality of embedded components ([pg. 3 para. 5] As shown in FIGS. 4 and 5, the screen 9 is fixed to a wall 61 of the housing 6. The wall 61 has a rectangular opening 62. The screen 9 is fixed to the wall 61 from the inside, and closes the opening 62. The screen 9 of the present embodiment is a microlens array having many microlenses 9a. The microlenses 9a are arranged along the horizontal and vertical directions of the image on the screen 9. The screen 9 is arranged with the convex surface of the micro lens 9a facing the MEMS mirror 8 side. Laser light transmitted through the micro lens 9a is diffused by the micro lens 9a), and wherein the plurality of embedded components are located proximate to an edge of a field of view (Figure 4). 
Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Beck with the teachings of Takahashi to have a plurality of embedded components proximate to the edge because "the screen 9 diffuses the laser light…and emits the laser light" [Takahashi pg. 3 para. 5]. Regarding claim 8, Beck and Takahashi teach the method of claim 4. Beck further teaches the camera ([col. 3 ln. 56-63] FIG. 1 is a diagram of an illustrative system having an imaging system that uses an image sensor to capture images and a corresponding host subsystem. System 100 of FIG. 1 may, for example, be a vehicle safety system (e.g., an active braking system or other vehicle safety system), a surveillance system, an electronic device such as a camera, a cellular telephone, a video camera, or other electronic device that captures digital image data). Beck does not teach wherein determining that the component includes a fault includes determining that a location or orientation of an optical component of the component is misaligned. Takahashi teaches wherein determining that the component includes a fault includes determining that a location or orientation of an optical component of the component is misaligned ([pg. 7 para. 4] The cause of the deviation of the spot Sp from the light receiving range A1 is the positional deviation of the screen 9. FIG. 15 shows the positional relationship between the scanning region R1 and the light receiving unit 11 when the position of the screen 9 is appropriate, and FIG. 16 shows the scanning region when the position of the screen 9 is misaligned. The positional relationship between R1 and the light receiving unit 11 is shown). 
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Beck with the teachings of Takahashi to "detect an abnormality of the screen 9 when the position of the screen 9 is displaced such that the display unit 11 deviates from the scanning region… [and] can suppress a decrease in display quality" [Takahashi pg. 10 para. 1].

Regarding claim 9, Beck and Takahashi teach the method of claim 8. Beck teaches an optical element at least partially in the optical path of the camera ([col. 8 ln. 1-5] In scenarios where image sensor 14 is positioned behind protective layer 32, image sensor 14 may be positioned at a desired distance 42 behind the protective layer. Distance 42 may be, for example, less than 2 centimeters, between 2 and 10 centimeters, or greater than 10 centimeters).

Beck does not teach in accordance with a determination to determine whether a second optical element of the component has a fault, causing, via a second emitter different from the emitter, output of a second emission different from the emission, wherein the component includes a plurality of separate, disconnected optical components including the second optical element, and wherein the plurality of separate, disconnected optical components are at least partially in the optical path.

Takahashi teaches in accordance with a determination to determine whether a second optical element of the component has a fault ([pg. 9 para. 8] The screen 9 has a display area 91 and a non-display area 92, and is arranged so that the display area 91 and the non-display area 92 overlap the scanning area R1. The display area 91 is an area that transmits irradiated light. The non-display area 92 is an area provided around the display area 91 and is shielded so that irradiated light is not transmitted. [pg. 11 para. 4] FIG. 21 shows a state where the second non-display area 92B is displaced along the vertical direction of the image. In the case of such a displacement, the spot Sp of the laser beam deviates from the light receiving range A1 of the second light receiving unit 11B in the abnormality detection. As a result, an abnormality is detected based on the detection result of the second light receiving unit 11B); causing, via a second emitter different from the emitter, output of a second emission different from the emission ([pg. 4 para. 4] As shown in FIG. 3, the laser unit 7 includes a housing 70, a red laser diode 71, a green laser diode 72, a blue laser diode 73); and wherein the component includes a plurality of separate, disconnected optical components including the second optical element, and wherein the plurality of separate, disconnected optical components are at least partially in the optical path ([pg. 4 para. 7] The dichroic mirror 74 transmits red laser light and reflects green laser light. The red laser light and the green laser light reflected by the dichroic mirror 74 become laser light on the same optical axis and enter the dichroic mirror 75. The dichroic mirror 75 transmits the red and green laser beams and reflects the blue laser beam. The red and green laser beams and the blue laser beam reflected by the dichroic mirror 75 become laser beams on the same optical axis and enter the mirror 76. The mirror 76 is a mirror that totally reflects the laser light. The laser light of each color reflected by the mirror 76 passes through the emission hole 70a of the housing 70 and enters the MEMS mirror 8. [pg. 5 para. 2] The MEMS mirror 8 generates an image on the screen 9 by reflecting the laser light toward the screen 9).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Beck with the teachings of Takahashi to determine whether additional optical elements have a fault because "The first non-display area 92A is an area at one end of the screen 9 in the horizontal direction of the image.
The second non-display area 92B is an area at the other end of the screen 9 in the horizontal direction of the image" [Takahashi pg. 3 para. 8].

Regarding claim 11, Beck and Takahashi teach the method of claim 1. Beck further teaches in response to receiving the data, performing an object detection operation using the data, wherein the object detection operation is different from (1) the first operation and (2) determining whether the component has a fault ([col. 4 ln. 31-36] Still and video image data from image sensor 14 may be provided to control and processing circuitry 16 via path 26. Control and processing circuitry 16 may be used to perform image processing functions such as data formatting, adjusting white balance and exposure, implementing video image stabilization, face detection, etc.).

Regarding claim 12, Beck and Takahashi teach the method of claim 1. Takahashi teaches wherein the first set of one or more criteria includes a third criterion, different from the first criterion, that is met when a second predicted artifact, different from the predicted artifact, corresponding to the emission is not detected ([pg. 7 para. 6] The screen 9 shown in FIG. 16 is displaced from the original position such that the long side 93 is inclined with respect to the horizontal direction of the image. More specifically, the screen 9 is displaced so as to rotate about the first vertex V1. Due to this positional shift, the second light receiving unit 11B is shifted with respect to the second vertex V2 of the scanning region R1. In this case, in step S70 described later, it is determined that the light receiving intensity at the second light receiving unit 11B is abnormal. [pg. 10 para. 7] The control unit 10 of the present embodiment determines that there is an abnormality when at least one of the light receiving amount of the first light receiving unit 11A and the light receiving amount of the second light receiving unit 11B is out of a predetermined range).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Beck with the teachings of Takahashi to determine that a second emission artifact is not detected because "By determining an abnormality based on the amounts of light received by the two light receiving units 11A and 11B, the determination accuracy is improved." [Takahashi pg. 10 para. 7].

Regarding claim 13, Beck and Takahashi teach the method of claim 1. Beck further teaches in response to determining that the component has a fault, performing a corrective action ([col. 9 ln. 17-36] At step 50, control and processing circuitry 16 may take appropriate action based on the captured image data. For example, control and processing circuitry 16 may disable the imaging system if the obstruction is determined to prevent the image sensor from obtaining a requisite minimum amount of image data from the surroundings of vehicle 100…As another example, control and processing circuitry 16 may indicate to a user that the imaging system needs inspection and/or repair (e.g., may issue display an alert to the user, may issue an audible alert, etc.)).

Regarding claim 14, Beck and Takahashi teach the method of claim 1. Beck further teaches periodically causing, via the emitter, output of the emission ([col. 6 ln. 50-53] If desired, control and processing circuitry 16 may control light source 36 to emit light in a predetermined temporal and/or chromatic pattern (e.g., a predetermined intensity and color pattern with respect to time)).

Regarding claim 17, Beck teaches a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device ([col. 2 ln.
21-27] The vehicle safety system may include computing equipment (e.g., implemented on storage and processing circuitry having volatile or non-volatile memory and a processor such as a central processing system or other processing equipment) and corresponding drive control equipment that translates instructions generated by the computing equipment into mechanical operations associated with driving the vehicle) that is in communication with an emitter (light source 36) and a sensor (image sensor 14), the one or more programs including instructions for: in accordance with a determination to determine whether a component has a fault, causing, via an emitter, output of an emission ([col. 6 ln. 22-26] In the example of FIG. 2, system 10 may include light source 36 for determining if there is a near-field obstruction such as near-field obstruction 34 present within field-of-view 30. As shown in FIG. 2, light source 36 may emit light such as light 38); after causing output of the emission, receiving, via a sensor, data with respect to a physical environment ([col. 5 ln. 59-61] Image sensor 14 may have a field-of-view 30. Image sensor 14 may generate image data in response to light received from field-of-view 30); and in response to receiving the data: in accordance with a determination that a first set of one or more criteria is met, determining that the component has a fault ([col. 9 ln. 6-12] If circuitry 16 determines that the image data includes portions generated in response to reflected light 40 (e.g., if circuitry 16 determines that the image data includes portions having the predetermined pattern as emitted by source 36) over the period of time, circuitry 16 may determine that near-field obstruction 34 is present and processing may proceed to step 50), and in accordance with a determination that a second set of one or more criteria is met (obstruction prevents imaging sensor from functioning properly), performing a first operation, wherein the second set of one or more criteria includes a second criterion that is met when the predicted artifact corresponding to the emission is detected, and wherein the second set of one or more criteria is different from the first set of one or more criteria ([col. 9 ln. 17-36] At step 50, control and processing circuitry 16 may take appropriate action based on the captured image data. For example, control and processing circuitry 16 may disable the imaging system if the obstruction is determined to prevent the image sensor from obtaining a requisite minimum amount of image data from the surroundings of vehicle 100…As another example, control and processing circuitry 16 may indicate to a user that the imaging system needs inspection and/or repair (e.g., may issue display an alert to the user, may issue an audible alert, etc.)).

Beck does not teach wherein the first set of one or more criteria includes a first criterion that is met when a predicted artifact corresponding to the emission is not detected. Takahashi, in the same field of endeavor of fault detection, teaches wherein the first set of one or more criteria includes a first criterion that is met when a predicted artifact corresponding to the emission is not detected ([pg. 7 para. 3-4] In FIG. 12C, the spot Sp is out of the light receiving range A1. At least a part of the spot Sp is outside the light receiving range A1. In FIG. 12C, the center of the spot Sp is outside the light receiving range A1. In this case, the output intensity of the light receiving unit 11 becomes an insufficient value. In FIG. 14, the peak value P2 of the output value of the light receiving unit 11 is smaller than the threshold value Pt. The cause of the deviation of the spot Sp from the light receiving range A1 is the positional deviation of the screen 9. FIG. 15 shows the positional relationship between the scanning region R1 and the light receiving unit 11 when the position of the screen 9 is appropriate, and FIG. 16 shows the positional relationship between the scanning region R1 and the light receiving unit 11 when the position of the screen 9 is misaligned).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the medium of Beck with the teachings of Takahashi to determine that the component has a fault when the emission is not detected because "As a cause of the output intensity of the light receiving unit 11 becoming abnormal, a spot of the laser beam may come off from the light receiving unit 11" [Takahashi pg. 7 para. 1].

Regarding claim 18, Beck teaches an electronic device (FIGS. 1-2), comprising: an emitter (light source 36); a sensor (image sensor 14); one or more processors; and memory storing one or more programs configured to be executed by the one or more processors ([col. 2 ln.
21-27] The vehicle safety system may include computing equipment (e.g., implemented on storage and processing circuitry having volatile or non-volatile memory and a processor such as a central processing system or other processing equipment) and corresponding drive control equipment that translates instructions generated by the computing equipment into mechanical operations associated with driving the vehicle), the one or more programs including instructions for: in accordance with a determination to determine whether a component has a fault, causing, via an emitter, output of an emission ([col. 6 ln. 22-26] In the example of FIG. 2, system 10 may include light source 36 for determining if there is a near-field obstruction such as near-field obstruction 34 present within field-of-view 30. As shown in FIG. 2, light source 36 may emit light such as light 38); after causing output of the emission, receiving, via a sensor, data with respect to a physical environment ([col. 5 ln. 59-61] Image sensor 14 may have a field-of-view 30. Image sensor 14 may generate image data in response to light received from field-of-view 30); and in response to receiving the data: in accordance with a determination that a first set of one or more criteria is met, determining that the component has a fault ([col. 9 ln. 6-12] If circuitry 16 determines that the image data includes portions generated in response to reflected light 40 (e.g., if circuitry 16 determines that the image data includes portions having the predetermined pattern as emitted by source 36) over the period of time, circuitry 16 may determine that near-field obstruction 34 is present and processing may proceed to step 50), and in accordance with a determination that a second set of one or more criteria is met (obstruction prevents imaging sensor from functioning properly), performing a first operation, wherein the second set of one or more criteria includes a second criterion that is met when the predicted artifact corresponding to the emission is detected, and wherein the second set of one or more criteria is different from the first set of one or more criteria ([col. 9 ln. 17-36] At step 50, control and processing circuitry 16 may take appropriate action based on the captured image data. For example, control and processing circuitry 16 may disable the imaging system if the obstruction is determined to prevent the image sensor from obtaining a requisite minimum amount of image data from the surroundings of vehicle 100…As another example, control and processing circuitry 16 may indicate to a user that the imaging system needs inspection and/or repair (e.g., may issue display an alert to the user, may issue an audible alert, etc.)).

Beck does not teach wherein the first set of one or more criteria includes a first criterion that is met when a predicted artifact corresponding to the emission is not detected. Takahashi, in the same field of endeavor of fault detection, teaches wherein the first set of one or more criteria includes a first criterion that is met when a predicted artifact corresponding to the emission is not detected ([pg. 7 para. 3-4] In FIG. 12C, the spot Sp is out of the light receiving range A1. At least a part of the spot Sp is outside the light receiving range A1. In FIG. 12C, the center of the spot Sp is outside the light receiving range A1. In this case, the output intensity of the light receiving unit 11 becomes an insufficient value. In FIG. 14, the peak value P2 of the output value of the light receiving unit 11 is smaller than the threshold value Pt. The cause of the deviation of the spot Sp from the light receiving range A1 is the positional deviation of the screen 9. FIG. 15 shows the positional relationship between the scanning region R1 and the light receiving unit 11 when the position of the screen 9 is appropriate, and FIG. 16 shows the positional relationship between the scanning region R1 and the light receiving unit 11 when the position of the screen 9 is misaligned).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Beck with the teachings of Takahashi to determine that the component has a fault when the emission is not detected because "As a cause of the output intensity of the light receiving unit 11 becoming abnormal, a spot of the laser beam may come off from the light receiving unit 11" [Takahashi pg. 7 para. 1].

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Beck in view of Takahashi and Hu (US20230215045A1).

Regarding claim 7, Beck and Takahashi teach the method of claim 6. Beck further teaches determining a location or orientation of the optical component relative to the camera ([col. 8 ln. 1-5] In scenarios where image sensor 14 is positioned behind protective layer 32, image sensor 14 may be positioned at a desired distance 42 behind the protective layer. Distance 42 may be, for example, less than 2 centimeters, between 2 and 10 centimeters, or greater than 10 centimeters).
Beck does not teach wherein determining that the component includes a fault includes determining that a location or orientation of the optical component has changed, wherein the location and orientation are determined based on at least 4 embedded components. Takahashi teaches wherein determining that the component includes a fault includes determining that a location or orientation of the optical component has changed ([pg. 7 para. 4] The cause of the deviation of the spot Sp from the light receiving range A1 is the positional deviation of the screen 9. FIG. 15 shows the positional relationship between the scanning region R1 and the light receiving unit 11 when the position of the screen 9 is appropriate, and FIG. 16 shows the positional relationship between the scanning region R1 and the light receiving unit 11 when the position of the screen 9 is misaligned), wherein the location and orientation are determined based on at least 4 embedded components (microlenses 9a in the screen of FIG. 4).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Beck with the teachings of Takahashi to "detect an abnormality of the screen 9 when the position of the screen 9 is displaced such that the display unit 11 deviates from the scanning region… [and] can suppress a decrease in display quality" [Takahashi pg. 10 para. 1].

Beck does not teach wherein the location and orientation are defined in 6 degrees. Hu, in the same field of endeavor of alignment-based fault detection, teaches wherein the location and orientation are defined in 6 degrees ([0001] A set of parameters with six degrees-of-freedom (x, y, z, roll, pitch and yaw) is used to represent the transform from a camera coordinate system to a reference coordinate system. An alignment process runs offline and/or online to determine these parameters. An alignment-related fault of an on-vehicle camera refers to a fault in the alignment process, which may be caused by system hardware issues, data quality issues, system degradation, vibration, an undetected or unwanted mechanical adjustment, etc.).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Beck with the teachings of Hu to determine the location and orientation in 6 degrees because "Correct alignment of one or more on-vehicle cameras relative to a reference such as ground is necessary for operation of a bird's eye view imaging system, travel lane sensing, autonomic vehicle control, etc." [Hu 0001].

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Beck in view of Takahashi and Monahan (US20240010121A1).

Regarding claim 10, Beck and Takahashi teach the method of claim 1. Monahan, in the same field of endeavor of fault detection, teaches in accordance with a determination that the emitter is not outputting an e

Prosecution Timeline

Jun 22, 2023: Application Filed
Aug 26, 2025: Non-Final Rejection, §103/§112/§DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586340: PIXEL PERSPECTIVE ESTIMATION AND REFINEMENT IN AN IMAGE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12462343: MEDICAL DIAGNOSTIC APPARATUS AND METHOD FOR EVALUATION OF PATHOLOGICAL CONDITIONS USING 3D OPTICAL COHERENCE TOMOGRAPHY DATA AND IMAGES (granted Nov 04, 2025; 2y 5m to grant)
Patent 12373946: ASSAY READING METHOD (granted Jul 29, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 55% (-11.4% lift)
Median Time to Grant: 2y 10m
PTA Risk: Low

Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
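As a worked check on the figures above, the headline numbers are consistent with simple arithmetic on the examiner's resolved-case counts. This is an illustrative sketch only, assuming the tool derives grant probability as granted/resolved and applies the reported interview lift as a straight percentage-point adjustment; the tool's actual model is not disclosed.

```python
# Illustrative sketch (assumed formulas): reproducing the projection
# figures from the examiner's resolved-case counts shown on this page.

granted = 8    # from "8 granted / 12 resolved"
resolved = 12

# Career allow rate -> headline grant probability
allow_rate = granted / resolved  # 0.666..., displayed as 67%

# Interview lift is reported as -11.4 percentage points; the
# "With Interview" figure is the base rate plus that lift.
interview_lift_pp = -11.4
with_interview = round(allow_rate * 100 + interview_lift_pp)  # ~55

print(f"Grant probability: {allow_rate:.0%}")   # Grant probability: 67%
print(f"With interview: {with_interview}%")     # With interview: 55%
```

Under these assumptions, 8/12 rounds to the displayed 67%, and 66.7 - 11.4 rounds to the displayed 55%, so the page's numbers are internally consistent.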
