Prosecution Insights
Last updated: April 19, 2026
Application No. 18/213,211

CAMERA-ALIGNMENT-BASED FAULT DETECTION FOR PHYSICAL COMPONENTS

Final Rejection — §102, §103
Filed
Jun 22, 2023
Examiner
ZAK, JACQUELINE ROSE
Art Unit
2666
Tech Center
2600 — Communications
Assignee
Apple Inc.
OA Round
2 (Final)
67%
Grant Probability
Favorable
3-4
OA Rounds
2y 10m
To Grant
55%
With Interview

Examiner Intelligence

Grants 67% — above average
67%
Career Allow Rate
8 granted / 12 resolved
+4.7% vs TC avg
Minimal -11% lift
-11.4%
Interview Lift
resolved cases with interview
Typical timeline
2y 10m
Avg Prosecution
46 currently pending
Career history
58
Total Applications
across all art units

Statute-Specific Performance

§101
5.7%
-34.3% vs TC avg
§103
56.3%
+16.3% vs TC avg
§102
21.1%
-18.9% vs TC avg
§112
13.8%
-26.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 12 resolved cases
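For readers who want to reproduce the figures above: each delta follows from the examiner's per-statute allowance rate and an implied Tech Center baseline of about 40% (back-calculated from the page's own numbers, e.g. 56.3 - 16.3 = 40.0; the baseline is an inference, not an official USPTO figure). A minimal sketch:

```python
# Per-statute allowance rates for this examiner, as shown above (percent).
examiner_rate = {"101": 5.7, "102": 21.1, "103": 56.3, "112": 13.8}

# Tech Center baseline implied by the quoted deltas (back-calculated
# assumption, not an official USPTO statistic).
tc_average = 40.0

# Delta vs. Tech Center average, matching the "+/- vs TC avg" labels.
deltas = {statute: round(rate - tc_average, 1)
          for statute, rate in examiner_rate.items()}
```

Running this reproduces the four labels shown above (-34.3, -18.9, +16.3, -26.2).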

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-10 are pending for examination in the application filed 06/22/2023.

Priority

Acknowledgement is made of Applicant's claim to priority of provisional applications 63/409,485, 63/409,480, 63/409,496, 63/409,487, 63/409,474, 63/409,482, 63/409,490, and 63/409,478, filing date 09/23/2022.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 02/02/2026 has been considered by the examiner.

Response to Arguments

Applicant's arguments filed 02/05/2026 have been fully considered but they are not persuasive. Applicant argues on pages 3-4 of the Remarks that Henry does not teach the limitation "in accordance with a determination that a first set of one or more criteria are met, determining that an alignment of the first camera with respect to the second camera has not changed…and in accordance with a determination that a second set of one or more criteria are met, determining that an alignment of the first camera with respect to the second camera has changed".
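As an illustration only, the disputed limitation has a two-branch conditional structure. In the sketch below, `locate_artifact` and the tolerance are hypothetical stand-ins; they come from neither the application nor Henry, and a single positional check stands in for each "set of one or more criteria":

```python
# Illustrative sketch of the two-branch structure of the disputed
# limitation. locate_artifact() and tol are hypothetical placeholders.
def alignment_changed(first_image, second_image, locate_artifact, tol=2.0):
    """Decide whether the alignment of the first camera with respect to
    the second camera has changed, based on where the light artifact
    appears in each image."""
    ax, ay = locate_artifact(first_image)
    bx, by = locate_artifact(second_image)
    # First set of criteria: artifact locations agree across the images
    # -> determine that the relative alignment has NOT changed.
    if abs(ax - bx) <= tol and abs(ay - by) <= tol:
        return False
    # Second, different set of criteria: the locations disagree
    # -> determine that the relative alignment HAS changed.
    return True
```

The claim language itself leaves the content of each criteria set open; the dispute is over whether Henry's pitch/yaw and roll determinations map onto the two branches.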
As stated on page 3 of the Non-Final Office Action filed 11/06/2025: Henry teaches… in accordance with a determination that a first set of one or more criteria are met (pitch and/or yaw alignment), determining that an alignment of the first camera with respect to the second camera has not changed, wherein the first set of one or more criteria includes a criterion that is based on identifying a location of an artifact corresponding to the light in the first image and the second image; and in accordance with a determination that a second set of one or more criteria are met (roll misalignment), determining that an alignment of the first camera with respect to the second camera has changed, wherein the second set of one or more criteria is different from the first set of one or more criteria ([col. 5 ln. 21-35] Misalignments are identified via comparison of the received images, to each other, and potentially with respect to a display reference point. In one embodiment, an alignment correction 1206, 1208 for each of the first camera and second camera is determined with respect to pitch and/or yaw. Likewise, an alignment correction 1210, 1212 for each of the first camera and second camera is determined with respect to roll. In at least one embodiment, an operator or calibration processor determines 1214 if application of the determined 1206, 1208, 1210, 1212 corrections causes a new misalignment. If so, the process repeats iteratively until the images from the first camera and second camera are aligned with respect to the projected 1200 lasers within an acceptable threshold).

In Henry, when the criteria of pitch and/or yaw alignment is met, it is determined that the alignment of the first camera with respect to the second camera has not changed, and when the criteria of roll misalignment is met, it is determined that the alignment of the first camera with respect to the second camera has changed. For further clarification, Henry states: "Referring to FIG. 7, a view during a step of a calibration process according to an exemplary embodiment is shown. Because the pitch, yaw, and roll axes of each of the left camera and right camera are related, adjustments to correct misalignments in roll axes are likely to create misalignments in terms of pitch and/or yaw. Therefore, in a third step, new misalignments in the left camera and right camera are iteratively identified and new adjustments in terms of pitch, yaw, and roll are iteratively determined to re-align the central convergence points of the overlapping views 700, 702" [col. 4 ln. 18-28]. Therefore, when a correction is made in one axis, the system evaluates whether the other axes have been affected.

Henry further states: "In at least one embodiment, an operator or calibration processor determines 1214 if application of the determined 1206, 1208, 1210, 1212 corrections causes a new misalignment. If so, the process repeats iteratively until the images from the first camera and second camera are aligned with respect to the projected 1200 lasers within an acceptable threshold" [col. 5 ln. 29-35]. For example, if correction is first performed in the roll axes and it is determined that a new misalignment has not been caused in the pitch/yaw axes (the criteria of pitch and/or yaw alignment is met), then the alignment of the first camera with respect to the second camera has not changed, because a new misalignment was not caused by the correction and no additional correction takes place.

Applicant further argues that Henry fails to disclose an alignment of a first camera with respect to a second camera because Henry determines alignment on a per-camera basis. However, in Henry's stereoscopic camera system, it is critical for the alignment of the first camera to be determined with respect to the second camera because stereo depth depends on relative geometry: "Referring to FIG. 4, a view as seen through a misaligned stereoscopic camera system is shown. In a misaligned stereoscopic camera system, an image from the right camera 400 and an image from the left camera 402 are misaligned both in terms of horizontal alignment and convergence point" [col. 3 ln. 56-61] and "Because the pitch, yaw, and roll axes of each of the left camera and right camera are related, adjustments to correct misalignments in roll axes are likely to create misalignments in terms of pitch and/or yaw" [col. 4 ln. 20-23].

Therefore, the 35 U.S.C. 102(a)(2) rejections of claims 1-2, 5, and 8-10 and the 35 U.S.C. 103 rejections of claims 3-4 and 6-7 as described in the Non-Final Office Action filed 11/06/2025 are maintained.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 5, and 8-10 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Henry (US11360375B1).
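The iterative correction process Henry describes at col. 5 ln. 21-35 — correct pitch/yaw and roll, check whether the corrections introduced a new misalignment, and repeat until within an acceptable threshold — can be sketched roughly as follows. `measure_misalignment` and `apply_correction` are hypothetical stand-ins for the patent's operator or calibration processor, not functions disclosed in Henry:

```python
# Rough sketch of Henry's iterative calibration loop (col. 5 ln. 21-35).
# measure_misalignment() and apply_correction() are hypothetical helpers.
def calibrate(cameras, measure_misalignment, apply_correction, threshold=0.01):
    """Correct pitch/yaw and roll for each camera, repeating because a
    correction on one axis can create a new misalignment on another,
    until residual error is within an acceptable threshold."""
    while True:
        residual = 0.0
        for axis in ("pitch_yaw", "roll"):
            for camera in cameras:
                error = measure_misalignment(camera, axis)
                apply_correction(camera, axis, error)
                residual = max(residual, abs(error))
        if residual <= threshold:  # aligned within tolerance; stop iterating
            return cameras
```

The outer `while` loop is the point the examiner leans on: whether re-checking after a roll correction finds the pitch/yaw criteria still satisfied determines whether any further correction happens.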
Regarding claim 1, Henry teaches a method, comprising: causing output, via a light source, of light ([col. 1 ln. 19-23] In one aspect, embodiments of the inventive concepts disclosed herein are directed to a system and method for calibrating stereoscopic cameras in an aircraft. A plurality of lasers are aimed at a desired convergence point, such as at a point on a runway or other ground facility); after causing output of the light: receiving, via a first camera, a first image of a physical environment; and receiving, via a second camera, a second image of the physical environment ([col. 5 ln. 15-19] Referring to FIG. 12, a flowchart of an exemplary embodiment is shown. A plurality of lasers are projected 1200 at a convergence point within the field of view of a stereoscopic camera system. Images from each of a first camera and second camera are received 1202, 1204); and in response to receiving the first image or the second image: in accordance with a determination that a first set of one or more criteria are met (pitch and/or yaw alignment), determining that an alignment of the first camera with respect to the second camera has not changed, wherein the first set of one or more criteria includes a criterion that is based on identifying a location of an artifact corresponding to the light in the first image and the second image; and in accordance with a determination that a second set of one or more criteria are met (roll misalignment), determining that an alignment of the first camera with respect to the second camera has changed, wherein the second set of one or more criteria is different from the first set of one or more criteria ([col. 5 ln. 21-35] Misalignments are identified via comparison of the received images, to each other, and potentially with respect to a display reference point. In one embodiment, an alignment correction 1206, 1208 for each of the first camera and second camera is determined with respect to pitch and/or yaw. Likewise, an alignment correction 1210, 1212 for each of the first camera and second camera is determined with respect to roll. In at least one embodiment, an operator or calibration processor determines 1214 if application of the determined 1206, 1208, 1210, 1212 corrections causes a new misalignment. If so, the process repeats iteratively until the images from the first camera and second camera are aligned with respect to the projected 1200 lasers within an acceptable threshold).

Regarding claim 2, Henry teaches the method of claim 1. Henry further teaches wherein the light is collimated light of a single wavelength (lasers 206, 208, 210).

Regarding claim 5, Henry teaches the method of claim 1. Henry further teaches in response to determining that the alignment of the first camera with respect to the second camera has changed, causing the first camera or the second camera to move ([col. 3 ln. 5-7] Each of the stereoscopic cameras are adjusted in terms of pitch, yaw, and roll to align the convergence point in each camera. Misalignments are iteratively corrected).

Regarding claim 8, Henry teaches the method of claim 1. Henry further teaches wherein the first set of one or more criteria includes a second criterion (pitch and/or yaw alignment), different from the criterion, that is based on identifying a second location of a second artifact, different from the artifact (artifact from second laser), corresponding to the light in the first image and the second image ([col. 5 ln. 15-35] Referring to FIG. 12, a flowchart of an exemplary embodiment is shown. A plurality of lasers are projected 1200 at a convergence point within the field of view of a stereoscopic camera system. Images from each of a first camera and second camera are received 1202, 1204; for example, at a calibration display or by a calibration processor. Misalignments are identified via comparison of the received images, to each other, and potentially with respect to a display reference point. In one embodiment, an alignment correction 1206, 1208 for each of the first camera and second camera is determined with respect to pitch and/or yaw. Likewise, an alignment correction 1210, 1212 for each of the first camera and second camera is determined with respect to roll. In at least one embodiment, an operator or calibration processor determines 1214 if application of the determined 1206, 1208, 1210, 1212 corrections causes a new misalignment. If so, the process repeats iteratively until the images from the first camera and second camera are aligned with respect to the projected 1200 lasers within an acceptable threshold).

Regarding claim 9, Henry teaches a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device that is in communication with a light source, a first camera, and a second camera, the one or more programs including instructions for ([col. 4 ln. 54-62] Referring to FIG. 11, a block diagram of an exemplary embodiment of a stereoscopic calibration system is shown. The system comprises a processor 1100, non-transitory computer readable medium having a memory 1102 connected to the processor 1100 for storing processor executable code, and a left camera 1104 and right camera 1106 connected to the processor 1100. The left camera 1104 and right camera 1106 are calibrated with respect to one or more laser 1108 projections): causing output, via the light source, of light; after causing output of the light: receiving, via the first camera, a first image of a physical environment; and receiving, via the second camera, a second image of the physical environment ([col. 5 ln. 15-19] Referring to FIG. 12, a flowchart of an exemplary embodiment is shown. A plurality of lasers are projected 1200 at a convergence point within the field of view of a stereoscopic camera system. Images from each of a first camera and second camera are received 1202, 1204); and in response to receiving the first image or the second image: in accordance with a determination that a first set of one or more criteria are met (pitch and/or yaw alignment), determining that an alignment of the first camera with respect to the second camera has not changed, wherein the first set of one or more criteria includes a criterion that is based on identifying a location of an artifact corresponding to the light in the first image and the second image; and in accordance with a determination that a second set of one or more criteria are met (roll misalignment), determining that an alignment of the first camera with respect to the second camera has changed, wherein the second set of one or more criteria is different from the first set of one or more criteria ([col. 5 ln. 21-35] Misalignments are identified via comparison of the received images, to each other, and potentially with respect to a display reference point. In one embodiment, an alignment correction 1206, 1208 for each of the first camera and second camera is determined with respect to pitch and/or yaw. Likewise, an alignment correction 1210, 1212 for each of the first camera and second camera is determined with respect to roll. In at least one embodiment, an operator or calibration processor determines 1214 if application of the determined 1206, 1208, 1210, 1212 corrections causes a new misalignment. If so, the process repeats iteratively until the images from the first camera and second camera are aligned with respect to the projected 1200 lasers within an acceptable threshold).

Regarding claim 10, Henry teaches an electronic device, comprising: a light source; a first camera; a second camera; one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for ([col. 4 ln. 54-62] Referring to FIG. 11, a block diagram of an exemplary embodiment of a stereoscopic calibration system is shown. The system comprises a processor 1100, non-transitory computer readable medium having a memory 1102 connected to the processor 1100 for storing processor executable code, and a left camera 1104 and right camera 1106 connected to the processor 1100. The left camera 1104 and right camera 1106 are calibrated with respect to one or more laser 1108 projections): causing output, via the light source, of light; after causing output of the light: receiving, via the first camera, a first image of a physical environment; and receiving, via the second camera, a second image of the physical environment ([col. 5 ln. 15-19] Referring to FIG. 12, a flowchart of an exemplary embodiment is shown. A plurality of lasers are projected 1200 at a convergence point within the field of view of a stereoscopic camera system. Images from each of a first camera and second camera are received 1202, 1204); and in response to receiving the first image or the second image: in accordance with a determination that a first set of one or more criteria are met (pitch and/or yaw alignment), determining that an alignment of the first camera with respect to the second camera has not changed, wherein the first set of one or more criteria includes a criterion that is based on identifying a location of an artifact corresponding to the light in the first image and the second image; and in accordance with a determination that a second set of one or more criteria are met (roll misalignment), determining that an alignment of the first camera with respect to the second camera has changed, wherein the second set of one or more criteria is different from the first set of one or more criteria ([col. 5 ln. 21-35] Misalignments are identified via comparison of the received images, to each other, and potentially with respect to a display reference point. In one embodiment, an alignment correction 1206, 1208 for each of the first camera and second camera is determined with respect to pitch and/or yaw. Likewise, an alignment correction 1210, 1212 for each of the first camera and second camera is determined with respect to roll. In at least one embodiment, an operator or calibration processor determines 1214 if application of the determined 1206, 1208, 1210, 1212 corrections causes a new misalignment. If so, the process repeats iteratively until the images from the first camera and second camera are aligned with respect to the projected 1200 lasers within an acceptable threshold).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Henry in view of Vilaca (Vilaça, João L., Jaime C. Fonseca, and António M. Pinho. "Calibration procedure for 3D measurement systems using two cameras and a laser line." Optics & Laser Technology 41.2 (2009): 112-119).

Regarding claim 3, Henry teaches the method of claim 1. Vilaca, in the same field of endeavor of determining camera alignment, teaches wherein the location of the artifact corresponding to the light in the first image and the second image is aligned to an edge of a field of view of the first camera or the second camera.
[Greyscale figure from Vilaca omitted]

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Henry with the teachings of Vilaca to align the artifact to the edge of a field of view because "the relative angle of the cameras to the laser plane…[causes] the horizontal length of the vision field at the top of the image [to be] smaller than the horizontal length of the vision field at the bottom of the image" [Vilaca pg. 114 para. 1] and "the points acquired from each image will have to correspond to the horizontal limits of the selected vision field" and "the points acquired from each image will have to correspond to the vertical limits of the selected vision field" [Vilaca pg. 115 para. 5 and 6].

Regarding claim 4, Henry teaches the method of claim 1. Vilaca teaches in response to determining that the alignment of the first camera with respect to the second camera has changed, instructing one or more models to compensate for the alignment ([pg. 113 para. 9] The laser is located at the same physical distance from both cameras, and the cameras are oriented at 30° to the horizontal line. Although the system is mechanically adjusted (distance between cameras and laser, angles between cameras and laser line), the acquired images of both cameras showed that, in the laser line projection plane, the system was far from being calibrated).

[Two greyscale figures from Vilaca omitted]

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Henry with the teachings of Vilaca to use a model to compensate for the alignment with "a calibration procedure…which calibrates each image, asserting that a point P10 in O1 is equal to a point P20 in O2 on the laser plane, eliminating the radial distortion caused by the lenses and image dead zones" [Vilaca pg. 114 para. 2].

Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Henry in view of Wu (US20190246036A1).

Regarding claim 6, Henry teaches the method of claim 1. Wu, in the same field of endeavor of object recognition, teaches before receiving the first image or the second image, receiving, via the first camera, a third image of the physical environment; and in response to receiving the third image, performing an object recognition operation using the third image ([0086] In operation 810, the gaze detection module 660 estimates a gaze point of a driver using an internal sensor (e.g., the image sensor 140). For example, the driver may focus on an object to be photographed. In operation 820, the gesture detection module 665 detects a gesture of the driver using the internal sensor. For example, the driver may mime pressing a camera shutter using the gesture shown in FIG. 4, the gesture shown in FIG. 5, or another gesture. [0055] The diagram 520 may indicate an intermediate step of image processing in gesture recognition. [0048] Each image sensor may be a camera, a CCD, an image sensor array, a depth camera, or any suitable combination thereof. [0095] In operation 930, the image acquisition module 670 tracks a target object identified based on the driver's gaze. For example, a first image may be captured using the camera 220 for processing by an object recognition algorithm. If the driver's gaze point is within a depicted recognized object, that object may be determined to be the target object for image acquisition. Additional images that include the identified object may be captured by the camera 220 and processed to determine a path of relative motion between the object and the vehicle. Using the determined path of relative motion, the direction and depth of focus of the camera 220 may be adjusted so that a following acquired image, acquired in operation 940, is focused on the identified object).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Henry with the teachings of Wu to receive a third image and perform object recognition before receiving the first and second image because "By using gaze direction detection (and, as in alternative embodiments, head pose direction detection or gaze point detection) to identify the region to be photographed and a hand gesture to cause the image capture, the system enables the photograph to be captured without the driver having to hold a cell phone, reducing the distraction to the driver" [Wu 0043].

Regarding claim 7, Henry teaches the method of claim 1. Wu teaches performing an object recognition operation using the first image or the second image ([0095] In operation 930, the image acquisition module 670 tracks a target object identified based on the driver's gaze. For example, a first image may be captured using the camera 220 for processing by an object recognition algorithm. If the driver's gaze point is within a depicted recognized object, that object may be determined to be the target object for image acquisition. Additional images that include the identified object may be captured by the camera 220 and processed to determine a path of relative motion between the object and the vehicle. Using the determined path of relative motion, the direction and depth of focus of the camera 220 may be adjusted so that a following acquired image, acquired in operation 940, is focused on the identified object). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Henry with the teachings of Wu to perform object recognition on the first or second image because "The view 310 may include a representation of multiple objects at varying distances from the vehicle" [Wu 0053].

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacqueline R Zak whose telephone number is (571) 272-4077. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACQUELINE R ZAK/
Examiner, Art Unit 2666

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Jun 22, 2023
Application Filed
Nov 02, 2025
Non-Final Rejection — §102, §103
Feb 05, 2026
Response Filed
Feb 26, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586340
PIXEL PERSPECTIVE ESTIMATION AND REFINEMENT IN AN IMAGE
2y 5m to grant Granted Mar 24, 2026
Patent 12462343
MEDICAL DIAGNOSTIC APPARATUS AND METHOD FOR EVALUATION OF PATHOLOGICAL CONDITIONS USING 3D OPTICAL COHERENCE TOMOGRAPHY DATA AND IMAGES
2y 5m to grant Granted Nov 04, 2025
Patent 12373946
ASSAY READING METHOD
2y 5m to grant Granted Jul 29, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
55%
With Interview (-11.4%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
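The arithmetic behind these projections appears to follow directly from the page's stated derivation ("grant probability derived from career allow rate") and the quoted -11.4% interview lift; the sketch below mirrors that, and is not an official USPTO statistic:

```python
# Headline grant probability = career allow rate (8 granted of 12 resolved).
granted, resolved = 8, 12
grant_probability = granted / resolved               # 0.666... -> "67%"

# With-interview figure applies the quoted -11.4% interview lift.
interview_lift = -0.114
with_interview = grant_probability + interview_lift  # ~0.553 -> "55%"
```

Note the small sample: with only 12 resolved cases, both percentages carry wide uncertainty.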
