Prosecution Insights
Last updated: April 19, 2026
Application No. 18/885,884

CALIBRATING CAMERA IN ELECTRONIC DEVICE

Non-Final OA: §102, §103, Double Patenting
Filed: Sep 16, 2024
Examiner: PRABHAKHER, PRITHAM DAVID
Art Unit: 2638
Tech Center: 2600 (Communications)
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (511 granted / 650 resolved); +16.6% vs Tech Center average (above average)
Interview Lift: +26.1% higher allowance among resolved cases with an interview (strong)
Avg Prosecution: 2y 3m
Currently Pending: 14
Total Applications: 664 (across all art units)
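The headline figures above are internally consistent and can be recomputed from the raw counts. A quick sketch (rounding to the nearest whole percent is an assumption about the dashboard's convention):

```python
# Recompute the examiner panel's headline statistics from its raw counts:
# 511 granted of 650 resolved, 664 total applications.
granted, resolved, total = 511, 650, 664

allow_rate = granted / resolved * 100   # career allow rate
pending = total - resolved              # applications not yet resolved

print(round(allow_rate))  # -> 79, matching the "79% Career Allow Rate"
print(pending)            # -> 14, matching "14 currently pending"
```

Note that 664 total minus 650 resolved exactly reproduces the 14 pending applications shown in the panel.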

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 41.6% (+1.6% vs TC avg)
§102: 31.6% (-8.4% vs TC avg)
§112: 16.3% (-23.7% vs TC avg)
Tech Center averages are estimates; figures based on career data from 650 resolved cases.
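Subtracting each statute's delta from its rate recovers the implied Tech Center baseline, and every row implies the same 40.0% baseline — consistent with a single blended estimate rather than per-statute averages. A quick check:

```python
# Recover the implied Tech Center baseline for each statute:
# examiner_rate - delta_vs_tc_avg = tc_average.
rows = {
    "101": (4.3, -35.7),
    "103": (41.6, +1.6),
    "102": (31.6, -8.4),
    "112": (16.3, -23.7),
}
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(baselines)  # every statute implies the same 40.0 baseline
```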

Office Action

Rejections: §102, §103, nonstatutory double patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 09/16/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq.
for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-4, 6-7, 9, 11-17 and 20 of U.S. Patent No. 12,094,171. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are broader than and fully encompassed by the claims of the US Patent.

Instant application 18/885,884 (compared against U.S. Patent No. 12,094,171 B2), instant claim 1:
A method performed by a server, the method comprising: receiving, from a first computing device, multiple captured images that were captured by a camera, the multiple captured images including instances of a calibration image presented by a display included in a second computing device, the calibration image being a representation of a calibration image file; and based on the multiple captured images, calibrating the camera.

Instant claim 2: The method of claim 1, wherein the first computing device captured the multiple captured images with the camera by executing an image capturing function included in a webpage sent by the server to the first computing device.

Instant claim 5: The method of claim 1, further comprising: receiving, from the second computing device, a request for a second webpage; and sending, to the second computing device, the second webpage, the second webpage including the calibration image file.

Patent claim 1: A method performed by a server, the method comprising: sending a first webpage to a first computing device, the first computing device including a camera, the first webpage including an image-capturing function and including an instruction for a user to obtain a second webpage via a second computing device, the second webpage including a calibration image file; receiving, from the first computing device, multiple captured images that were captured by the camera, the multiple captured images including instances of a calibration image presented by a display included in the second computing device, the calibration image being a representation of the calibration image file; and based on the multiple captured images, calibrating the camera.

Patent claim 5: The method of claim 1, further comprising: receiving, from the second computing device, a request for the second webpage; and sending, to the second computing device, the second webpage.

Dependent Claims 3-4 and 6 correspond to dependent claims 3-4 and 6 of the US Patent.

Instant application 18/885,884 vs. U.S. Patent No. 12,094,171 B2, instant claim 7:
A method performed by a server, the method comprising: receiving, from a first computing device, multiple captured images of a calibration image presented by a second computing device, the calibration image being based on a calibration image file, the multiple captured images of the calibration image having been captured by a camera included in the first computing device; and calibrating the camera based on the multiple captured images.

Instant claim 8: The method of claim 7, further comprising sending the calibration image file to the second computing device.

Instant claim 9: The method of claim 7, further comprising sending the calibration image file to the second computing device in response to receiving, from the second computing device, a request for the calibration image file.

Patent claim 7: A method performed by a server, the method comprising: sending, to a first computing device, content including a prompt to cause a second computing device to request a calibration image file; receiving, from the second computing device, a request for the calibration image file; sending, in response to receiving the request for the calibration image file, the calibration image file; receiving, from the first computing device, multiple captured images of a calibration image presented by the second computing device, the calibration image being based on the calibration image file, the multiple captured images of the calibration image having been captured by a camera included in the first computing device; and calibrating the camera based on the multiple captured images.

Dependent Claims 10-14 correspond to dependent claims 9 and 11-14 of the US Patent.

Instant application 18/885,884 vs. U.S. Patent No. 12,094,171 B2, instant claim 15:
A method comprising: capturing, by a first computing device via a camera included in the first computing device, multiple captured images of a calibration image, the calibration image being presented by a second computing device after requesting content from a specific resource locator; and sending the multiple captured images to a remote computing device.

Instant claim 16: The method of claim 15, further comprising presenting a webpage including an instruction to request content from the specific resource locator.

Instant claim 17: The method of claim 15, further comprising receiving, from the remote computing device, a webpage including an instruction to request the content from the specific resource locator.

Patent claim 15: A method comprising: receiving, by a first computing device from a remote computing device, a webpage, the webpage including an instruction to request content from a specific Universal Resource Locator (URL); presenting the webpage including presenting the instruction to request the content from the specific URL; capturing, via a camera included in the first computing device, multiple captured images of a calibration image, the calibration image being presented by a second computing device after requesting the content from the specific URL; and sending the multiple captured images to the remote computing device.

Dependent Claims 18-20 correspond to dependent claims 16-17 and 20 of the US Patent.

[No prior art was found for dependent claim 11 and its dependents 12-13, or for dependent claim 18 and its dependents 19-20, as currently written.]

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

3.) Claim(s) 1, 3, 7-9 and 15 is/are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Doganis (US Pub No.: 2020/0005491A1).

With regard to Claim 1, Doganis teaches of a method performed by a server (A computer-implemented method of calibrating a camera, Abstract; Figure 2), the method comprising: receiving, from a first computing device (Monitor 102 and camera 101 make up the first computing device, Paragraphs 0064-0066; Figure 2), multiple captured images that were captured by a camera (Camera 101 acquires a series of images, and converts them to a digital video stream.
Then, computer 103 acquires the video stream from the camera, Paragraph 0065 and Figure 2), the multiple captured images including instances of a calibration image presented by a display (display screen 100) included in a second computing device (display screen 100 of a hand held device 104) the calibration image being a representation of a calibration image file (This calibration pattern is displayed by a screen 100, such as a liquid crystal display. The screen 100 used to display the dynamic and adaptive calibration pattern is part of a hand-held device 104, such as a tablet computer. A user carries this hand-held device and moves it within the visual field of camera 101, helped by the visual feedback provided by monitor 102. Computer 103 extracts a series of images from the video stream generated by the camera 101 and feeds them to the calibration algorithm. This extraction may be performed automatically by the computer, e.g. at fixed times, or may be triggered by a user e.g. by pressing a key of a keyboard connected to the computer 103, Paragraphs 0064-0066; Figures 1-3); and based on the multiple captured images, calibrating the camera (The camera 101 is calibrated based on the acquired video streams, Paragraphs 0064-0066; Claim 1; Figures 1-3). Regarding Claim 3, Doganis discloses the method of claim 1, wherein the multiple captured images include instances of the calibration image presented by the display at different orientations with respect to the camera (The calibration image is held at different distances (orientations) with respect to the camera, Paragraphs 0069-0070; Claims 1-4). 
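The "calibrating the camera" step that Doganis is cited for ultimately estimates intrinsic parameters of the kind recited later for claim 4 (a focal length, an aspect ratio, and two offsets). A minimal pinhole-projection sketch, with illustrative values not drawn from either reference, shows what those parameters do:

```python
# Pinhole-model sketch of the intrinsic parameters that camera calibration
# solves for: a focal length, an aspect ratio, and two principal-point
# offsets. All numeric values below are illustrative assumptions.
def project(f, aspect, cx, cy, x, y, z):
    """Project a 3-D camera-frame point (x, y, z) to 2-D pixel coordinates."""
    u = f * x / z + cx           # focal length scales x; cx is the first offset
    v = f * aspect * y / z + cy  # aspect ratio scales the vertical axis; cy is the second offset
    return u, v

# A point 2 m in front of the camera and 1 m to the right:
print(project(f=800.0, aspect=1.0, cx=320.0, cy=240.0, x=1.0, y=0.0, z=2.0))
# -> (720.0, 240.0)
```

Calibration runs this mapping in reverse: given many image points of a known pattern at several poses, it solves for f, aspect, cx, and cy (plus pose), which is why multiple captured images of the pattern are needed.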
In regard to Claim 7, Doganis teaches of a method performed by a server (A computer-implemented method of calibrating a camera, Abstract; Figure 2), the method comprising: receiving, from a first computing device (Monitor 102 and camera 101 make up the first computing device, Paragraphs 0064-0066; Figure 2), multiple captured images of a calibration image presented by a second computing device (Screen 100 that is part of a hand held device 104), the calibration image being based on a calibration image file (This calibration pattern is displayed by a screen 100, such as a liquid crystal display. The screen 100 used to display the dynamic and adaptive calibration pattern is part of a hand-held device 104, such as a tablet computer. A user carries this hand-held device and moves it within the visual field of camera 101, helped by the visual feedback provided by monitor 102. Computer 103 extracts a series of images from the video stream generated by the camera 101 and feeds them to the calibration algorithm. This extraction may be performed automatically by the computer, e.g. at fixed times, or may be triggered by a user e.g. by pressing a key of a keyboard connected to the computer 103, Paragraphs 0064-0066; Figures 1-3), the multiple captured images of the calibration image having been captured by a camera included in the first computing device (As mentioned above, monitor 102 and camera 101 make up the first computing device, Paragraphs 0064-0066; Figure 2. Camera 101 acquires a series of images, and converts them to a digital video stream. Then, computer 103 acquires the video stream from the camera, Paragraph 0065 and Figure 2); and calibrating the camera based on the multiple captured images (The camera 101 is calibrated based on the acquired video streams, Paragraphs 0064-0066; Claim 1; Figures 1-3). 
With regard to Claim 8, Doganis teaches of the method of claim 7, further comprising sending the calibration image file to the second computing device (Computer 103 controls the displaying of pattern 1000 by the screen 100 of the hand-held device 104. In other words, computer 103 acts as a “master” and device 104 as a “slave”, Paragraph 0066; Figure 2). Regarding Claim 9, Doganis teaches of the method of claim 7, further comprising sending the calibration image file to the second computing device in response to receiving, from the second computing device (104), a request for the calibration image file (104 carrying the screen 100 could be the master and the computer 103 the slave) (The screen 100 used to display the dynamic and adaptive calibration pattern is part of a hand-held device 104, such as a tablet computer. A user carries this hand-held device and moves it within the visual field of camera 101, helped by the visual feedback provided by monitor 102. Computer 103 extracts a series of images from the video stream generated by the camera 101 and feeds them to the calibration algorithm. This extraction may be performed automatically by the computer, e.g. at fixed times, or may be triggered by a user e.g. by pressing a key of a keyboard connected to the computer 103. Moreover, computer 103 controls the displaying of pattern 1000 by the screen 100 of the hand-held device 104. In other words, computer 103 acts as a “master” and device 104 as a “slave”. However, the device 104 carrying the screen 100 could be the master and computer 103 the slave, Paragraphs 0066-0067 and Figure 2). 
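The claim 9 limitation describes a simple request/response exchange: the second computing device asks for the calibration image file and the server sends it back. A schematic sketch, with a hypothetical path and payload (nothing here comes from Doganis):

```python
# Schematic sketch of the claim 9 exchange: the second computing device
# requests the calibration image file, and the server sends it in response.
# The path and byte payload are hypothetical placeholders.
CALIBRATION_FILES = {"/calibration-pattern.png": b"<calibration image bytes>"}

def handle_request(path):
    """Server side: return (status, body) for a calibration-file request."""
    if path in CALIBRATION_FILES:
        return 200, CALIBRATION_FILES[path]  # send the calibration image file
    return 404, b""                          # anything else is not served

status, body = handle_request("/calibration-pattern.png")
print(status)  # 200: the file goes back to the requesting (second) device
```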
In regard to Claim 15, Doganis teaches of a method (A computer-implemented method of calibrating a camera, Abstract; Figure 2), comprising: capturing, by a first computing device via a camera included in the first computing device (Monitor 102 and camera 101 make up the first computing device, Paragraphs 0064-0066; Figure 2), multiple captured images of a calibration image (Camera 101 acquires a series of images, and converts them to a digital video stream. Then, computer 103 acquires the video stream from the camera, Paragraph 0065 and Figure 2), the calibration image being presented by a second computing device (104 with screen 100) after requesting content from a specific resource locator (This calibration pattern is displayed by a screen 100, such as a liquid crystal display. The screen 100 used to display the dynamic and adaptive calibration pattern is part of a hand-held device 104, such as a tablet computer. A user carries this hand-held device and moves it within the visual field of camera 101, helped by the visual feedback provided by monitor 102. Computer 103 extracts a series of images from the video stream generated by the camera 101 and feeds them to the calibration algorithm. This extraction may be performed automatically by the computer, e.g. at fixed times, or may be triggered by a user e.g. by pressing a key of a keyboard connected to the computer 103, Paragraphs 0064-0066; Figures 1-3); and sending the multiple captured images to a remote computing device (Then, computer 103 acquires the video stream from the camera, Paragraphs 0065, 0069 and Figure 2).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 4.) Claim(s) 2, 5 and 16-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Doganis (US Pub No.: 2020/0005491A1) as applied to claims 1 and 15 above, and further in view of Carrafa et al. (US Pub No.: 2017/0202450A1). Regarding Claim 2, Doganis does not explicitly teach of the method of claim 1, wherein the first computing device captured the multiple captured images with the camera by executing an image capturing function included in a webpage sent by the server to the first computing device. Carrafa et al. teach of a first computing device that captures multiple images with a camera by executing an image capturing function included in a webpage sent by a server to the first computing device, (Carrafa et al. 
teach of a process for conducting an eye examination using a mobile device, the process comprising: capturing a first image of an object using a camera of a mobile device set to a fixed focusing distance; determining, with reference to the first image, an absolute size of the object; capturing a second image of the object using the camera of the mobile device; determining, with reference to the second image, a distance from the mobile device to the object; providing an indication via the mobile device to move the mobile device relative to the object; and receiving input from the mobile device in response to an eye examination program, Abstract and Claim 1 of Carrafa et al. . Calibration of the camera on the mobile device may be carried out according to any methods known to a person of ordinary skill in the art. According to one or more embodiments, calibration requires images of the calibration pattern from multiple angles to determine camera properties, Paragraphs 0026-0028 of Carrafa et al.. In the case where the calibration pattern is displayed on a computer screen, the mobile device can be linked to a web page or application running on the computer such that the mobile device can be used to control the application on the computer. This can be helpful for guiding the user through the calibration process. The server 110 exchanges data with the first and second devices 120 and 130. This data may be exchanged through an installed program in the first or second device 120 or 130, or through a web page loaded on the first or second device 120 or 130. The output display 170 of the second device 130 may be used to display a calibration pattern. The second device 130, as shown in FIG. 1, is Internet-enabled, and the various patterns, images, or testing material displayed may be provided through a webpage, in response to output from the first device 120, Paragraphs 0032-0037, 0040 and Figure 1 of Carrafa et al.. 
It would have been obvious and well-known to one of ordinary skill in the art before the effective filing date of the claimed invention, to enable the teachings of Doganis to have the first computing device that captures multiple images with a camera by executing an image capturing function included in a webpage sent by a server to the first computing device as taught by Carrafa et al., because it helps serve as a guide to the user during the calibration process, Paragraph 0032 of Carrafa et al.). In regard to Claim 5, Doganis does not explicitly disclose the method of claim 1, further comprising: receiving, from the second computing device (hand held device), a request for a second webpage; and sending, to the second computing device, the second webpage, the second webpage including the calibration image file. Carrafa et al. teach of a hand held device (second computing device) that has a request for a webpage and sending, to the second computing device, the second webpage, the second webpage including the calibration image file, (Carrafa et al. teach of a process for conducting an eye examination using a mobile device, the process comprising: capturing a first image of an object using a camera of a mobile device set to a fixed focusing distance; determining, with reference to the first image, an absolute size of the object; capturing a second image of the object using the camera of the mobile device; determining, with reference to the second image, a distance from the mobile device to the object; providing an indication via the mobile device to move the mobile device relative to the object; and receiving input from the mobile device in response to an eye examination program, Abstract and Claim 1 of Carrafa et al. . Calibration of the camera on the mobile device may be carried out according to any methods known to a person of ordinary skill in the art. 
According to one or more embodiments, calibration requires images of the calibration pattern from multiple angles to determine camera properties, Paragraphs 0026-0028 of Carrafa et al. In the case where the calibration pattern is displayed on a computer screen, the mobile device can be linked to a web page or application running on the computer such that the mobile device can be used to control the application on the computer. This can be helpful for guiding the user through the calibration process. The server 110 exchanges data with the first and second devices 120 and 130. This data may be exchanged through an installed program in the first or second device 120 or 130, or through a web page loaded on the first or second device 120 or 130. The output display 170 of the second device 130 may be used to display a calibration pattern. The second device 130, as shown in FIG. 1, is Internet-enabled, and the various patterns, images, or testing material displayed may be provided through a webpage, in response to output from the first device 120, Paragraphs 0032-0037, 0040 and Figure 1 of Carrafa et al. It would have been obvious and well-known to one of ordinary skill in the art before the effective filing date of the claimed invention, to enable the teachings of Doganis to receive, from the second computing device (hand held device), a request for a second webpage; and sending, to the second computing device, the second webpage, the second webpage including the calibration image file as taught by Carrafa et al., because it helps serve as a guide to the user during the calibration process, Paragraph 0032 of Carrafa et al.).

Regarding Claim 16, Doganis does not explicitly disclose the method of claim 15, further comprising presenting a webpage including an instruction to request content from the specific resource locator. Carrafa et al. teach of presenting a webpage including an instruction to request content from a specific resource locator, (Carrafa et al.
teach of a process for conducting an eye examination using a mobile device, the process comprising: capturing a first image of an object using a camera of a mobile device set to a fixed focusing distance; determining, with reference to the first image, an absolute size of the object; capturing a second image of the object using the camera of the mobile device; determining, with reference to the second image, a distance from the mobile device to the object; providing an indication via the mobile device to move the mobile device relative to the object; and receiving input from the mobile device in response to an eye examination program, Abstract and Claim 1 of Carrafa et al. . Calibration of the camera on the mobile device may be carried out according to any methods known to a person of ordinary skill in the art. According to one or more embodiments, calibration requires images of the calibration pattern from multiple angles to determine camera properties, Paragraphs 0026-0028 of Carrafa et al.. In the case where the calibration pattern is displayed on a computer screen, the mobile device can be linked to a web page or application running on the computer such that the mobile device can be used to control the application on the computer. This can be helpful for guiding the user through the calibration process. The server 110 exchanges data with the first and second devices 120 and 130. This data may be exchanged through an installed program in the first or second device 120 or 130, or through a web page loaded on the first or second device 120 or 130. The output display 170 of the second device 130 may be used to display a calibration pattern. The second device 130, as shown in FIG. 1, is Internet-enabled, and the various patterns, images, or testing material displayed may be provided through a webpage, in response to output from the first device 120, Paragraphs 0032-0037, 0040 and Figure 1 of Carrafa et al.. 
It would have been obvious and well-known to one of ordinary skill in the art before the effective filing date of the claimed invention, to enable the teachings of Doganis to present a webpage including an instruction to request content from a specific resource locator as taught by Carrafa et al., because it helps serve as a guide to the user during the calibration process, Paragraph 0032 of Carrafa et al.). Regarding Claim 17, Doganis does not explicitly disclose the method of claim 15, further comprising receiving, from the remote computing device, a webpage including an instruction to request the content from the specific resource locator. Carrafa et al. teach of receiving, from a remote computing device, a webpage including an instruction to request the content from the specific resource locator, (Carrafa et al. teach of a process for conducting an eye examination using a mobile device, the process comprising: capturing a first image of an object using a camera of a mobile device set to a fixed focusing distance; determining, with reference to the first image, an absolute size of the object; capturing a second image of the object using the camera of the mobile device; determining, with reference to the second image, a distance from the mobile device to the object; providing an indication via the mobile device to move the mobile device relative to the object; and receiving input from the mobile device in response to an eye examination program, Abstract and Claim 1 of Carrafa et al. . Calibration of the camera on the mobile device may be carried out according to any methods known to a person of ordinary skill in the art. According to one or more embodiments, calibration requires images of the calibration pattern from multiple angles to determine camera properties, Paragraphs 0026-0028 of Carrafa et al.. 
In the case where the calibration pattern is displayed on a computer screen, the mobile device can be linked to a web page or application running on the computer such that the mobile device can be used to control the application on the computer. This can be helpful for guiding the user through the calibration process. The server 110 exchanges data with the first and second devices 120 and 130. This data may be exchanged through an installed program in the first or second device 120 or 130, or through a web page loaded on the first or second device 120 or 130. The output display 170 of the second device 130 may be used to display a calibration pattern. The second device 130, as shown in FIG. 1, is Internet-enabled, and the various patterns, images, or testing material displayed may be provided through a webpage, in response to output from the first device 120, Paragraphs 0032-0037, 0040 and Figure 1 of Carrafa et al.. It would have been obvious and well-known to one of ordinary skill in the art before the effective filing date of the claimed invention, to enable the teachings of Doganis to receive, from a remote computing device, a webpage including an instruction to request the content from the specific resource locator as taught by Carrafa et al., because it helps serve as a guide to the user during the calibration process, Paragraph 0032 of Carrafa et al.). 5.) Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Doganis (US Pub No.: 2020/0005491A1) as applied to claim 1 above, and further in view of Eldar (US Pub No.: 2017/0270654A1). With regard to Claim 4, Doganis does not explicitly disclose the method of claim 1, wherein the calibrating the camera includes determining a focal length, an aspect ratio, a first offset, and a second offset of the camera. 
Eldar teaches that calibrating the camera includes determining a focal length, an aspect ratio, a first offset, and a second offset of the camera, (Eldar teaches of an apparatus that includes an image capture module to capture depth data and sensor data for a plurality of views and an extraction module to extract a first plurality of features from the depth data and a second plurality of features from the sensor data for each view. Additionally, the apparatus includes a calibration module to calibrate the multiple cameras by matching the generated three dimensional data with the corresponding features in the first plurality of features and the second plurality of features, Abstract of Eldar. The camera calibration results in a linear calibration target model that is defined by eleven parameters. The eleven parameters include camera location (3 parameters: x, y, z), orientation (3 parameters: roll, pitch, and yaw), focal length (1 parameter: distance), pixel scale (1 parameter), pixel aspect ratio (1 parameter), and image plane center offset (2 parameters). The calibration target model, through these parameters, defines a projection for each camera that maps a three-dimensional point in space to a two-dimensional location in the camera image, Paragraph 0024 of Eldar. Eldar teaches that the calibration target 200A is a checkerboard pattern. Thus, the target 200A includes a plurality of squares, where the plurality of squares include a plurality of black squares 202 and a plurality of white squares 204, Paragraph 0029 and Figure 2A of Eldar.
It would have been obvious and well-known to one of ordinary skill in the art before the effective filing date of the claimed invention to enable the teachings of Doganis to calibrate the camera by determining a focal length, an aspect ratio, a first offset, and a second offset of the camera as taught by Eldar, because it helps create correct model parameters for use in the calibration process, Paragraphs 0023, 0033 of Eldar.)

6.) Claim(s) 6 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Doganis (US Pub No.: 2020/0005491A1) as applied to claims 1 and 7 above, and further in view of Lisin (US Patent No.: 9319666B1).

Regarding Claim 6, Doganis does not explicitly disclose the method of claim 1, wherein the calibration image includes M rows of N squares of alternating colors, M being greater than one and N being greater than one.

Lisin teaches of a calibration image that includes M rows of N squares of alternating colors, M being greater than one and N being greater than one. (Lisin teaches of a device that is configured to receive an image including a calibration pattern and apply a filter to the image based on a first coordinate plane. The device is configured to determine a first set of response peaks associated with the calibration pattern based on applying the filter, the first set of response peaks being associated with a set of control points and a set of boundary points. The device is configured to determine a second set of response peaks associated with the calibration pattern based on a second coordinate plane and a third coordinate plane, the second set of response peaks being associated with the boundary points. The device is configured to determine the control points based on determining the first set of response peaks and the second set of response peaks, and provide information that identifies the control points, Abstract and Figure 1 of Lisin.
Lisin teaches that to achieve camera calibration, the user may use the camera device to take one or more images of a calibration pattern. Effective camera calibration may require a camera calibration device to determine control points associated with the calibration pattern. A control point may include a point where a shape (e.g., a square, a circle, a line, etc.) associated with the calibration pattern intersects with another shape associated with the calibration pattern, Column 1, Lines 1-55 of Lisin. As shown in FIG. 1, the camera device may take an image of the calibration pattern (e.g., a black and white checkerboard), Column 1, Lines 60-62 and Figure 1 of Lisin. The calibration pattern may include a diagram (e.g., a figure, a design, a pattern, etc.) useful for calibrating an image. For example, the calibration pattern may include an arrangement of repeating shapes of alternately light and dark colors. In some implementations, the calibration pattern may include a regular pattern of squares in alternating colors (e.g., a checkerboard), Column 4, Lines 29-49 of Lisin.

It would have been obvious and well-known to one of ordinary skill in the art before the effective filing date of the claimed invention to enable the teachings of Doganis to have a calibration image that includes M rows of N squares of alternating colors, M being greater than one and N being greater than one, as taught by Lisin, because they are useful in solving for the camera parameters by offering a known size and spacing.)

In regard to Claim 14, Doganis does not explicitly teach the method of claim 7, wherein the calibration image includes M rows of N squares of alternating colors, M being greater than one and N being greater than one.
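The M-rows-of-N-squares limitation recited in these claims describes an ordinary checkerboard, which can be sketched in a few lines (an editor's illustration, not code from any cited reference; 0 and 1 stand in for the two alternating colors):

```python
def checkerboard(m: int, n: int):
    """Return an m x n grid of 0/1 values in checkerboard (alternating) order."""
    assert m > 1 and n > 1, "the claim requires M > 1 and N > 1"
    # A square's color depends only on the parity of its row + column index,
    # so every horizontally or vertically adjacent pair of squares differs.
    return [[(row + col) % 2 for col in range(n)] for row in range(m)]

board = checkerboard(3, 4)  # 3 rows of 4 squares of alternating colors
```

The known square size and spacing of such a grid are what make it useful for solving for camera parameters, as the rejection notes.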
Lisin teaches of a calibration image that includes M rows of N squares of alternating colors, M being greater than one and N being greater than one. (Lisin teaches of a device that is configured to receive an image including a calibration pattern and apply a filter to the image based on a first coordinate plane. The device is configured to determine a first set of response peaks associated with the calibration pattern based on applying the filter, the first set of response peaks being associated with a set of control points and a set of boundary points. The device is configured to determine a second set of response peaks associated with the calibration pattern based on a second coordinate plane and a third coordinate plane, the second set of response peaks being associated with the boundary points. The device is configured to determine the control points based on determining the first set of response peaks and the second set of response peaks, and provide information that identifies the control points, Abstract and Figure 1 of Lisin.

Lisin teaches that to achieve camera calibration, the user may use the camera device to take one or more images of a calibration pattern. Effective camera calibration may require a camera calibration device to determine control points associated with the calibration pattern. A control point may include a point where a shape (e.g., a square, a circle, a line, etc.) associated with the calibration pattern intersects with another shape associated with the calibration pattern, Column 1, Lines 1-55 of Lisin. As shown in FIG. 1, the camera device may take an image of the calibration pattern (e.g., a black and white checkerboard), Column 1, Lines 60-62 and Figure 1 of Lisin. The calibration pattern may include a diagram (e.g., a figure, a design, a pattern, etc.) useful for calibrating an image. For example, the calibration pattern may include an arrangement of repeating shapes of alternately light and dark colors.
In some implementations, the calibration pattern may include a regular pattern of squares in alternating colors (e.g., a checkerboard), Column 4, Lines 29-49 of Lisin. It would have been obvious and well-known to one of ordinary skill in the art before the effective filing date of the claimed invention to enable the teachings of Doganis to have a calibration image that includes M rows of N squares of alternating colors, M being greater than one and N being greater than one, as taught by Lisin, because they are useful in solving for the camera parameters by offering a known size and spacing.)

7.) Claim(s) 10 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Doganis (US Pub No.: 2020/0005491A1) as applied to claims 7 and 15 above, and further in view of Topal et al. (US Pub No.: 2021/0311556A1).

Regarding Claim 10, Doganis does not explicitly disclose the method of claim 7, further comprising: receiving, from the first computing device, multiple facial images of a face of a user, the multiple facial images of the face of the user having been captured by the camera; and determining features of the user's face based on the calibration of the camera and the multiple facial images.

Topal et al. teach of receiving, from the first computing device, multiple facial images of a face of a user, the multiple facial images of the face of the user having been captured by the camera; and determining features of the user's face based on the calibration of the camera and the multiple facial images. (Topal et al.
teach of a method for remotely controlling a computing device that comprises repeatedly capturing an image from a video frame, detecting a human face in the captured image, matching the detected human face to a previously detected human face, extracting facial landmarks from the matched detected human face, estimating a 3D head pose of the matched detected human face based on the extracted facial landmarks, the 3D head pose being represented in an egocentric coordinate system by a 3D pose vector which is directed from the human face, the 3D pose vector being free to rotate around the x-, y- and z-axes of the egocentric coordinate system using respective rotation matrices and free to translate along these x-, y- and z-axes using a translation vector, and controlling a user interface on a display screen of the computing device according to the estimated 3D head pose, Abstract of Topal et al.

Topal et al. teach that at least one image capturing device 160 may be arranged to repeatedly capture an image from a video frame and to provide the captured image to a processor or computation unit of the computing device 120, Paragraphs 0036-0038 and Figure 1 of Topal et al. The processor or computation unit may be arranged to match the detected human face to a previously detected human face stored in a storage unit (e.g., a memory) of the computing device 120 using, for example, face recognition algorithms, Paragraph 0039 of Topal et al. Topal et al. teach of continuously acquiring the face image of the user. The acquired images are then directed towards the computing device to be processed by a processor or a computation unit using computer vision algorithms, Paragraph 0056 of Topal et al.
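The pose representation described in that abstract, a 3-D vector free to rotate about each axis via a rotation matrix and to translate via a translation vector, can be sketched minimally as follows. This is an editor's illustration under assumed conventions (axis order and function names are not from the reference):

```python
# Illustrative sketch of a 3-D pose vector transformed by per-axis rotation
# matrices and a translation vector, as the Topal et al. abstract describes.
# The z-y-x composition order is an assumption made for the example.
import numpy as np

def rot(axis, angle):
    """Rotation matrix about one egocentric axis ('x', 'y', or 'z'), radians."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def transform_pose(pose_vec, angles, translation):
    """Rotate the pose vector about x, y, z (one matrix per axis), then translate."""
    R = rot("z", angles[2]) @ rot("y", angles[1]) @ rot("x", angles[0])
    return R @ np.asarray(pose_vec, float) + np.asarray(translation, float)
```

For instance, a forward-pointing pose vector yawed a quarter turn about z swings onto the y-axis, which is the kind of head-turn a UI controller would react to.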
It would have been obvious and well-known to one of ordinary skill in the art before the effective filing date of the claimed invention to enable the teachings of Doganis to capture multiple facial images of a face of a user and send the multiple facial images to the remote computing device as taught by Topal et al., because it provides the benefit of enabling gesture-based solutions for display device control through human-computer interaction, Paragraphs 0003, 0023-0024 of Topal et al.)

With regard to Claim 18, Doganis does not explicitly disclose the method of claim 15, further comprising: capturing multiple facial images of a face of a user; and sending the multiple facial images to the remote computing device.

Topal et al. teach of capturing multiple facial images of a face of a user; and sending the multiple facial images to the remote computing device. (Topal et al. teach of a method for remotely controlling a computing device that comprises repeatedly capturing an image from a video frame, detecting a human face in the captured image, matching the detected human face to a previously detected human face, extracting facial landmarks from the matched detected human face, estimating a 3D head pose of the matched detected human face based on the extracted facial landmarks, the 3D head pose being represented in an egocentric coordinate system by a 3D pose vector which is directed from the human face, the 3D pose vector being free to rotate around the x-, y- and z-axes of the egocentric coordinate system using respective rotation matrices and free to translate along these x-, y- and z-axes using a translation vector, and controlling a user interface on a display screen of the computing device according to the estimated 3D head pose, Abstract of Topal et al. Topal et al.
teach that at least one image capturing device 160 may be arranged to repeatedly capture an image from a video frame and to provide the captured image to a processor or computation unit of the computing device 120, Paragraphs 0036-0038 and Figure 1 of Topal et al. Topal et al. teach of continuously acquiring the face image of the user. The acquired images are then directed towards the computing device to be processed by a processor or a computation unit using computer vision algorithms, Paragraph 0056 of Topal et al.

It would have been obvious and well-known to one of ordinary skill in the art before the effective filing date of the claimed invention to enable the teachings of Doganis to receive, from the first computing device, multiple facial images of a face of a user, the multiple facial images of the face of the user having been captured by the camera, and to determine features of the user's face based on the calibration of the camera and the multiple facial images, as taught by Topal et al., because it provides the benefit of enabling gesture-based solutions for display device control through human-computer interaction, Paragraphs 0003, 0023-0024 of Topal et al.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PRITHAM DAVID PRABHAKHER, whose telephone number is (571) 270-1128. The examiner can normally be reached Monday to Friday, 8:00 am to 5:00 pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lin Ye, can be reached at 571-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Pritham David Prabhakher
Patent Examiner
Pritham.Prabhakher@uspto.gov
/PRITHAM D PRABHAKHER/
Primary Examiner, Art Unit 2638

Prosecution Timeline

Sep 16, 2024: Application Filed
Jan 15, 2026: Examiner Interview (Telephonic)
Jan 28, 2026: Non-Final Rejection (§102, §103, §DP)
Apr 14, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598386: MEMS-based Imaging Devices (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598373: Video Recording Method and Apparatus, and Electronic Device (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593122: Image Processing Method and Apparatus, Device, and Storage Medium (granted Mar 31, 2026; 2y 5m to grant)
Patent 12593151: Analog-to-Digital Converting Circuit for Optimizing Dual Conversion Gain Operation and Operation Method Thereof (granted Mar 31, 2026; 2y 5m to grant)
Patent 12593129: Apparatus and Methods for Adjusting Zoom of a PTZ Camera (granted Mar 31, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+26.1%)
Median Time to Grant: 2y 3m
PTA Risk: Low

Based on 650 resolved cases by this examiner. Grant probability derived from career allow rate.
