Prosecution Insights
Last updated: April 19, 2026
Application No. 17/967,409

EXTRINSIC CAMERA CALIBRATION USING CALIBRATION OBJECT

Final Rejection — §103, §Other
Filed: Oct 17, 2022
Examiner: SALEH, ZAID MUHAMMAD
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Objectvideo Labs LLC
OA Round: 4 (Final)
Grant Probability: 65% (Favorable)
OA Rounds: 5-6
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65%, above average (28 granted / 43 resolved; +3.1% vs TC avg)
Interview Lift: +48.4% among resolved cases with interview (strong)
Typical Timeline: 3y 1m avg prosecution; 30 currently pending
Career History: 73 total applications across all art units

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      5.7%     -34.3%
§103      58.5%    +18.5%
§102      28.0%    -12.0%
§112      4.4%     -35.6%

Tech Center average is an estimate • Based on career data from 43 resolved cases

Office Action

§103, §Other
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1, 10, 12, 13, 18, and 20 are amended. Claims 1-4, 7, 10-13, 15-18, and 20-26 remain pending. Claims 5, 6, 8, 9, 14, and 19 are canceled.

Response to Amendment

The amendment filed 11/26/2025 overcomes the objection under 35 U.S.C. 132(a).

Response to Arguments

Applicant's arguments filed November 26, 2025 with respect to claims 1-4, 7, 10-13, 15-18, and 20-26 have been considered but are moot because the new grounds of rejection do not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7, 10-13, 15-18, and 20-24 are rejected under 35 U.S.C. § 103 as being unpatentable over Shen et al., US Patent Application Publication No. US-2020074683-A1 (hereinafter Shen), in view of Datta, US Patent Application Publication No. US-2014293043-A (hereinafter Datta), further in view of Konno, Patent Application Publication No. JP-2008154188-A (hereinafter Konno), and Fang, Patent Application Publication No. CN-108229488-A (hereinafter Fang).

Regarding claim 1, Shen discloses a computer-implemented method comprising:

maintaining, in a virtual space and simulating an image capture by a camera, a projection of an image depicting a calibration object onto a field of view of the camera (Shen in [0062] discloses a world coordinate system, which equates to the virtual space; furthermore, Shen in [0077] discloses "The images can depict the calibration target at a plurality of different positions and/or orientations," which equates to simulating an image capture);

generating, using the projection of the image onto the field of view of the camera in the virtual space, two or more simulated views of the calibration object in the virtual space (Shen in [0021] discloses "The plurality of calibration images from each of the imaging devices can be taken at different positions and orientations relative to the calibration target");
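To make this projection and simulated-view step concrete, here is a minimal Python sketch, not the applicant's or Shen's actual implementation, that projects a planar calibration target into two virtual camera poses through a pinhole model; the board geometry, intrinsics, and poses are all illustrative assumptions:

```python
# Minimal sketch: generate two "simulated views" of a calibration object by
# projecting its 3D points into a virtual pinhole camera. All values below
# (board size, intrinsics, poses) are illustrative assumptions.
import numpy as np
import cv2

# 7x5 grid of target corners, 30 mm spacing, in the calibration-object-centered
# coordinate system (board plane at Z = 0).
corners_3d = np.array(
    [[x * 0.03, y * 0.03, 0.0] for y in range(5) for x in range(7)],
    dtype=np.float64,
)

K = np.array([[800.0, 0.0, 320.0],   # assumed intrinsics: fx, fy, cx, cy
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                    # no lens distortion in the simulation

# Two virtual poses of the camera relative to the board (rvec, tvec).
poses = [
    (np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])),
    (np.array([0.0, 0.3, 0.0]), np.array([0.05, 0.0, 1.2])),
]

simulated_views = []
for rvec, tvec in poses:
    pts_2d, _ = cv2.projectPoints(corners_3d, rvec, tvec, K, dist)
    simulated_views.append(pts_2d.reshape(-1, 2))  # per-view pixel locations
```

Each entry of simulated_views plays the role of one simulated capture of the target at a different position and orientation.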
detecting, for each of the two or more simulated views in the virtual space, a plurality of interest points for the calibration object, at least some of the interest points for different simulated views from the two or more simulated views comprising different data (Shen in [0077]-[0078] discloses "The images can depict the calibration target at a plurality of different positions and/or orientations. Each image can be processed in order to identify features present in the image and determine their image coordinates ... the corner points of the squares (e.g., as indicated by circles 305) serve as the feature points of the calibration target 300 for performing image device calibration," wherein the corner points/features are the interest points);

aggregating, for each common interest point included in at least two of the pluralities of interest points for the two or more simulated views in the virtual space and for the calibration object, data for the common interest point for the calibration object using the different data from the respective interest points that represent the common interest point (Shen in [0096] discloses "In embodiments where a plurality of imaging devices are being calibrated, with each imaging device providing a respective set of images, the step 640 can involve identifying the features that are present in each of the respective image sets, and then determining world coordinates for only these features," wherein selecting the features present in each image set equates to the common interest points, and determining world coordinates for only these features implies aggregating for each common interest point; Shen in [0011] further discloses "the one or more imaging devices comprise a first camera and a second camera that capture images at substantially the same time. The formulating step can comprise identifying features present in both an image obtained by the first camera and a corresponding image obtained by the second camera");
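A hedged sketch of how the detecting and aggregating steps might look in practice, assuming a chessboard-style target and OpenCV's corner detector; the pattern size, the "seen in at least two views" rule, and averaging as the aggregation are our assumptions, not the claimed method:

```python
# Sketch: detect corner interest points in each view, then aggregate the
# observations of each common corner across views. Pattern size and the
# at-least-two-views averaging rule are illustrative assumptions.
import numpy as np
import cv2
from collections import defaultdict

PATTERN = (7, 5)  # inner corners per row and column (assumed)

def detect_interest_points(gray_view):
    """Return the ordered chessboard corners for one grayscale view, or None."""
    found, corners = cv2.findChessboardCorners(gray_view, PATTERN)
    return corners.reshape(-1, 2) if found else None

def aggregate_common_points(gray_views):
    """Group per-view pixel observations by corner index and aggregate them."""
    observations = defaultdict(list)
    for view in gray_views:
        corners = detect_interest_points(view)
        if corners is None:
            continue
        for idx, xy in enumerate(corners):  # same idx = same physical corner
            observations[idx].append(xy)
    # Keep only corners seen in at least two views; aggregate by averaging.
    return {idx: np.mean(obs, axis=0)
            for idx, obs in observations.items() if len(obs) >= 2}
```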
selecting, from the pluralities of interest points and using the aggregated data for the common interest points, a subset of interest points (Shen in [0096] discloses "determining world coordinates for all the features visible in the images produced by multiple imaging devices prior to determining which features are common across the different image sets");

determining, using pixel locations of the subset of interest points in the image of the calibration object captured by the camera (Shen in [0094]-[0095] discloses "image coordinates of features in the image data," wherein image coordinates equate to pixel locations, and [0096] discloses determining which features are common across different image sets, which implies the subset of interest points) and physical locations of the subset of interest points of the calibration object in a calibration object centered coordinate system (Shen in [0094]-[0095] discloses "global coordinates (world coordinates) for corresponding features on the calibration target is formulated based on the spatial relationship determined in step 630"), a transformation from the calibration object centered coordinate system to a camera centered coordinate system (Shen in [0094]-[0095] discloses "spatial relationship is determined between the one or more reference markers in each image of the plurality of images ... the spatial relationship (e.g., translation, rotation, flipping) by comparing how the position of the reference markers in the image data differs from the known positions of the reference markers on the actual calibration target").

Shen does not disclose the following limitations as further recited in the claim.

Datta discloses determining, using the transformation, a camera tilt angle and a camera mount height of the camera for use in analyzing images captured by the camera (Datta in [0006] discloses "to calculate the height of an elevated surveillance camera from knowledge of the viewing/tilt angle of the camera"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Datta into the system of Shen because calculating the height and angle would allow the system to accurately estimate size, length, and distance between objects (disclosed in [0003]).
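For the transformation and tilt/height limitations mapped above, a minimal sketch of one conventional route: cv2.solvePnP recovers the object-to-camera transform from 2D/3D point pairs, and simple trigonometry on the recovered pose yields a tilt angle and mount height. It assumes (our assumption, not the record's) that the calibration object lies flat on the floor with its Z axis pointing up:

```python
# Sketch: recover the object-to-camera transformation from 2D/3D point pairs,
# then derive a tilt angle and mount height under the assumed convention that
# the calibration object lies flat on the floor with its Z axis pointing up.
import numpy as np
import cv2

def extrinsics_from_points(obj_pts, img_pts, K, dist):
    """obj_pts: Nx3 board-frame points; img_pts: Nx2 pixel locations."""
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)       # object-to-camera rotation matrix
    return R, tvec.reshape(3)        # p_cam = R @ p_obj + t

def tilt_and_height(R, t):
    cam_center = -R.T @ t            # camera position in object coordinates
    mount_height = cam_center[2]     # height above the floor plane (Z up)
    optical_axis = R.T @ np.array([0.0, 0.0, 1.0])  # camera +Z in object frame
    # Tilt below horizontal: how far the optical axis points down.
    tilt_deg = np.degrees(np.arcsin(-optical_axis[2]))
    return tilt_deg, mount_height
```

Here mount_height is the camera center's distance above the assumed board plane and tilt_deg is how far the optical axis points below horizontal; a different floor or axis convention would change both formulas.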
Shen and Datta in combination do not disclose the following limitations as further recited in the claim.

Konno discloses transmitting, to a monitoring system, a target image and data indicating the camera tilt angle and the camera mount height to cause the monitoring system to analyze the target image captured by the camera using the camera tilt angle and the camera mount height (Konno in [Page 2, Paragraph 5] discloses "The present invention is applied to a system in which an image obtained by imaging with a camera is transmitted via a predetermined network, and the transmitted image is displayed on the receiving side. On the camera side, the camera installation position data, the direction data captured by the camera, the tilt angle data captured by the camera, and the camera installation height data are registered"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Konno into the system of Shen in view of Datta because it would allow the system to correctly interpret where objects are in the real world.

Shen, Datta, and Konno in combination do not disclose the following limitations as further recited in the claim.

Fang discloses determining, for at least some interest points from the plurality of interest points, one or more of a probability or a frequency of the interest point at a particular location (Fang in [Page 1, Last Paragraph] discloses "predict the first key number of the object area candidate block and the detection key points and the respective detection key points each point in the heat map is a key point prediction probability of the key point"; furthermore, Fang in [Page 1, Paragraph 3] discloses "outputting a key point prediction probability of each position in each of the predicted key point heat maps from the classifier layer"), comprising: selecting, from the plurality of interest points and using one or more of the probability or the frequency of the interest point at a particular location, a subset of interest points that satisfy one or more of a corresponding probability criterion or a corresponding repeatability criterion (Fang in [Page 2, Paragraph 4] discloses "a position where a key point prediction probability in the Mth detection key point heat map of the first number of the detection key point heat map exceeds a probability threshold is used as the candidate position in the any local area candidate box The position of M key points"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Fang into the system of Shen in view of Datta and Konno because it would allow the system to remove unstable or misleading feature points, keeping only the points that can be used for more reliable feature matching later.
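This Fang-style selection can be illustrated with a short sketch that keeps only interest points passing both a probability criterion and a cross-view repeatability (frequency) criterion; the thresholds and input shapes are hypothetical, not values from Fang:

```python
# Sketch: filter interest points by a per-point prediction probability and a
# cross-view repeatability frequency, keeping only points satisfying both
# criteria. Thresholds below are illustrative assumptions.
import numpy as np

PROB_THRESHOLD = 0.7     # probability criterion (assumed value)
MIN_VIEW_FRACTION = 0.8  # repeatability criterion: seen in >= 80% of views

def select_stable_points(probabilities, view_counts, num_views):
    """probabilities: (N,) heat-map scores; view_counts: (N,) detections per point."""
    prob_ok = probabilities >= PROB_THRESHOLD
    freq_ok = (view_counts / num_views) >= MIN_VIEW_FRACTION
    return np.flatnonzero(prob_ok & freq_ok)  # indices of the kept subset

# Example: 4 candidate points observed across 5 simulated views.
probs = np.array([0.95, 0.40, 0.85, 0.75])
counts = np.array([5, 5, 3, 4])
print(select_stable_points(probs, counts, num_views=5))  # -> [0 3]
```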
Summary of Citations (Fang)

[Page 1, Last Paragraph]: "predict the first key number of the object area candidate block and the detection key points and the respective detection key points Each point in the heat map is a key point prediction probability of the key point."
[Page 2, Paragraph 4]: "a position where a key point prediction probability in the Mth detection key point heat map of the first number of the detection key point heat map exceeds a probability threshold is used as the candidate position in the any local area candidate box The position of M key points."
[Page 3, Paragraph 3]: "outputting a key point prediction probability of each position in each of the predicted key point heat maps from the classifier layer."

Summary of Citations (Shen)

Paragraph [0011]: "In some embodiments, the one or more imaging devices comprise a first camera and a second camera that capture images at substantially the same time. The formulating step can comprise identifying features present in both an image obtained by the first camera and a corresponding image obtained by the second camera."
Paragraph [0021]: "The plurality of calibration images from each of the imaging devices can be taken at different positions and orientations relative to the calibration target."
Paragraph [0062]: "The model 100 includes a world coordinate system 102, a camera coordinate system 104, an image plane coordinate system 106, and a pixel coordinate system 108. The world coordinate system 102, which may also be referred to herein as a 'global coordinate system,' represents the reference frame of the scene captured by the camera and/or the environment in which the camera is operating."
Paragraph [0077]: "The images can depict the calibration target at a plurality of different positions and/or orientations."
Paragraphs [0077]-[0078]: "The images can depict the calibration target at a plurality of different positions and/or orientations. Each image can be processed in order to identify features present in the image and determine their image coordinates ... Optionally, in embodiments where multiple imaging devices are used to obtain multiple sets of image data, the correspondences between image coordinates of features across the image data sets can also be determined and used to estimate the parameters for the multiple imaging devices (e.g., position and/or orientation of the imaging devices relative to each other) ... FIG. 4 illustrates a calibration target 300 suitable for use in imaging device calibration, in accordance with embodiments ... In some embodiments, the corner points of the squares (e.g., as indicated by circles 305) serve as the feature points of the calibration target 300 for performing image device calibration. Corner points may be relatively easy to distinguish from other portions of the image data using machine vision methods."
Paragraphs [0094]-[0095]: "In step 630, a spatial relationship is determined between the one or more reference markers in each image of the plurality of images and the one or more reference markers on the calibration target. As previously described, since the reference markers are configured so as to be uniquely identifiable, the spatial relationship (e.g., translation, rotation, flipping) by comparing how the position of the reference markers in the image data differs from the known positions of the reference markers on the actual calibration target ... In step 640, a correspondence between image coordinates of features in the image data and global coordinates (world coordinates) for corresponding features on the calibration target is formulated based on the spatial relationship determined in step 630. For example, the positions of each feature point on the actual calibration target can be determined using the determined spatial relationship. Each feature can then be assigned a set of world coordinates indicating the position of the feature point relative to the world coordinate system. Accordingly, since the image coordinates and the world coordinates of each feature are known, the correspondence or mapping between the image and world coordinates can be formulated."
Paragraph [0096]: "In embodiments where a plurality of imaging devices are being calibrated, with each imaging device providing a respective set of images, the step 640 can involve identifying the features that are present in each of the respective image sets, and then determining world coordinates for only these features ... this process can be reversed, by determining world coordinates for all the features visible in the images produced by multiple imaging devices prior to determining which features are common across the different image sets."
Summary of Citations (Datta)

Paragraph [0003]: "By using information associated with the positioning of a camera, including the height of a camera above the surface that it monitors, and the angle of the line-of-view of the camera formed with a perpendicular line to the target surface, accurate estimates of size, length, and distance between objects can be calculated."
Paragraph [0006]: "Techniques are known to calculate the height of an elevated surveillance camera from knowledge of the viewing/tilt angle of the camera, knowledge of the known height of an object, such as a person in the camera viewing area, and the apparent height of the person as imaged by the camera. Other techniques are known that make use of distance detection devices to measure the distance to an object in the camera field of view and the tilt angle of the camera to determine the height of the elevated camera, or use calibration techniques making use of vanishing points and vanishing lines within the camera viewing field."

Summary of Citations (Konno)

[Page 2, Paragraph 5]: "The present invention is applied to a system in which an image obtained by imaging with a camera is transmitted via a predetermined network, and the transmitted image is displayed on the receiving side. On the camera side, the camera installation position data, the direction data captured by the camera, the tilt angle data captured by the camera, and the camera installation height data are registered."

Regarding claims 2-4, 7, and 11, Shen in the combination substantiates these claims as set forth in the previous Office Action (Final, 04/17/2025).

Regarding claim 10, Datta in the combination substantiates this claim as set forth in the previous Office Action (Final, 04/17/2025).

Regarding claim 12, apparatus claim 12 corresponds to method claim 1. Therefore, the rejection analysis of claim 1 is applicable to claim 12.

Regarding claim 13, apparatus claim 13 corresponds to method claim 3. Therefore, the rejection analysis of claim 3 is applicable to claim 13.

Regarding claim 15, apparatus claim 15 corresponds to method claim 4. Therefore, the rejection analysis of claim 4 is applicable to claim 15.

Regarding claims 16-17, Shen in the combination substantiates these claims as set forth in the previous Office Action (Final, 04/17/2025).

Regarding claim 18, the claim corresponds to method claim 7 and is therefore rejected under the same rationale.

Regarding claim 20, the claim recites a non-transitory computer storage medium corresponding to method claim 1. Therefore, the rejection of claim 1 is applicable to claim 20.

Regarding claims 21-24, Shen in the combination substantiates these claims as set forth in the previous Office Action (Final, 04/17/2025).

Claims 25 and 26 are rejected under 35 U.S.C. § 103 as being unpatentable over Shen in view of Datta, Konno, and Fang, and further in view of Daniel, "SuperPoint: Self-Supervised Interest Point Detection and Description" (hereinafter Daniel).

Regarding claims 25 and 26, the combined teachings of Shen, Datta, Konno, and Fang as a whole do not teach the further limitations as recited; however, Daniel does. The grounds for rejection of claims 25 and 26 set forth in the previous Office Action (Final, 04/17/2025) apply here.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAID MUHAMMAD SALEH, whose telephone number is (703) 756-1684. The examiner can normally be reached M-F, 8 am - 5 pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZAID MUHAMMAD SALEH/
Examiner, Art Unit 2668
02/15/2026

/VU LE/
Supervisory Patent Examiner, Art Unit 2668

Prosecution Timeline

Oct 17, 2022
Application Filed
Nov 13, 2024
Non-Final Rejection — §103, §Other
Jan 24, 2025
Response Filed
Apr 09, 2025
Final Rejection — §103, §Other
Jun 13, 2025
Response after Non-Final Action
Jun 24, 2025
Request for Continued Examination
Jun 27, 2025
Response after Non-Final Action
Aug 21, 2025
Non-Final Rejection — §103, §Other
Nov 11, 2025
Interview Requested
Nov 18, 2025
Examiner Interview Summary
Nov 26, 2025
Response Filed
Feb 16, 2026
Final Rejection — §103, §Other (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602944
AUTHENTICATION OF DENDRITIC STRUCTURES
2y 5m to grant • Granted Apr 14, 2026
Patent 12586501
DISPLAY DEVICE, DISPLAY METHOD, AND STORAGE MEDIUM
2y 5m to grant • Granted Mar 24, 2026
Patent 12586396
INFORMATION PROCESSING APPARATUS AND SYSTEM
2y 5m to grant • Granted Mar 24, 2026
Patent 12562535
METHOD FOR DETECTING UNDESIRED CONNECTION ON PRINTED CIRCUIT BOARD
2y 5m to grant • Granted Feb 24, 2026
Patent 12555344
METHOD AND APPARATUS FOR IMPROVING VIDEO TARGET DETECTION PERFORMANCE IN SURVEILLANCE EDGE COMPUTING
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 65%
With Interview: 99% (+48.4%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 43 resolved cases by this examiner. Grant probability derived from career allow rate.
