Prosecution Insights
Last updated: April 19, 2026
Application No. 18/001,568

METHOD FOR CALIBRATING A CAMERA AND ASSOCIATED DEVICE

Final Rejection — §102, §103
Filed
May 25, 2023
Examiner
WILBURN, MOLLY K
Art Unit
2666
Tech Center
2600 — Communications
Assignee
Renault S.A.S.
OA Round
2 (Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (407 granted / 452 resolved), +28.0% vs TC avg (above average)
Interview Lift: +8.8% (moderate), measured on resolved cases with interview
Typical Timeline: 2y 2m average prosecution; 16 applications currently pending
Career History: 468 total applications across all art units
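The headline figures above are simple ratios; a quick arithmetic check (assuming, as the page implies, that the interview lift is additive in percentage points — an assumption, not something the page states):

```python
granted, resolved = 407, 452

# Career allow rate shown in the dashboard
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 90.0%

# Grant probability with interview, assuming the +8.8% lift is
# added in percentage points and rounded for display
with_interview = allow_rate * 100 + 8.8
print(round(with_interview))  # 99
```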

Statute-Specific Performance

§101: 15.9% (-24.1% vs TC avg)
§102: 30.6% (-9.4% vs TC avg)
§103: 32.2% (-7.8% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)
TC average values are Tech Center estimates. Based on career data from 452 resolved cases.

Office Action

§102 §103
DETAILED ACTION

Claims 11-22 are currently pending. Claims 1-10 have been cancelled.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 03/19/2024 has been considered by the Examiner.

Response to Arguments

I. 35 U.S.C. 112(b). Examiner agrees the current amendment to claim 16 overcomes the previous rejection under 35 U.S.C. 112(b), and the rejection is withdrawn.

II. 35 U.S.C. 102. Applicant argues that Ogale fails to teach alignment by forming pairs of positions between a real position of an object, at a precise moment, and the position of the same object, at the same moment, in an image acquired by the camera. Examiner disagrees. It is clear throughout the Ogale reference that the images and the sensor poses are determined at the same time. For example, column 18, lines 10-15: "For example, the sensor-based pose may be based on the position of the vehicle, relative to an object, which may also be shown within the images captured by the camera system." Column 19, lines 27-31: "In other examples, the autonomous vehicle may determine a sensor-based pose based on a cue provided by the computing device in response to determining an image-based pose using camera images." This was further expressed in the rejection below by establishing that the sensor-based pose and the camera-based pose were determined based upon the same object (see also col. 16, lines 13-33: the autonomous vehicle may determine a pose of some object or objects). Further, Ogale teaches forming a pair of poses and "transform functions to transcribe a pose to another coordinate system" in column 20. This is a pair of poses acquired at the same time, used to calibrate the camera system.
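The disputed pairing step — matching each sensor-measured real position with the image position captured at the same instant — is essentially nearest-timestamp matching. A minimal illustrative sketch (not from the application or Ogale; the function name, data layout, and tolerance are all assumptions):

```python
from bisect import bisect_left

def form_position_pairs(sensor_log, detections, tol=0.02):
    """Pair each real object position with the image-plane position
    detected in the shot captured at (nearly) the same instant.

    sensor_log: time-sorted list of (timestamp, (x, y, z)) real positions.
    detections: time-sorted list of (timestamp, (u, v)) image positions.
    tol: maximum timestamp mismatch, in seconds (hypothetical value).
    Returns a list of ((x, y, z), (u, v)) position pairs.
    """
    times = [t for t, _ in detections]
    pairs = []
    for t, real_pos in sensor_log:
        i = bisect_left(times, t)
        # Candidate detections bracketing the sensor timestamp
        cands = [j for j in (i - 1, i) if 0 <= j < len(times)]
        if not cands:
            continue
        j = min(cands, key=lambda k: abs(times[k] - t))
        if abs(times[j] - t) <= tol:
            pairs.append((real_pos, detections[j][1]))
    return pairs
```

Sensor positions with no sufficiently close shot are simply dropped, so only true same-instant pairs feed the calibration.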
Therefore, at this time all rejections under 35 U.S.C. 102 are maintained.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "the device," "the memory unit," "the image processing unit," and "the computing unit" in claim 20.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 11-12, 16 and 20-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ogale (US 9,201,424).
Regarding claim 11, Ogale teaches: A method for calibrating a camera on board a motor vehicle using a reference sensor on board said vehicle (Ogale Fig. 1, vehicle with LIDAR/radar and a camera), wherein provision is made for determining calibration parameters of the camera, the method comprising:

a) acquiring, by the reference sensor, a plurality of actual positions of at least one object in an environment of the vehicle (Ogale col. 17, line 65: at block 306 the method 300 includes determining, via at least one sensor coupled to the vehicle, a sensor-based pose indicative of an orientation and position of the vehicle),

b) acquiring, using the camera, a shot at each instant when one of the actual positions is acquired by the reference sensor (Ogale col. 14, lines 18-31: at block 302 the method includes receiving, via a camera coupled to the vehicle, one or more images... During operation the camera system may be configured to focus upon and capture images of particular objects within the environment),

c) determining a position of an image of each object in the shots acquired by the camera (Ogale col. 15, lines 50-67: at block 304, the method includes, based on the one or more images, determining an image-based pose indicative of an orientation and position of the camera; see also col. 16, lines 13-33: the autonomous vehicle may determine a pose of some object or objects),

d) forming position pairs by matching each of the actual positions of each object with the position of the image of said object in the shot acquired by the camera at the instant of acquiring said actual position of the object (Ogale col. 20, lines 5-29: after determining the different poses, a computing device or system associated with the autonomous vehicle may be configured to align the image-based pose with the sensor-based pose... The computing device may use transform functions to transcribe a pose to another coordinate system to produce the alignment), and

e) determining, using a computing unit, calibration parameters of the camera, from a set of the position pairs formed (Ogale col. 20, lines 40-67: at block 310 the method 300 includes determining an adjustment of the orientation or position of the camera based on the alignment of the image-based pose with the sensor-based pose... adjustments may include refining the camera's focus or scaling... determine any variation related to yaw, pitch and roll of the camera).

Regarding claim 12, Ogale teaches: The method as claimed in claim 11, wherein, in step e), said calibration parameters of the camera are extrinsic parameters formed by coefficients of a rotation and/or translation matrix describing a switch from a reference frame associated with the vehicle to a reference frame associated with the camera. (Ogale col. 20, lines 49-54: the computing device or system associated with an autonomous vehicle may find a transformation that includes a scale factor, rotation, and translation.)

Regarding claim 16, Ogale teaches: The method as claimed in claim 11, wherein the acquisition steps a) and b) are executed while the vehicle is running along a straight line and on a substantially horizontal and flat roadway. (Ogale Fig. 5, car is traveling along a roadway.)

Regarding claim 20, Ogale teaches: A device for calibrating a camera on board a motor vehicle, configured to communicate with said camera and with a reference sensor on board the vehicle (Ogale Fig. 1, vehicle with LIDAR/radar and a camera), said reference sensor being provided to acquire a plurality of actual positions of at least one object in an environment of the vehicle (Ogale col. 17, line 65: at block 306 the method 300 includes determining, via at least one sensor coupled to the vehicle, a sensor-based pose indicative of an orientation and position of the vehicle), said device comprising:

a memory unit (Ogale Fig. 1, processor) configured to record the actual position of each object in a reference frame associated with the vehicle at a given instant and a shot acquired by the camera at the given instant (Ogale col. 14, lines 18-31: at block 302 the method includes receiving, via a camera coupled to the vehicle, one or more images... During operation the camera system may be configured to focus upon and capture images of particular objects within the environment),

an image processing unit (Ogale Fig. 1, processor) configured to determine a position of an image of each object in the shots acquired by the camera (Ogale col. 15, lines 50-67: at block 304, the method includes, based on the one or more images, determining an image-based pose indicative of an orientation and position of the camera; see also col. 16, lines 13-33: the autonomous vehicle may determine a pose of some object or objects) and to form position pairs by matching said position of the image of the object in the shot with the actual position of said object at the instant of acquisition of the shot (Ogale col. 20, lines 5-29: after determining the different poses, a computing device or system associated with the autonomous vehicle may be configured to align the image-based pose with the sensor-based pose... The computing device may use transform functions to transcribe a pose to another coordinate system to produce the alignment), and

a computing unit (Ogale Fig. 1, processor) configured to calculate calibration parameters of the camera based on a set of the position pairs formed by the image processing unit (Ogale col. 20, lines 40-67: at block 310 the method 300 includes determining an adjustment of the orientation or position of the camera based on the alignment of the image-based pose with the sensor-based pose... adjustments may include refining the camera's focus or scaling... determine any variation related to yaw, pitch and roll of the camera).

Regarding claim 21, Ogale teaches: The device as claimed in claim 20, wherein the reference sensor is chosen from the following list of sensors: a camera, a stereoscopic camera, a detection system using electromagnetic waves, and a detection system using ultrasonic waves. (Ogale Fig. 1, vehicle with LIDAR/radar and a camera.)

Regarding claim 22, Ogale teaches: The device as claimed in claim 21, wherein the detection system using electromagnetic waves is a radar or lidar system. (Ogale Fig. 1, vehicle with LIDAR/radar and a camera.)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Ogale as applied to claim 1 above, and further in view of Elangovan (US 2019/0019335).

Regarding claim 19, Ogale fails to teach: The method as claimed in claim 11, wherein step c) of determining the position of the image of each object in each shot acquired by the camera is executed by an image processing unit comprising a neural network.

Elangovan teaches: The method as claimed in claim 11, wherein step c) of determining the position of the image of each object in each shot acquired by the camera is executed by an image processing unit comprising a neural network. (Elangovan [0079], estimating a camera pose position for the captured image using a neural network.)

Before the time of filing, it would have been obvious to one of ordinary skill in the art to use the neural network of Elangovan to determine the position of the image as taught in Ogale.
The rationale for the combination is the combination of known methods to yield the predictable result of a vehicle camera pose calculated through a neural network.

Allowable Subject Matter

Claims 13-15 and 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Regarding claim 13, neither the closest known prior art, nor any reasonable combination thereof, teaches: e1) for each of the position pairs formed, a theoretical position of the image of the object is calculated, based on the actual position of said object determined in step a) and on the coefficients of the matrix, and then a difference between the theoretical position calculated and the position determined in step c) of the image of said object in the shot is evaluated; e2) the mean of all the differences evaluated in step e1) is calculated; e3) the coefficients of the matrix are modified; and e4) substeps e1) to e3) are iterated until the mean of the differences calculated in substep e2) is minimized. Claims 14 and 15 depend from claim 13 and would therefore also be allowable.

Regarding claim 17, neither the closest known prior art, nor any reasonable combination thereof, teaches: The method as claimed in claim 11, wherein, in step a), the reference sensor acquires at least 5 different actual positions of objects, dispersed in a whole of a field of view of said reference sensor covering a field of view of the camera.

Regarding claim 18, neither the closest known prior art, nor any reasonable combination thereof, teaches: The method as claimed in claim 11, wherein step c) of determining the position of the image of each object in each shot acquired by the camera is executed manually by an operator.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Molly K Wilburn, whose telephone number is (571) 272-3589. The examiner can normally be reached Monday-Friday, 8am-4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Molly Wilburn/
Primary Examiner, Art Unit 2666
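Claim 13's allowable substeps e1)-e4) describe iterative reprojection-error minimization: project each real position through candidate matrix coefficients, average the differences against the observed image positions, adjust the coefficients, and repeat until the mean is minimized. A toy sketch of that loop under stated assumptions (unit-focal pinhole camera, translation-only extrinsics where the claim recites full rotation/translation coefficients, squared differences instead of raw differences, finite-difference gradient descent; all names and data are hypothetical):

```python
def project(pt, t, f=1.0):
    # e1) theoretical image position of the object, from its real 3-D
    # position and the candidate extrinsic coefficients (translation only here)
    x, y, z = pt[0] + t[0], pt[1] + t[1], pt[2] + t[2]
    return (f * x / z, f * y / z)

def mean_sq_error(pairs, t):
    # e1) + e2): squared difference per position pair, then the mean over all pairs
    total = 0.0
    for real_pos, (u, v) in pairs:
        up, vp = project(real_pos, t)
        total += (u - up) ** 2 + (v - vp) ** 2
    return total / len(pairs)

def calibrate(pairs, lr=0.5, iters=5000, eps=1e-6):
    # e3) + e4): modify the coefficients and iterate until the mean error
    # is minimized, here via finite-difference gradient descent
    t = [0.0, 0.0, 0.0]
    for _ in range(iters):
        base = mean_sq_error(pairs, t)
        grad = []
        for i in range(3):
            tp = list(t)
            tp[i] += eps
            grad.append((mean_sq_error(pairs, tp) - base) / eps)
        t = [ti - lr * g for ti, g in zip(t, grad)]
    return t
```

On synthetic pairs generated from a known translation, the loop recovers that translation; a real implementation would optimize the full rotation/translation matrix, typically with a nonlinear least-squares solver.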

Prosecution Timeline

May 25, 2023
Application Filed
Aug 22, 2025
Non-Final Rejection — §102, §103
Nov 26, 2025
Response Filed
Mar 07, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586193
TECHNIQUES FOR COMPARING IMAGE CONTOURS OF DIFFERENT HUMAN PARTICIPANTS USING AUTOMATED TOOL
2y 5m to grant Granted Mar 24, 2026
Patent 12586202
SYSTEM AND METHOD FOR AUTOMATIC SEGMENTATION OF TUMOR SUB-COMPARTMENTS IN PEDIATRIC CANCER USING MULTIPARAMETRIC MRI
2y 5m to grant Granted Mar 24, 2026
Patent 12586211
System and Method for Event Detection using an Imager
2y 5m to grant Granted Mar 24, 2026
Patent 12579648
METHODS AND SYSTEMS FOR IDENTIFYING SLICES IN MEDICAL IMAGE DATA SETS
2y 5m to grant Granted Mar 17, 2026
Patent 12573045
CANDIDATE DETERMINATION FOR SPINAL NEUROMODULATION
2y 5m to grant Granted Mar 10, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 90% (99% with interview, +8.8% lift)
Median Time to Grant: 2y 2m
PTA Risk: Moderate
Based on 452 resolved cases by this examiner. Grant probability derived from career allow rate.
