Prosecution Insights
Last updated: April 19, 2026
Application No. 18/505,747

ELECTRONIC DEVICE AND A CONTROL METHOD THEREOF

Final Rejection — §103, §112
Filed: Nov 09, 2023
Examiner: YAO, JULIA ZHI-YI
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Kia Corporation
OA Round: 2 (Final)
Grant Probability: 68% (Favorable)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (47 granted / 69 resolved) — above average, +6.1% vs Tech Center average
Interview Lift: +35.7% — strong lift for resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 29 applications currently pending
Career History: 98 total applications across all art units

Statute-Specific Performance

§101: 8.9% (−31.1% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 11.2% (−28.8% vs TC avg)
§112: 26.1% (−13.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 69 resolved cases

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-16 were pending for examination in Application No. 18/505,747, filed November 9th, 2023. In the remarks and amendments received on January 2nd, 2026, claims 1, 4, 9, and 12 are amended and claims 2-3, 5-6, 10-11, and 13-14 are canceled. Accordingly, claims 1, 4, 7-9, 12, and 15-16 are currently pending for examination in the application.

Response to Amendment

Applicant's amendments filed January 2nd, 2026, to the Specification and Claims have overcome each and every objection previously set forth in the Non-Final Office Action mailed October 2nd, 2025. The examiner warmly thanks Applicant for considering the suggested amendments to the disclosure. Additionally, the examiner acknowledges Applicant's remarks regarding the claim terms interpreted under 35 U.S.C. § 112(f); regardless, the examiner maintains the interpretation of those claim terms under 35 U.S.C. § 112(f) as set forth in the previous Office Action.

Response to Arguments

Applicant's arguments filed January 2nd, 2026, regarding the rejection(s) of the independent claim(s) have been fully considered but are not persuasive. The examiner respectfully disagrees with Applicant's assertion that "Mukherjee and Lee fail to disclose 'determining a first physical quantity and a second physical quantity for a dynamic object through mutually independent calculations and then comparing a difference between the two physical quantities to determine whether to adjust calibration information' when a dynamic object is determined" (see pg. 9 of Applicant's Remarks). As detailed in the current rejection below, paras. [0040], [0044-0046], [0053-0054], and [0061] of Mukherjee disclose, when a target is determined to be a dynamic object (e.g., a "lead vehicle"), determining a first physical quantity and a second physical quantity in mutually independent calculations as either a "measured" and/or "known" physical quantity of a "vanishing point" (e.g., "tail light width" or "angle of the horizon") and comparing the physical quantities to determine a difference between them (i.e., a difference between the "measured" and "known" physical quantities) to determine whether to adjust the calibration information (e.g., "updated calibration parameters, such as the updated height of the camera with respect to the ground"); paras. [0066], [0074], and [0064] of Lee further teach, in the same field of endeavor of determining a vanishing point for a detected dynamic object, the first physical quantity as a first physical quantity to the target (e.g., a "vanishing point" of the target, such as a distance to a lower end of a center of a bounding box of the target—see Fig. 5A and paras. [0026] and [0072] of Lee).

Further, the examiner respectfully disagrees that "Liu and Zarubin fail to cure the deficiencies of Mukherjee and Lee" because "Liu fails to disclose 'determining a first physical quantity using an amount of movement of a keypoint, determining a vehicle speed received from a vehicle communication device as a second physical quantity, and determining whether to adjust camera calibration information using a difference between the first physical quantity and the second physical quantity' as shown in claim 1" and "Zarubin fails to disclose or teach movement-based classification and dual physical quantities as shown in the present application" (see pg. 10 of Applicant's Remarks). As detailed in the current rejection below, lines 1-10 of col. 5, lines 47-61 of col. 11, and lines 21-43 of col. 14 of Liu disclose, when the target is determined to be a static object, determining a first physical quantity using an amount of movement of a keypoint as a "LIDAR-derived velocity" of a feature keypoint in a LIDAR frame and a second physical quantity as an "IMU-derived velocity" vehicle speed received from a vehicle communication device (i.e., the "IMU"), and adjusting the calibration information (e.g., an "offset time" for "LIDAR-derived longitudinal velocities" obtained from a LIDAR device) in response to determining a difference (e.g., an "offset") between the first and second physical quantities (e.g., a "difference between the first set of longitudinal velocities and a second set of longitudinal velocities", where the "first set" is IMU-derived and the "second set" is LIDAR-derived).

Additionally, in response to applicant's remarks against Zarubin, the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981). As detailed in the current rejection below, Zarubin is merely combined with Mukherjee in view of Lee to teach the second physical quantity, when the object is determined to be a dynamic object, as a ratio between actual width information of the target and width information on an image of the target; Mukherjee in view of Lee discloses and/or teaches the movement-based classification and dual physical quantities when the target is determined to be a dynamic object as recited in the currently presented claims.

Priority

(Previously Presented) Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed as foreign Patent Application No. KR 10-2023-0101656, filed on August 3rd, 2023.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier, as explained in MPEP § 2181, subsection I (note that the list of generic placeholders below is not exhaustive, and other generic placeholders may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph):

A. The Claim Limitation Uses the Term "Means" or "Step" or a Generic Placeholder (A Term That Is Simply a Substitute for "Means")

With respect to the first prong of this analysis, a claim element that does not include the term "means" or "step" triggers a rebuttable presumption that 35 U.S.C. 112(f) does not apply. When the claim limitation does not use the term "means," examiners should determine whether the presumption that 35 U.S.C. 112(f) does not apply is overcome. The presumption may be overcome if the claim limitation uses a generic placeholder (a term that is simply a substitute for the term "means"). The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f): "mechanism for," "module for," "device for," "unit for," "component for," "element for," "member for," "apparatus for," "machine for," or "system for." Welker Bearing Co. v. PHD, Inc., 550 F.3d 1090, 1096, 89 USPQ2d 1289, 1293-94 (Fed. Cir. 2008); Mass. Inst. of Tech. v. Abacus Software, 462 F.3d 1344, 1354, 80 USPQ2d 1225, 1228 (Fed. Cir. 2006); Personalized Media, 161 F.3d at 704, 48 USPQ2d at 1886-87; Mas-Hamilton Group v. LaGard, Inc., 156 F.3d 1206, 1214-1215, 48 USPQ2d 1010, 1017 (Fed. Cir. 1998). Note that there is no fixed list of generic placeholders that always result in 35 U.S.C. 112(f) interpretation, and likewise there is no fixed list of words that always avoid 35 U.S.C. 112(f) interpretation. Every case will turn on its own unique set of facts.

Such claim limitation(s) is/are: "communication device configured to receive object information…" in claim 1, implemented on hardware disclosed in paras. [0060-0070].

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 4, and 7-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 1, the last claim limitation, "wherein the processor is configured to adjust the calibration information based on a difference between the first physical quantity and the second physical quantity", renders the claim indefinite because it is unclear whether the phrase is referring to the "first" and "second" physical quantities "when the target is determined as a dynamic object" or "when the target is determined as a static object". For examination purposes, this limitation will be interpreted for both determinations. Furthermore, claims 4 and 7-8 inherit this indefiniteness in view of their dependency from claim 1.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4, 9, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Mukherjee et al. (Mukherjee; US 2025/0037311 A1) in view of Lee et al. (Lee; US 2022/0383529 A1), further in view of Zarubin et al. (Zarubin; RU 2470376 C2), and furthermore in view of Liu (US 11,187,793 B1).

Regarding claim 1, Mukherjee discloses an electronic device, comprising:

a communication device configured to receive object information of a target captured by a camera and recognized based on a deep learning model (paras. [0021], [0026], and [0034], which recite: [0021] "In-vehicle computing system 109 may analyze the input received from the camera 160…"; [0026] "Camera 160 may include an object detection system 232 for detecting objects. For example, object detection system 232 may receive sensor data from an image sensor of camera 160 and may identify objects in the environment surrounding the vehicle, such as traffic lights, other vehicles, pedestrians, and the like. The outputs of object detection system 232 may be used for a variety of systems, such as for notifying a user of an object."; [0034] "For example, an image of the lead vehicle 330 (e.g., a rear-view image) may input to a network (e.g. neural network) trained to output a classified vehicle type of the lead vehicle 330. …"; where the "objects in the environment surrounding the vehicle" are targets captured by a camera, including a "lead vehicle");

a memory storing calibration information of the camera and physical information of the target (paras. [0029] and [0049], which recite: [0029] "Camera 160 further includes a calibration module 234. The calibration module 234 may include instructions stored in memory that are executable by a processor to determine a position and orientation of camera 160 with respect to a road on which the vehicle is traveling. Calibration methods, as described herein, may comprise calculating the orientation of the camera, the height of the camera with respect to the ground, and calculating a region of interest (ROI) within the camera's field of view. The calibration methods disclosed herein may use known geometry (which may depend on the type of vehicle) of a leading vehicle to compute distances within the camera's field of view. Object detection may be used to classify the type of vehicle within the camera's view, e.g. identify whether a vehicle is a sedan, pickup truck, semi-truck, or other type of vehicle. Vehicle classification may be performed, by example, through the use of a trained network which searches for features specific to the various classified vehicle types. Object detection systems may also identify regions within the identified objects, such as the tires or tail lights of a vehicle. The object detection system may, for example, identify the tail lights by their color and ascribe their locations to their geometric centers. The object detection system may also identify tires by the geometric centers of their visible parts. Additional details about calibrating the camera via the calibration module are described below."; [0049] "The ROI may be calibrated using both angle and distance information. For example, the size and shape of the ROI may be computed by drawing a detected region on the camera's view corresponding to all points within a certain range of distance and angles."; where the camera "orientation", camera "height", and/or "size and shape of the ROI" is/are calibration information, and "a region of interest (ROI)", "type of vehicle", and/or "identif[ied] regions within the identified objects" (e.g., "tires or tail lights") is/are physical information of the target); and

a processor configured to determine movement information of the target based on the object information (para. [0050], which recites: "The ROI may be a fixed portion of the camera's view, representing an area of space directly in front of the ADAS-equipped vehicle. The ROI may be used, for example, to determine that other objects (e.g. vehicles, pedestrians, road hazards, etc.) are in a predefined range of the vehicle. Objects may be detected, for example, when motion is observed within the detection region of interest. …"; where "motion" is movement information based on object information (e.g., the "region of interest")),

determine a first physical quantity… (paras. [0044-0046] and [0053-0054], which recite: [0044] "FIG. 4 shows a scene 400 with a road and one vehicle 412. The scene 400 is shown as an example viewed from the perspective of the camera affixed to an ADAS-equipped vehicle. The scene shows a horizon 404, which represents the greatest distance visible to the camera. …The pitch, yaw, and roll may represent rotations along the X, Y, and Z axes, respectively. From the camera's perspective, parallel lines (such as the sides of the road) stretching into the distance will appear to converge to a single point, herein referred to as the vanishing point 406 of the scene."; [0045] "The pitch of the camera may be resolved by comparing the y-coordinate of the vanishing point 406 to the y-coordinate of the center of the camera's vision 402. If center of the camera's vision 402 is on the horizon, the pitch may be zero. The pitch of the camera may be used to find the horizon line 372 of FIG. 3D, therefore allowing for the angle 376 between the horizon and the rear tires 380 to be measured. Measurement of the camera's pitch may therefore allow for the height of the camera to be calculated."; [0046] "The yaw of the camera may be similarly evaluated by comparing the x-coordinate of the center of the camera's vision 402 to the x-coordinate of the vanishing point 406."; [0053] "…By comparing the apparent tail light width measured at 516 to the known tail light width and the known track width to the measured track width for the given vehicle class, distances in the scene captured by the camera may be resolved, including the distance from the camera to the center of the vehicle's tail lights or the center of the vehicle's rear tires."; [0054] "At 518, a search for the horizon and/or vanishing point of the camera view may be performed. For example, the horizon and/or vanishing point may be identified in the single image that includes the rear view of the lead vehicle. The orientation of the camera (e.g. pitch, roll, and yaw) may also be approximated with information in the camera's view at 520. …At 522, the updated height of the camera is calculated. Using the orientation of the camera calculated at 520, the apparent distances between the tires and/or tail lights, and geometric constructions, such as those detailed in FIGS. 3A through 3D, the updated height of the camera with respect to the ground may be calculated at 522. The updated height may be calculated, for example, by comparing the apparent angle of the horizon to the apparent angle of the tires."; where the distance based on a "vanishing point", either the "measured" and/or "known" "tail light width", and/or the "angle of the horizon" is a first physical quantity of the target (e.g., the "lead vehicle") based on the movement information (e.g., the identified lead vehicle in motion) and the calibration information (e.g., camera "orientation" including the camera's position—e.g., "center", "x-coordinate", etc.)),

determine a second physical quantity to the target based on the movement information and the physical information (paras. [0044-0046] and [0053-0054]—see citations immediately above—where paras. [0040], [0053], and [0061] further recite: [0040] "FIG. 3C shows a similar technique to FIG. 3B, using the apparent positions of the tires of the lead vehicle 330 to calculate distances. …"; [0053] "The tail lights and/or tires of the lead vehicle are identified at 514. For example, the location of each tail light (e.g., a centermost point of each tail light) and/or a location of each rear tire (e.g., a centermost point of each rear tire) in both the Y and X axes may be identified in a single image acquired by the camera. At 516, the apparent tail light width and/or the measured track width of the lead vehicle is measured. The locations of the rear tires and tail lights may be recorded, allowing the apparent distance (as seen from the camera) to be measured (e.g. using the number of pixels between the objects) at 516. By comparing the apparent tail light width measured at 516 to the known tail light width and the known track width to the measured track width for the given vehicle class, distances in the scene captured by the camera may be resolved, including the distance from the camera to the center of the vehicle's tail lights or the center of the vehicle's rear tires."; [0061] "In another representation, a method for calibrating a camera of a vehicle includes detecting a lead vehicle in an image generated by the camera, classifying the lead vehicle into one of a plurality of vehicle types, measuring a track width of the lead vehicle in the image, comparing the measured track width to a known track width selected based on the classified vehicle type of the lead vehicle, and adjusting a detection region of interest of the camera based on the comparison."; where the distance based on vehicle dimensions (e.g., "tail lights and/or tires" widths), the other "measured" and/or "known" "tail light width" not selected as the first physical quantity above, and/or the "angle of the tires" is a second physical quantity to the target (e.g., the "lead vehicle") based on the movement information (e.g., the identified lead vehicle in motion) and the physical information (e.g., "tail light widths" or tire widths and/or the "locations of the rear tires and tail lights")),

and compare the first physical quantity and the second physical quantity to determine whether to adjust the calibration information (paras. [0044-0046], [0053], and [0054]—see citations in the limitation "determine a first physical quantity…" above—where "updat[ing] the height of the camera" is adjusting the calibration information determined based on at least comparing the first physical quantities and the second physical quantities of: the "angle of the horizon" and the "angle of the tires"; and/or the measured "tail light width" and the known "tail light width" (which is equivalent to comparing the distance based on the measured "tail width" and the distance based on the known "tail width" in order to "resolve[d]" the distance determinations)),

wherein the processor is configured to: when the target is determined as a dynamic object based on the movement information (para. [0050]—see citation in the claim 1 limitation "determine movement information…" above—where detecting objects in "motion" is determining a dynamic object), determine a distance to… (point) (paras. [0044-0046]—see citation in the claim 1 limitation "determine a first physical quantity…" above—where the distance based on the vanishing point is a distance determined to a point (e.g., from the camera to the "vanishing point") as at least a first physical quantity); and determine the second physical quantity as… (paras. [0040], [0053], and [0061]—see citations in the claim 1 limitation "determine a second physical quantity…" above—where the "known" "tail light width" is actual width information included in the object information and the "measured" "tail light width" is the width information on an image of the target included in the object information),

wherein the processor is configured to adjust the calibration information in response to determining a difference between the first physical quantity and the second physical quantity (paras. [0044-0046], [0053], and [0054]—see the limitation "compare the first physical quantity…" above—where comparing the distance based on at least the measured "tail width" and the distance based on the known "tail width" in order to "resolve[d]" the distance determinations for generating "updated calibration parameters, such as the updated height of the camera with respect to the ground" is adjusting calibration information (e.g., "updat[ing] the height of the camera") in response to determining a difference between the first physical quantity and the second physical quantity (e.g., a distance difference between the measured "tail width" and the known "tail width")).
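To make the geometry the examiner reads onto Mukherjee concrete, here is a minimal sketch of recovering distance from a known width under a pinhole-camera model and flagging a calibration update when two independently derived distances disagree. This is illustrative only; all function names, constants, and the tolerance are hypothetical, not taken from the cited references.

```python
# Illustrative sketch only: a pinhole-camera reading of comparing a "measured"
# width against a "known" width for a classified lead vehicle.
# Names and numbers are hypothetical, not from Mukherjee.

def distance_from_known_width(focal_px: float, known_width_m: float, apparent_width_px: float) -> float:
    """Pinhole model: an object of width W at distance Z spans w = f*W/Z pixels, so Z = f*W/w."""
    return focal_px * known_width_m / apparent_width_px

def needs_calibration_update(dist_a_m: float, dist_b_m: float, tol_m: float = 0.5) -> bool:
    """Two independently derived distances to the same lead vehicle should agree;
    a persistent gap suggests the stored camera parameters (e.g., height/pitch) have drifted."""
    return abs(dist_a_m - dist_b_m) >= tol_m

f_px = 1200.0                                       # focal length in pixels (hypothetical)
z1 = distance_from_known_width(f_px, 1.45, 58.0)    # via known tail-light spacing -> 30.0 m
z2 = distance_from_known_width(f_px, 1.60, 63.0)    # via known track width       -> ~30.5 m
print(z1, z2, needs_calibration_update(z1, z2))     # difference ~0.48 m -> False
```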
Where Mukherjee does not specifically disclose "determine a first physical quantity to the target…" and "…determine a distance to a lower end of a center of a bounding box included in the object information…", Lee teaches, in the same field of endeavor of determining a vanishing point for a determined dynamic object, determine a first physical quantity to the target… (paras. [0066], [0074], and [0064], which recite: [0066] "In operation 420, the vanishing point estimation apparatus may determine a bounding box (e.g., a bounding box 515 in FIG. 5A) corresponding to a rear side of the target vehicle based on the classified type. For example, the vanishing point estimation apparatus may retrieve width information prestored for each vehicle type. A width of a vehicle may be in a range between 1600 millimeters (mm) and 2500 mm according to a vehicle type. For example, when the type of the target vehicle is determined to be a passenger vehicle, a retrieved full width of the target vehicle may be approximately 1800 mm. For another example, when the type of the target vehicle is determined to be a truck, a retrieved full width of the target vehicle may be approximately 2500 mm. The vanishing point estimation apparatus may generate a bounding box corresponding to the width of the target vehicle based on the width information. The bounding box may be generated by, for example, an artificial intelligence (AI) algorithm including machine learning and deep learning. The vanishing point estimation apparatus may detect the target vehicle, or the rear side of the target vehicle, using the bounding box. A non-limiting example of a result of the detecting performed in operation 420 may be illustrated in FIG. 5A."; [0074] "In an example, the vanishing point estimation apparatus may calculate a distance between each of the objects and the vanishing point estimated for each of the objects based on the vanishing point estimated for each of the objects and output the calculated distance."; [0064] "In operation 410, the vanishing point estimation apparatus may obtain an image of a current time point of objects including a target vehicle. For example, the vanishing point estimation apparatus may capture the image of the objects including the target vehicle in front using a camera or a sensor and obtain an image of vehicles in front at the current time point. The size of the image of the target vehicle obtained through the camera may change based on a distance between the camera and the target vehicle."; where the "distance between the camera and the target vehicle" calculated using the "vanishing point" is at least a first physical quantity to a target), and …determine a distance to a lower end of a center of a bounding box included in the object information… (Fig. 5A and paras. [0026] and [0072], which recite: [0026] "The determining of the lower edge may include determining coordinates of a lower center of the bounding box, and the determining of the vanishing point of the object may include determining a position of the vanishing point in a vertical direction based on the coordinates of the lower center of the bounding box, the position of the object in the world coordinate system, and one or more intrinsic parameters of a camera used to obtain the image."; [0072] "For example, the vanishing point estimation apparatus may estimate a vanishing point of an object among objects based on a relationship among a center position of a lower end of a bounding box corresponding to the object, a height h_c of a camera capturing an image of the objects, and a position of a bottom edge of a bounding box corresponding to the object in the image plane. A non-limiting example method of estimating a vanishing point of each object by the vanishing point estimation apparatus will be described in detail with reference to FIGS. 5A through 5C." [image of Lee, Fig. 5A, omitted]; where the "lower center of the bounding box" is a lower end of a center of a bounding box, and the bounding box is a region of interest).

Since Mukherjee and Lee each disclose a vanishing point having a location from a camera to determine the distance from the camera to the vanishing point of a dynamic object (e.g., a lead vehicle), a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the first physical quantity of Mukherjee (i.e., a horizon vanishing point) could have been substituted for the first physical quantity of Lee (i.e., a target vanishing point comprising determining a distance to a lower end of a center of a bounding box included in object information) because both the horizon vanishing point and the target vanishing point serve the purpose of providing a vanishing point to determine distances from a camera.

Where Mukherjee in view of Lee does not specifically disclose "wherein the processor is configured to: …determine the second physical quantity as a ratio between actual width information of the target… and width information on an image of the target…", Zarubin teaches, in the same field of endeavor of detecting distances from cameras when a target is determined as a dynamic object, this limitation (description, para. [0024], which recites: "Using the measured dimensions of the license plate image in the video frame (Sx, Sy, αx, and αy), the width-to-height ratio of the license plate image in the video frame is calculated, then compared with the reference value for the given type of recognized license plate. Based on these comparisons, the license plate narrowing factor is calculated, which is used to adjust the measured width of the license plate image in the video frame. Based on the measured value of the focal length of the video camera lens, taking into account the parameters of the video camera matrix and the adjusted width of the image of the license plate, the distance 'L' from the video camera to the center of the license plate of the vehicle is determined (see Fig. 15). From the ratio V/f=W/L, the desired distance is determined as the distance 'L', where 'V' is the corrected width of the license plate image on the video frame, 'W' is the standard width of the recognized license plate, 'L' is the desired distance from the video camera matrix to the license plate, 'f' is the focal length of the video camera. Thus, the required distance from the video camera to the vehicle is determined as the distance from the video camera to the center of the license plate."; where the ratio V/f=W/L is equivalent to L=(W*f)/V; that is, "L" (the distance from the camera—here, the "video camera matrix"—to the target—here, the recognized license plate) is equal to a ratio of at least "W" (the actual width information of the target—here, the "standard width of the recognized license plate") and "V" (the width information on an image of the target—here, the "corrected width of the license plate image on the video frame")).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Mukherjee in view of Lee to incorporate the second physical quantity as a ratio between actual width information of the target (the actual width information being included in the physical information) and width information on an image of the target (the width information being included in the object information), in order to calculate the distance from the target to the camera using the width information, to adjust the calibration information by comparing this second physical quantity with a first physical quantity of a reference value for the captured target, and to compensate for distortions of the captured target in the image, as taught by Zarubin (description, para. [0024]—see citation above—where para. [0015] further recites: "Using the relative positions of the video camera and the inspection zone, as well as the geometric dimensions of the license plate image, the plate taper factor is determined. Since the license plate image in the video frame is most often not perpendicular to the camera's line of sight, the apparent dimensions of the license plate are distorted relative to the actual dimensions. The tapering factor is needed to compensate for the distortion of the plate width projection and to calculate the corrected plate width projection value.").
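Rearranging the similar-triangles relation the examiner quotes from Zarubin gives the distance directly; the numbers below are illustrative, not from the reference:

$$\frac{V}{f}=\frac{W}{L}\;\Longrightarrow\;L=\frac{W\,f}{V},\qquad \text{e.g. } W=0.52\ \text{m},\ f=1200\ \text{px},\ V=31.2\ \text{px}\ \Rightarrow\ L=\frac{0.52\times 1200}{31.2}=20\ \text{m}.$$

Note the units: with $f$ and $V$ both in pixels, they cancel and $L$ comes out in the units of $W$. This same rearrangement is what claim 4 recites (multiply actual width by focal distance, divide by image width).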
Where Mukherjee, as modified by Lee and Zarubin, does not specifically disclose "wherein the processor is configured to: when the target is determined as a static object based on the movement information, determine the first physical quantity using an amount of movement of a keypoint included in the object information, receive a current speed through the communication device, and determine the current speed as the second physical quantity, and wherein the processor is configured to adjust the calibration information in response to determining that a difference between the first physical quantity and the second physical quantity", Liu teaches, in the same field of endeavor of sensor calibration, wherein the processor is configured to: when the target is determined as a static object based on the movement information (lines 1-10 of col. 5, which recite: "A LIDAR sensor can output 3D depth images (hereinafter 'LIDAR frames') containing points that represent distances from the LIDAR sensor to points on surfaces in the field of view of the LIDAR sensor (i.e., in the field around the autonomous vehicle). The controller can derive changes in the position and orientation of the LIDAR sensor in real space, the LIDAR sensor's linear and angular speed, and the LIDAR sensor's linear accelerations along multiple axes, etc. from changes in positions of immutable (e.g., static) objects detected in a sequence of LIDAR frames."; where an "immutable (e.g., static) object[] detected" is a static object), determine the first physical quantity using an amount of movement of a keypoint included in the object information (lines 1-10 of col. 5—see citation immediately above—where lines 47-61 of col. 11 and lines 21-43 of col. 14 further recite: [lines 47-61 of col. 11] "Block S130 of the method S100 recites deriving a first set of longitudinal velocities of a reference point on the autonomous vehicle from a second sequence of inertial data received from the IMU over a second period of time between an initial time and the first time; Block S132 of the method S100 recites deriving a second set of longitudinal velocities of the reference point on the autonomous vehicle based on features detected in an initial LIDAR frame received from the LIDAR sensor at the initial time and features detected in a first LIDAR frame received from the LIDAR sensor at the first time; and Block S134 of the method S100 recites calculating a first LIDAR sensor offset time that approximately minimizes a second difference between the first set of longitudinal velocities and the second set of longitudinal velocities."; [lines 21-43 of col. 14] "As described above, the autonomous vehicle can then compare: an IMU-derived velocity of the autonomous vehicle in the longitudinal direction over a period of time (e.g., 200 milliseconds up to the current time); and a LIDAR-derived velocity of the autonomous vehicle in the longitudinal direction over this same period of time (e.g., ten velocity values derived from a contiguous sequence of eleven LIDAR frames recorded by the LIDAR sensor operating at a frame rate of 20 Hz over the most-recent 200-millisecond interval). With the IMU-derived longitudinal velocities as a reference, the autonomous vehicle can: select an offset time test value within a preset range (e.g., −10 milliseconds in a range from −10 milliseconds to +10 milliseconds with 1-millisecond steps); shift the LIDAR-derived longitudinal velocities along the IMU-derived longitudinal velocities by the offset time test value; calculate an area between the IMU-derived longitudinal velocities and the adjusted LIDAR-derived longitudinal velocities; and repeat this process for each other offset time test value in the preset range. The autonomous vehicle can then store an offset time test value—in this preset range—that minimizes the area between the IMU-derived longitudinal velocities and the adjusted LIDAR-derived longitudinal velocities as 'd_LIDAR_IMU,i,initial.'"; where the "LIDAR-derived velocity" is determining a first physical quantity using an amount of movement (e.g., "changes in positions") of a keypoint (e.g., "points" of the "immutable (e.g., static) objects detected in a sequence of LIDAR frames") when the target is a static object), receive a current speed through the communication device, and determine the current speed as the second physical quantity (lines 1-10 of col. 5, lines 47-61 of col. 11, and lines 21-43 of col. 14—see citations above—where the "IMU-derived velocity" is a current speed received through a communication device when the target is determined as a static object), and wherein the processor is configured to adjust the calibration information in response to determining a difference between the first physical quantity and the second physical quantity (lines 1-10 of col. 5, lines 47-61 of col. 11, and lines 21-43 of col. 14—see citations above—where comparing the "IMU-derived velocity of the autonomous vehicle" and the "LIDAR-derived velocity of the autonomous vehicle" is determining a difference (e.g., the "difference between the first set of longitudinal velocities and a second set of longitudinal velocities", where the "first set" is IMU-derived and the "second set" is LIDAR-derived) to adjust calibration information (e.g., the "offset time" for "LIDAR-derived longitudinal velocities" obtained from LIDAR devices)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Mukherjee, as modified by Lee and Zarubin, to incorporate determining the first physical quantity as an amount of movement of a keypoint included in the object information and the second physical quantity as a current speed received through the communication device when the target object is a static object, in order to calibrate different sensors (e.g., a camera sensor and a speed sensor) by adjusting calibration information upon determining a difference between the first physical quantity using an amount of movement of a keypoint included in the object information (e.g., a "LIDAR-derived velocity") and a second physical quantity of a current speed received through a communication device (e.g., an "IMU-derived velocity"), as taught by Liu above.
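A minimal sketch of the offset-time search the examiner reads onto Liu follows. It is illustrative only: the synthetic velocity curves, the 1 ms grid, and the rectangle-rule area metric are assumptions for demonstration, not Liu's implementation.

```python
# Illustrative sketch: shift LIDAR-derived velocities against IMU-derived
# velocities and keep the shift that minimizes the area between the two curves.
# Data, grid, and area metric are hypothetical.
import numpy as np

t = np.arange(0.0, 0.2, 0.01)                  # 200 ms window, 10 ms samples
v_imu = 15.0 + 0.5 * np.sin(10 * t)            # IMU-derived longitudinal velocity (reference)
true_offset = 0.004                            # 4 ms latency we hope to recover
v_lidar = 15.0 + 0.5 * np.sin(10 * (t - true_offset))  # LIDAR-derived velocity, delayed

def area_between(offset: float) -> float:
    """Area between the IMU curve and the LIDAR curve shifted by a test offset."""
    v_shifted = np.interp(t, t - offset, v_lidar)      # LIDAR curve moved earlier by `offset`
    return float(np.sum(np.abs(v_imu - v_shifted)) * 0.01)

candidates = np.arange(-0.010, 0.011, 0.001)   # -10 ms .. +10 ms in 1 ms steps
best = min(candidates, key=area_between)       # stored as the LIDAR offset-time calibration
print(f"estimated offset: {best * 1000:.0f} ms")  # ~4 ms
```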
Regarding claim 4, Mukherjee, as modified by Lee, Zarubin, and Liu, discloses the electronic device of claim 1, wherein Zarubin further teaches the processor is configured to divide i) a value obtained by multiplying the actual width information by a focal distance of the camera by ii) the width information on the image to determine the second physical quantity (description, para. [0024]—see citation in claim 1 above—where the second physical quantity L=(W*f)/V is obtained by dividing i) the value obtained by multiplying the actual width information (W; the "standard width" of a target) by a focal distance (f; the "focal length") by ii) the width information on the image (V; the width depicted in the "video frame")).

Regarding claim 9, the claim is the method performed by the device of claim 1. Therefore, claim 9 recites similar limitations to claim 1 and is rejected for similar rationale and reasoning (see the analysis for claim 1 above).

Regarding claim 12, the claim recites similar limitations to claim 4 and is rejected for similar rationale and reasoning (see the analysis for claim 4 above).

Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Mukherjee, as modified by Lee, Zarubin, and Liu, as applied to claims 1 and 9 above, and further in view of Wang et al. (Wang-236; CN 111862236 A).

Regarding claim 7, Mukherjee, as modified by Lee, Zarubin, and Liu, discloses the electronic device of claim 1, and Wang-236 further teaches, in the same field of endeavor of calibration adjustment, the processor is configured to adjust the calibration information in response to determining that a difference between the first physical quantity and the second physical quantity is greater than or equal to a predetermined reference value (paras. [0092] and [0104], which recite: [0092] "…The entire image is translated and/or rotated based on the estimated correction amounts of the binocular camera's pitch angle deviation Δp, roll angle deviation Δr, height deviation ΔH, and front-to-back deviation ΔD, so that the vertical coordinates of each feature point pair in the left and right images are basically consistent (located in the same row), that is, the average value VErr of the vertical coordinate deviation values approaches 0 (less than the first threshold)."; [0104] "7) Obtaining a wheel movement distance based on the wheel movement information, obtaining a three-dimensional distance change value of the static object based on the parallax of the static object, comparing the wheel movement distance with the three-dimensional distance change value of the static object to obtain a distance deviation, and if the distance deviation is greater than a second threshold, performing a correction estimate on the second parameter set, and recalculating the three-dimensional distance of the static object based on the calibrated image. Repeating the iterative correction until the distance deviation is less than the second threshold, updating the second parameter set, and completing the self-calibration of the binocular camera parameters."; where the "distance deviation" is the difference between the first physical quantity and the second physical quantity, and performing a "correction estimate" when the "distance deviation is greater than a second threshold" is adjusting the calibration information in response to the "distance deviation" being at least greater than a predetermined reference value (e.g., a "threshold")).

Since Mukherjee and Wang-236 each disclose a distance deviation, it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Mukherjee, as modified by Lee, Zarubin, and Liu, to incorporate adjusting the calibration information in response to determining that a difference between the first physical quantity and the second physical quantity is greater than or equal to a predetermined reference value, in order to correct parameters for multiple sets of parameter data, as taught by Wang-236 (para. [0109], which recites: "…The second parameter group B is repeatedly iterated and corrected through multiple sets of data. When the error of the above nonlinear equation is less than the second threshold, the parameter correction is ended, and the current second parameter group B is updated with the new parameters to complete the self-calibration of the binocular camera. The second threshold can be set to a specific value based on actual needs and is not limited to this embodiment.").

Regarding claim 15, the claim recites similar limitations to claim 7 and is rejected for similar rationale and reasoning (see the analysis for claim 7 above).
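The iterate-until-below-threshold behavior the examiner cites from Wang-236 reduces to a simple guarded loop. The sketch below is an illustrative rendering of that control flow under hypothetical names; it is not Wang-236's algorithm.

```python
# Illustrative control flow for threshold-gated recalibration (hypothetical names):
# recalibrate only while the deviation between two independent distance estimates
# meets or exceeds a reference value, then stop.
def self_calibrate(params, measure_deviation, correct, threshold=0.1, max_iters=20):
    for _ in range(max_iters):
        deviation = measure_deviation(params)  # e.g., wheel distance vs. parallax distance
        if deviation < threshold:              # below reference value: calibration converged
            return params
        params = correct(params, deviation)    # correction estimate on the parameter set
    return params                              # bail out after max_iters to avoid looping forever
```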
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mukherjee, as modified by Lee, Zarubin, and Liu, as applied to claims 1 and 9 above, further in view of Hanniel et al. (Hanniel; US 2019/0376809 A1), and furthermore in view of Wang et al. (Wang-129; US 2022/0375129 A1).

Regarding claim 8, Mukherjee, as modified by Lee, Zarubin, and Liu, discloses the electronic device of claim 1, wherein Hanniel teaches, in the same field of endeavor of determining distance estimation physical quantities, the processor is configured to: determine an average value of a plurality of the first physical quantities and an average value of a plurality of the second physical quantities during a predetermined reference time (paras. [0268-0269], which recite: [0268] "When the physical size of the landmark is known, the distance to the landmark may also be determined based on the following equation: Z=f*W/ω, where f is the focal length, W is the size of the landmark (e.g., height or width), ω is the number of pixels when the landmark leaves the image. …A value estimating the physical size of the landmark may be calculated by averaging multiple observations at the server side. The resulting error in distance estimation may be very small. There are two sources of error that may occur when using the formula above, namely ΔW and Δω. Their contribution to the distance error is given by ΔZ=f*W*Δω/ω²+f*ΔW/ω. However, ΔW decays to zero by averaging; hence ΔZ is determined by Δω (e.g., the inaccuracy of the bounding box in the image)."; [0269] "For landmarks of unknown dimensions, the distance to the landmark may be estimated by tracking feature points on the landmark between successive frames. For example, certain features appearing on a speed limit sign may be tracked between two or more image frames. Based on these tracked features, a distance distribution per feature point may be generated. The distance estimate may be extracted from the distance distribution. For example, the most frequent distance appearing in the distance distribution may be used as the distance estimate. As another example, the average of the distance distribution may be used as the distance estimate."; where "averag[ing] multiple observations" and/or using the "average of the distance distribution" as the "distance estimate" over "two or more image frames" is determining average values for the first and second physical quantities during a predetermined reference time, since the first physical quantity and the second physical quantity include distance measurements (e.g., a distance based on a "vanishing point" and a distance based on vehicle dimensions, respectively) as disclosed by Mukherjee in claim 1 above; see also para. [0081] of Wang-236, which recites: "4) Counting the vertical coordinate deviations of the left and right images of each feature point pair; if the average value of the vertical coordinate deviations is greater than a first threshold, performing a correction estimate on at least one parameter in the first parameter group; recalibrating and comparing the parameters with the first threshold again; iterating and correcting the parameters repeatedly until the average value of the vertical coordinate deviations is less than the first threshold; and updating the first parameter group.").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Mukherjee, as modified by Lee, Zarubin, and Liu, to incorporate determining an average value of each of a plurality of the first physical quantities and a plurality of the second physical quantities during a predetermined reference time, in order to take into account a distance distribution of physical quantities over a sequence of image frames, as taught by Hanniel above.
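The error bound Hanniel states follows from first-order propagation through Z = fW/ω; the derivation below is standard calculus supplied for clarity, not quoted from Hanniel:

$$Z=\frac{fW}{\omega}\;\Rightarrow\;\Delta Z \approx \left|\frac{\partial Z}{\partial \omega}\right|\Delta\omega+\left|\frac{\partial Z}{\partial W}\right|\Delta W=\frac{fW}{\omega^{2}}\,\Delta\omega+\frac{f}{\omega}\,\Delta W,$$

so as averaging drives $\Delta W \to 0$, the residual error is set by the pixel-measurement term $fW\Delta\omega/\omega^{2}$, i.e., by the bounding-box inaccuracy.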
Where Mukherjee, as modified by Lee, Zarubin, Liu, and Hanniel, does not specifically disclose "…adjust the calibration information in response to determining that a difference between the average value of the plurality of the first physical quantities and the average value of the plurality of the second physical quantities is greater than or equal to a predetermined reference value", Wang-129 teaches, in the same field of endeavor of determining distance estimation physical quantities during a predetermined reference time, this limitation (paras. [0101-0102], which recite: [0101] "The second image and the projection image are compared to determine a reprojection error at some or all points of the images (block 610). In some embodiments, the reprojection error (e.g., the reprojection error 532) is determined (e.g., by the optimization engine 516) through a pixel-wise comparison of the projection image and the second image to determine a distance (e.g., a Euclidean distance) between each point of the projection image and a corresponding point of the second image. In some embodiments, the determined distances are combined (e.g., summed, averaged, etc.) to determine the reprojection error."; [0102] "Based on the reprojection error, at least one of the intrinsic parameters of the camera are adjusted (block 612). In some embodiments, adjusting at least one of the intrinsic parameters of the camera includes applying an optimization algorithm (e.g., gradient descent or stochastic gradient descent, among others, and optionally with backpropagation) to a cost function that relates the intrinsic parameters to the reprojection error and is configured to minimize the cost function by adjusting at least one of the intrinsic parameters. In some embodiments, the optimization is continuously applied and the intrinsic parameters (and/or other parameters) adjusted until the reprojection error satisfies a predetermined threshold. When the reprojection error satisfies the predetermined threshold, the camera is determined to be calibrated and the intrinsic parameters (and/or other parameters) can be stored (e.g., in the database 506) and/or provided to other systems or devices described herein, such as the perception system 508 for use in computer vision tasks."; where adjusting at least one of "the intrinsic parameters (and/or other parameters)… until the reprojection error satisfies a predetermined threshold" is adjusting calibration information in response to determining that a difference (e.g., the "reprojection error") between the average values of the physical quantities (e.g., "distances") is at least equal to a predetermined reference value (e.g., the "predetermined threshold")).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Mukherjee, as modified by Lee, Zarubin, Liu, and Hanniel, to incorporate adjusting the calibration information in response to determining that a difference between the average value of the plurality of the first physical quantities and the average value of the plurality of the second physical quantities is greater than or equal to a predetermined reference value, in order to correct for reprojection error between images in a sequence of images, as taught by Wang-129 above.

Regarding claim 16, the claim recites similar limitations to claim 8 and is rejected for similar rationale and reasoning (see the analysis for claim 8 above).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIA Z. YAO, whose telephone number is (571) 272-2870. The examiner can normally be reached Monday - Friday, 8:30 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.Z.Y./ Examiner, Art Unit 2666
/EMILY C TERRELL/ Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Nov 09, 2023
Application Filed
Sep 30, 2025
Non-Final Rejection — §103, §112
Jan 02, 2026
Response Filed
Feb 20, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597169
ACTIVITY PREDICTION USING PORTABLE MULTISPECTRAL LASER SPECKLE IMAGER
2y 5m to grant Granted Apr 07, 2026
Patent 12586219
Fast Kinematic Construct Method for Characterizing Anthropogenic Space Objects
2y 5m to grant Granted Mar 24, 2026
Patent 12579638
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM FOR PERFORMING DETERMINATION REGARDING DIAGNOSIS OF LESION ON BASIS OF SYNTHESIZED TWO-DIMENSIONAL IMAGE AND PRIORITY TARGET REGION
2y 5m to grant Granted Mar 17, 2026
Patent 12562063
METHOD FOR DETECTING ROAD USERS
2y 5m to grant Granted Feb 24, 2026
Patent 12561805
METHODS AND SYSTEMS FOR GENERATING DUAL-ENERGY IMAGES FROM A SINGLE-ENERGY IMAGING SYSTEM BASED ON ANATOMICAL SEGMENTATION
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 99% (+35.7%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 69 resolved cases by this examiner. Grant probability derived from career allow rate.
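As a check, the headline grant probability is just the examiner's career allow rate computed from the counts shown above:

$$\text{grant probability} = \frac{47\ \text{granted}}{69\ \text{resolved}} \approx 0.681 \approx 68\%.$$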
