Prosecution Insights
Last updated: April 18, 2026
Application No. 18/816,605

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Final Rejection (§103, §112)
Filed: Aug 27, 2024
Examiner: DANG, PHILIP
Art Unit: 2488
Tech Center: 2400 (Computer Networks)
Assignee: Canon Kabushiki Kaisha
OA Round: 2 (Final)
Grant Probability: 77% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (above average) - 363 granted / 470 resolved, +19.2% vs TC avg
Interview Lift: +33.2% (resolved cases with interview)
Avg Prosecution: 2y 10m (typical timeline), 49 currently pending
Total Applications: 519 (career history, across all art units)
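The headline figures above are simple ratios over the examiner's resolved cases. For readers who want to verify them, here is a minimal sketch; note that the implied Tech Center average is back-calculated from the stated +19.2% delta, not a separately published figure:

```python
# Sanity-check of the dashboard's headline examiner statistics.
# Granted/resolved counts come from the card above; the TC average
# is a reconstruction from the stated delta (an assumption).

granted = 363
resolved = 470

allow_rate = granted / resolved      # career allowance rate
tc_average = allow_rate - 0.192      # implied Tech Center 2400 average

print(f"Career allow rate: {allow_rate:.1%}")                 # 77.2%, shown as 77%
print(f"Implied TC average: {tc_average:.1%}")                # 58.0%
print(f"Delta vs TC average: {allow_rate - tc_average:+.1%}") # +19.2%
```

The same arithmetic applies to the statute-specific rates below: each is a per-statute allowance rate compared against the Tech Center baseline.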

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 25.5% (-14.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 470 resolved cases

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Applicant Response to Official Action

The response filed on 1/16/2026 has been entered and made of record.

Acknowledgment

Claims 3-4, 10, and 13, canceled on 1/16/2026, are acknowledged by the examiner. Claim 16, added on 1/16/2026, is acknowledged by the examiner. Claims 1-2, 5-9, and 14-15, amended on 1/16/2026, are acknowledged by the examiner.

Response to Arguments

Applicant’s arguments with respect to claims 1, 14, 15 and their dependent claims have been considered but are moot in view of the new grounds of rejection necessitated by amendments initiated by the applicant. The examiner addresses the Applicant’s main arguments as follows.

Regarding the drawing objection, the amendment filed on 1/16/2026 addresses the issue. As a result, the drawing objection is withdrawn.

Regarding the 35 U.S.C. 112(f) interpretation, the amendment filed on 1/16/2026 addresses the issue. As a result, the 35 U.S.C. 112(f) interpretation is withdrawn.

Regarding the 35 U.S.C. 112(b) rejection, the amendment filed on 1/16/2026 addresses the issue. As a result, the 35 U.S.C. 112(b) rejection is withdrawn.

Regarding the 35 U.S.C. 102 rejection, the Applicant amended the claims. As a result, the 35 U.S.C. 102 rejection is withdrawn.

Claim Rejections – 35 U.S.C. § 112

The following is a quotation of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same and shall set forth the best mode contemplated by the inventor of carrying out his invention.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of pre-AIA 35 U.S.C. 112, second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 16 is rejected under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, as containing new matter. The new claim recites “wherein the position information indicating the second target position is not displayed”. It is noted that the specification discusses some cases in which the target position is not displayed [para. 0194, 0197]; however, the specification does not disclose a parameter, such as “the position information”, that indicates that the second target position is not displayed. As a result, the claim limitation “wherein the position information indicating the second target position is not displayed” is new matter, which is not described in the application as originally filed. The new matter is required to be canceled from the claims (see MPEP 608.04).

Claim 16 is also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.
It is not clear from the specification what “the position information” is that indicates “the second target position is not displayed”. Therefore, claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.

Claim Rejections – 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C.
103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claims 1, 11-12, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Alakarhu (US Patent 11,532,170 B2) (“Alakarhu”), in view of Satoh et al. (US Patent Application Publication 2024/0089589 A1) (“Satoh”).

Regarding claim 1, Alakarhu meets the claim limitations, as follows: An information processing apparatus (The processor may be programmed to perform steps of a method of an LPR system) [Alakarhu: col. 2, line 49-50; Fig. 4] comprising: at least one memory (memory) [Alakarhu: col. 2, line 46] storing instructions (One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col. 2, line 30-34]; and at least one processor that, when executing the stored instructions (One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col. 2, line 30-34], cooperates with the at least one memory to ((a tangible, non-transitory computer-readable medium or computer memory storing executable instructions that, when executed by a processor) [Alakarhu: col. 5, line 32-34]; (a processor, which is communicatively coupled to the memory, programmed to) [Alakarhu: col.
41, line 36-37]):acquire (receive, by the processor) [Alakarhu: col. 5, line 7] position information indicating a first target position of an object (For example, the processor may receive the first image from the memory, where the first image shows the target vehicle at a first position) [Alakarhu: col. 6, line 56-58]; (The system further including a location tracking device configured to stamp the first image with a first location of the police vehicle at the first time when the first image is captured by the camera device. The system further including a clock configured to timestamp the first image upon capture by the camera device) [Alakarhu: col. 7, line 44-49]) in a captured image captured by an image apparatus (captured by the image sensor in the camera assembly) [Alakarhu: col. 23, line 21]; acquire (receive, by the processor) [Alakarhu: col. 5, line 7] position information indicating a second target position of the object ((the target vehicle at the second position) [Alakarhu: col. 7, line 4-5]; (at a second position in a subsequent image, such that the change in the relative positions of the target vehicle in the images shows relative speed) [Alakarhu: col. 24, line 7-9]; (The system where the camera device attached to the police vehicle includes a plurality of cameras arranged at different locations of the police vehicle and configured to operate in a coordinated manner to capture the first image, and where at least one of the plurality of cameras includes an unmanned aerial vehicle equipped with video capture capabilities. The system including: a wireless circuitry configured to receive a command from an external system, where the command causes the license plate recognition system to capture the image data, where the external system includes at least one of a remote command center, another police vehicle, and a body-camera device. 
Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium) [Alakarhu: col. 7, line 49-63]) from an external apparatus ((license plate is tracked and captured for multiple frames at different times) [Alakarhu: col. 28, line 63-65]; (captured by the image sensor in the camera assembly) [Alakarhu: col. 23, line 21]) controlling the imaging apparatus tracking the object in the captured image ((The system where the camera device attached to the police vehicle includes a plurality of cameras arranged at different locations of the police vehicle and configured to operate in a coordinated manner to capture the first image, and where at least one of the plurality of cameras includes an unmanned aerial vehicle equipped with video capture capabilities. The system including: a wireless circuitry configured to receive a command from an external system, where the command causes the license plate recognition system to capture the image data, where the external system includes at least one of a remote command center, another police vehicle, and a body-camera device. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium) [Alakarhu: col. 7, line 49-63]; (the processor may receive, from the memory, a first long-exposure image and a first short-exposure image. The first long-exposure image may be captured with a first long-exposure setting of the camera device, and the first short-exposure image may be captured with a first short-exposure setting of the camera device.
The processor of the LPR system may also detect a first license plate and a second license plate in the first long-exposure image, where the first license plate is in a first portion of the field of view and the second license plate is in a second portion of the field of view, and where the first portion of the field of view is different than the second portion of the field of view.) [Alakarhu: col. 2, line 50-62; Fig. 4]; (One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col. 2, line 30-34]); transmit the position information indicating the first target position to the external apparatus ((The LPR system may transmit this license plate information and/or other information to other systems for processing) [Alakarhu: col. 29, line 5-7]; (For example, the GPS unit 212 may detect if the subject vehicle is traveling in a city, suburb, or rural area, and adjust the settings in adherence) [Alakarhu: col. 16, line 46-49]; (In addition to location, the GPS unit or other component in the camera apparatus may timestamp the capture of the image. Location and time data may then be embedded, or otherwise securely integrated, into the image ( e.g., metadata of the image) to authenticate the capture of the photograph. Once the image is securely stamped with location and date/time, the image may, in some example, be securely transmitted to a cloud server for storage. In some examples, the image may be stored in an evidence management system provided as a cloud-based service.) [Alakarhu: col. 16, line 24-34]); and cause (One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col. 
2, line 30-34] a display device to display a mark indicating the first target position (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19; Figs. 6A-10], wherein the mark is displayed regardless of a distance between the first target position and the second target position ((The system further including a location tracking device configured to stamp the first image with a first location of the police vehicle at the first time when the first image is captured by the camera device) [Alakarhu: col. 7, line 44-47]; (the LPR system may further comprise a location tracking device coupled to the camera device. The processor of the LPR system may be programmed to stamp an image with a location of the camera device at the time when the image is captured by the camera device. In addition, the LPR system may also comprise a clock mechanism. The processor of the LPR system may be programmed to timestamp an image upon capture by the camera device. At least one benefit of the aforementioned metadata associated with the captured and processed image is that evidentiary requirements in a legal proceeding or other investigation may be satisfied. Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format. In addition, the tracking of license plates can also produce other useful information, e.g., how fast the cars are moving) [Alakarhu: col. 34, line 5-21]; (Next, a processor in the LPR system may use one or more object tracking libraries to detect what appears to be the same license plate in one or more subsequently captured images.
In one example, the LPR system may use a boundary of (e.g., bounding box around) the license plate to track its position across frames. FIG. 10 illustrates how an edge detection module in the object tracking library may detect and track a parallelogram boundary shape 1004 around a license plate. In step 1206 of FIG. 12, the LPR system detects one or more license plates in the stream of frames with long-exposure and/or short-exposure) [Alakarhu: col. 22, line 1-11; Figs. 6B-6C] – Note: Alakarhu discloses that the marks, such as a tag or a bounding box, are displayed without considering the distance); (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19; Figs. 6A-6C] – Note: Please see more details in Figs. 6A-6C); (the micro-controller 404 may receive raw input and calculate the speed delta value and distance value itself) [Alakarhu: col. 37, line 5-6]), and is displayed in a first form or a second form different from the first form according to the distance ((FIG. 6B is a long-exposure image and short-exposure image of a field of view of a street with following traffic and oncoming traffic in accordance with one or more example embodiments. FIG. 6C is a subsequent long-exposure image and short exposure image of a field of view of a street with following traffic and oncoming traffic in accordance with one or more example embodiments. FIG. 7 shows photographs of license plates as originally captured and after aligned and filtered in accordance with one or more example embodiments. FIG. 8 is a comparison of a photograph of an unfiltered image and a filtered image in accordance with one or more example embodiments. FIG. 9A, FIG. 9B, and FIG.
9C show photographs of license plates as originally captured and after aligned and filtered across multiple frames in accordance with one or more example embodiments) [Alakarhu: col. 11, line 25-42; Figs. 6B-10] – Note: Figs. 6B-10 display at least two different forms of the vehicle or its license plate); (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19; Figs. 6A-6C] – Note: Please see more details in Figs. 6A-6C)). Alakarhu does not explicitly disclose the following claim limitations (emphasis added): a display device. However, in the same field of endeavor, Satoh further discloses the claim limitations and the deficient claim limitations as follows: a display device (the display unit) [Satoh: para. 0042; Figs. 1, 8]. display information indicating a target position in a first form or a second form different from the first form (the controller displays the indicator in a first form when in the first mode, and displays the indicator in a second form different from the first form when in the second mode) [Satoh: claim 10]; wherein the mark is displayed regardless of a distance between the first target position and the second target position ((An example of the guide indicator generated by the correction vector C which is input to the display control unit 180 is shown on the right side of FIG. 3. The guide indicator is composed of, for example, a circular outer edge whose radius is the predetermined value "R" of the shake correction condition, and a zoom position indicator indicated by a black square. On the right side of FIG. 3, for convenience, the center position of the outer edge of the guide indicator is indicated by a gray circle mark) [Satoh: para. 0073, Figs. 3-6]; (On the right side of FIG.
3, for convenience, the center position of the outer edge of the guide indicator is indicated by a gray circle mark) [Satoh: para. 0073, Figs. 3-6]; (Furthermore, the display color of the zoom position indicator may be changed as the zoom position indicator moves away from the center of the outer edge of the guide indicator) [Satoh: para. 0180, Figs. 7, 11-12]; (Next, a processor in the LPR system may use one or more object tracking libraries to detect what appears to be the same license plate in one or more subsequently captured images. In one example, the LPR system may use a boundary of (e.g., bounding box around) the license plate to track its position across frames. FIG. 10 illustrates how an edge detection module in the object tracking library may detect and track a parallelogram boundary shape 1004 around a license plate. In step 1206 of FIG. 12, the LPR system detects one or more license plates in the stream of frames with long-exposure and/or short-exposure) [Alakarhu: col. 22, line 1-11; Figs. 6B-6C] – Note: Alakarhu discloses that the marks, such as a tag or a bounding box, are displayed without considering the distance)), and is displayed in a first form or a second form different from the first form (wherein the plurality of modes comprise at least a first mode in which the image blur correction is performed by a first process, and a second mode in which the image blur correction is performed by a second process different from the first process or in which the image blur correction is not performed, and the controller displays the indicator in a first form when in the first mode, and displays the indicator in a second form different from the first form when in the second mode) [Satoh: claim 10]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu and Satoh to program the system to implement Satoh’s method.
Therefore, the combination of Alakarhu and Satoh will enable the system to improve image resolution [Satoh: para. 0028].

Regarding claim 11, Alakarhu meets the claim limitations as set forth in claim 1. Alakarhu further meets the claim limitations as follows: wherein the first form and the second form ((The method where the generating by micro-controller of the illumination command includes: outputting a medium value for the illumination command when the relative speed is below a threshold speed and the approximate distance is above a threshold distance) [Alakarhu: col. 10, line 20-23]; (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19]; (a controller communicatively coupled to the processor and camera device, to adjust the long-exposure setting of the camera device by a first amount and to adjust the short exposure setting of the camera device by a second amount, where the long-exposure setting and short-exposure setting include at least one of a shutter speed setting, ISO setting, zoom setting, and exposure setting of the camera device; capture, using the image sensor, a second long-exposure image with the adjusted long-exposure setting and a second short-exposure image with the adjusted short-exposure setting; detect the license plate in the second long-exposure image and the second short-exposure image) [Alakarhu: col. 5, line 55-67]) are different from each other in shape ((Next, a processor in the LPR system may use one or more object tracking libraries to detect what appears to be the same license plate in one or more subsequently captured images. In one example, the LPR system may use a boundary of (e.g., bounding box around) the license plate to track its position across frames. FIG.
10 illustrates how an edge detection module in the object tracking library may detect and track a parallelogram boundary shape 1004 around a license plate. In step 1206 of FIG. 12, the LPR system detects one or more license plates in the stream of frames with long-exposure and/or short-exposure) [Alakarhu: col. 22, line 1-11; Figs. 6B-6C]; (For example, characteristics of a license plate may include, but are not limited to, detection of a rectangular shape) [Alakarhu: col. 22, line 30-32]; (the shape and size of the boundary of the moving license plate changes dramatically as it passes the camera car. The LPR system is able to detect, track, and then transform the license plate image by using the fact that the shape of the license plate is known and predefined.) [Alakarhu: col. 30, line 4-9]). In the same field of endeavor, Satoh further discloses the deficient claim limitations as follows: the first form and the second form (wherein the plurality of modes comprise at least a first mode in which the image blur correction is performed by a first process, and a second mode in which the image blur correction is performed by a second process different from the first process or in which the image blur correction is not performed, and the controller displays the indicator in a first form when in the first mode, and displays the indicator in a second form different from the first form when in the second mode) [Satoh: claim 10] are different from each other in shape (Moreover, the shape of the zoom position indicator is not limited to a square. The shape of the zoom position indicator may be, for example, a circle, a triangle, or an arrow starting from the center of the guide indicator) [Satoh: para. 0179]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu with Satoh to program the system to implement Satoh’s method.
Therefore, the combination of Alakarhu with Satoh will enable the system to improve image resolution [Satoh: para. 0028].

Regarding claim 12, Alakarhu meets the claim limitations as set forth in claim 1. Alakarhu further meets the claim limitations as follows: wherein the first form and the second form ((The method where the generating by micro-controller of the illumination command includes: outputting a medium value for the illumination command when the relative speed is below a threshold speed and the approximate distance is above a threshold distance) [Alakarhu: col. 10, line 20-23]; (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19]; (a controller communicatively coupled to the processor and camera device, to adjust the long-exposure setting of the camera device by a first amount and to adjust the short exposure setting of the camera device by a second amount, where the long-exposure setting and short-exposure setting include at least one of a shutter speed setting, ISO setting, zoom setting, and exposure setting of the camera device; capture, using the image sensor, a second long-exposure image with the adjusted long-exposure setting and a second short-exposure image with the adjusted short-exposure setting; detect the license plate in the second long-exposure image and the second short-exposure image) [Alakarhu: col. 5, line 55-67]) are different from each other in color ((Next, a processor in the LPR system may use one or more object tracking libraries to detect what appears to be the same license plate in one or more subsequently captured images. In one example, the LPR system may use a boundary of (e.g., bounding box around) the license plate to track its position across frames. FIG.
10 illustrates how an edge detection module in the object tracking library may detect and track a parallelogram boundary shape 1004 around a license plate. In step 1206 of FIG. 12, the LPR system detects one or more license plates in the stream of frames with long-exposure and/or short-exposure) [Alakarhu: col. 22, line 1-11; Figs. 6B-6C]; (other characteristics (e.g., text color, background color, typeface, and the like) of the characters in the license plate, and other factors) [Alakarhu: col. 3, line 67 – col. 4, line 3] – Note: Alakarhu teaches that the color can be used as an indicator). In the same field of endeavor Satoh further discloses the deficient claim limitations as follows: the first form and the second form (wherein the plurality of modes comprise at least a first mode in which the image blur correction is performed by a first process, and a second mode in which the image blur correction is performed by a second process different from the first process or in which the image blur correction is not performed, and the controller displays the indicator in a first form when in the first mode, and displays the indicator in a second form different from the first form when in the second mode) [Satoh: claim 10] are different from each other in color ((Furthermore, the display color of the zoom position indicator may be changed as the zoom position indicator moves away from the center of the outer edge of the guide indicator. For example, the display color of the zoom position indicator may be "blue" near the center, and may become "red" as the zoom position indicator gets closer to the outer edge. Further, the display color of the outer edge of the guide indicator may be changed) [Satoh: para. 0180, Figs. 3-6] (An example of the guide indicator generated by the correction vector C which is input to the display control unit 180 is shown on the right side of FIG. 3. 
The guide indicator is composed of, for example, a circular outer edge whose radius is the predetermined value "R" of the shake correction condition, and a zoom position indicator indicated by a black square. On the right side of FIG. 3, for convenience, the center position of the outer edge of the guide indicator is indicated by a gray circle mark) [Satoh: para. 0073, Figs. 3-6]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu with Satoh to program the system to implement Satoh’s method. Therefore, the combination of Alakarhu with Satoh will enable the system to improve image resolution [Satoh: para. 0028].

Regarding claim 14, Alakarhu meets the claim limitations, as follows: An information processing method (a method) [Alakarhu: col. 2, line 49-50; Fig. 4] for an information processing apparatus (The processor may be programmed to perform steps of a method of an LPR system) [Alakarhu: col. 2, line 49-50; Fig. 4], the method (a method) [Alakarhu: col. 2, line 49-50; Fig. 4] comprising: acquiring (receive, by the processor) [Alakarhu: col. 5, line 7] position information indicating a first target position of an object (For example, the processor may receive the first image from the memory, where the first image shows the target vehicle at a first position) [Alakarhu: col. 6, line 56-58]; (The system further including a location tracking device configured to stamp the first image with a first location of the police vehicle at the first time when the first image is captured by the camera device. The system further including a clock configured to timestamp the first image upon capture by the camera device) [Alakarhu: col. 7, line 44-49]) in a captured image captured by an image apparatus (captured by the image sensor in the camera assembly) [Alakarhu: col. 23, line 21]; acquire (receive, by the processor) [Alakarhu: col.
5, line 7] position information indicating a first target position of an object (For example, the processor may receive the first image from the memory, where the first image shows the target vehicle at a first position) [Alakarhu: col. 6, line 56-58]; (The system further including a location tracking device configured to stamp the first image with a first location of the police vehicle at the first time when the first image is captured by the camera device. The system further including a clock configured to timestamp the first image upon capture by the camera device) [Alakarhu: col. 7, line 44-49]) in a captured image acquired by an image apparatus (captured by the image sensor in the camera assembly) [Alakarhu: col. 23, line 21]; acquiring (receive, by the processor) [Alakarhu: col. 5, line 7] position information indicating a second target position of the object ((the target vehicle at the second position) [Alakarhu: col. 7, line 4-5]; (at a second position in a subsequent image, such that the change in the relative positions of the target vehicle in the images shows relative speed) [Alakarhu: col. 24, line 7-9]; (The system where the camera device attached to the police vehicle includes a plurality of cameras arranged at different locations of the police vehicle and configured to operate in a coordinated manner to capture the first image, and where at least one of the plurality of cameras includes an unmanned aerial vehicle equipped with video capture capabilities. The system including: a wireless circuitry configured to receive a command from an external system, where the command causes the license plate recognition system to capture the image data, where the external system includes at least one of a remote command center, another police vehicle, and a body-camera device. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium) [Alakarhu: col. 
7, line 49-63]) from an external apparatus ((license plate is tracked and captured for multiple frames at different times) [Alakarhu: col. 28, line 63-65]; (captured by the image sensor in the camera assembly) [Alakarhu: col. 23, line 21]) controlling the imaging apparatus tracking the object in the captured image ((The system where the camera device attached to the police vehicle includes a plurality of cameras arranged at different locations of the police vehicle and configured to operate in a coordinated manner to capture the first image, and where at least one of the plurality of cameras includes an unmanned aerial vehicle equipped with video capture capabilities. The system including: a wireless circuitry configured to receive a command from an external system, where the command causes the license plate recognition system to capture the image data, where the external system includes at least one of a remote command center, another police vehicle, and a body-camera device. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium) [Alakarhu: col. 7, line 49-63]; (the processor may receive, from the memory, a first long-exposure image and a first short-exposure image. The first long-exposure image may be captured with a first long-exposure setting of the camera device, and the first short-exposure image may be captured with a first short-exposure setting of the camera device. The processor of the LPR system may also detect a first license plate and a second license plate in the first long-exposure image, where the first license plate is in a first portion of the field of view and the second license plate is in a second portion of the field of view, and where the first portion of the field of view is different than the second portion of the field of view.) [Alakarhu: col. 2, line 50-62; Fig.
4]; (One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col. 2, line 30-34]); transmitting the position information indicating the first target position to the external apparatus ((The LPR system may transmit this license plate information and/or other information to other systems for processing) [Alakarhu: col. 29, line 5-7]; (For example, the GPS unit 212 may detect if the subject vehicle is traveling in a city, suburb, or rural area, and adjust the settings in adherence) [Alakarhu: col. 16, line 46-49]; (In addition to location, the GPS unit or other component in the camera apparatus may timestamp the capture of the image. Location and time data may then be embedded, or otherwise securely integrated, into the image (e.g., metadata of the image) to authenticate the capture of the photograph. Once the image is securely stamped with location and date/time, the image may, in some example, be securely transmitted to a cloud server for storage. In some examples, the image may be stored in an evidence management system provided as a cloud-based service.) [Alakarhu: col. 16, line 24-34]); and causing (One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col. 2, line 30-34] a display device to display a mark indicating the first target position (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19; Figs.
6A-10], wherein the mark is displayed regardless of a distance between the first target position and the second target position ((The system further including a location tracking device configured to stamp the first image with a first location of the police vehicle at the first time when the first image is captured by the camera device) [Alakarhu: col. 7, line 44-47]; (the LPR system may further comprise a location tracking device coupled to the camera device. The processor of the LPR system may be programmed to stamp an image with a location of the camera device at the time when the image is captured by the camera device. In addition, the LPR system may also comprise a clock mechanism. The processor of the LPR system may be programmed to timestamp an image upon capture by the camera device. At least one benefit of the aforementioned metadata associated with the captured and processed image is that evidentiary requirements in a legal proceeding or other investigation may be satisfied. Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format. In addition, the tracking of license plates can also produce other useful information, e.g., how fast the cars are moving) [Alakarhu: col. 34, line 5-21]; (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19; Figs. 6A-6C] – Note: Please see more details in Figs. 6A-6C); (the micro-controller 404 may receive raw input and calculate the speed delta value and distance value itself) [Alakarhu: col. 37, line 5-6]; (Next, a processor in the LPR system may use one or more object tracking libraries to detect what appears to be the same license plate in one or more subsequently captured images.
In one example, the LPR system may use a boundary of (e.g., bounding box around) the license plate to track its position across frames. FIG. 10 illustrates how an edge detection module in the object tracking library may detect and track a parallelogram boundary shape 1004 around a license plate. In step 1206 of FIG. 12, the LPR system detects one or more license plates in the stream of frames with long-exposure and/or short-exposure) [Alakarhu: col. 22, line 1-11; Figs. 6B-6C] – Note: Alakarhu discloses that marks such as a tag or a bounding box are displayed without considering the distance)), and is displayed in a first form or a second form different from the first form according to the distance ((FIG. 6B is a long-exposure image and short-exposure image of a field of view of a street with following traffic and oncoming traffic in accordance with one or more example embodiments. FIG. 6C is a subsequent long-exposure image and short exposure image of a field of view of a street with following traffic and oncoming traffic in accordance with one or more example embodiments. FIG. 7 shows photographs of license plates as originally captured and after aligned and filtered in accordance with one or more example embodiments. FIG. 8 is a comparison of a photograph of an unfiltered image and a filtered image in accordance with one or more example embodiments. FIG. 9A, FIG. 9B, and FIG. 9C show photographs of license plates as originally captured and after aligned and filtered across multiple frames in accordance with one or more example embodiments) [Alakarhu: col. 11, line 25-42; Figs. 6B-10] – Note: Figs. 6B-10 display at least two different forms of the vehicle or its license plate); (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col.
34, line 16-19; Figs. 6A-6C] – Note: Please see more details in Figs. 6A-6C)). Alakarhu does not explicitly disclose the following claim limitations (Emphasis added): a display device. However, in the same field of endeavor, Satoh further discloses the claim limitations and the deficient claim limitations as follows: a display device (the display unit) [Satoh: para. 0042; Figs. 1, 8]; display information indicating a target position in a first form or a second form different from the first form (the controller displays the indicator in a first form when in the first mode, and displays the indicator in a second form different from the first form when in the second mode) [Satoh: claim 10]; wherein the mark is displayed regardless of a distance between the first target position and the second target position ((An example of the guide indicator generated by the correction vector C which is input to the display control unit 180 is shown on the right side of FIG. 3. The guide indicator is composed of, for example, a circular outer edge whose radius is the predetermined value "R" of the shake correction condition, and a zoom position indicator indicated by a black square. On the right side of FIG. 3, for convenience, the center position of the outer edge of the guide indicator is indicated by a gray circle mark) [Satoh: para. 0073, Figs. 3-6]; (On the right side of FIG. 3, for convenience, the center position of the outer edge of the guide indicator is indicated by a gray circle mark) [Satoh: para. 0073, Figs. 3-6]; (Furthermore, the display color of the zoom position indicator may be changed as the zoom position indicator moves away from the center of the outer edge of the guide indicator) [Satoh: para. 0180, Figs.
7, 11-12]), and is displayed in a first form or a second form different from the first form (wherein the plurality of modes comprise at least a first mode in which the image blur correction is performed by a first process, and a second mode in which the image blur correction is performed by a second process different from the first process or in which the image blur correction is not performed, and the controller displays the indicator in a first form when in the first mode, and displays the indicator in a second form different from the first form when in the second mode) [Satoh: claim 10]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu and Satoh to implement Satoh’s method. Therefore, the combination of Alakarhu and Satoh will enable the system to improve image resolution [Satoh: para. 0028]. Regarding claim 15, Alakarhu meets the claim limitations, as follows: A non-transitory computer-readable storage medium (memory) [Alakarhu: col. 2, line 47; Fig. 4] storing computer-executable instructions that, when executed by a computer, cause the computer to ((One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col. 2, line 30-34]; (a processor, which is communicatively coupled to the memory, programmed to) [Alakarhu: col. 41, line 36-37]): acquire (receive, by the processor) [Alakarhu: col. 5, line 7] position information indicating a first target position of an object (For example, the processor may receive the first image from the memory, where the first image shows the target vehicle at a first position) [Alakarhu: col.
6, line 56-58]; (The system further including a location tracking device configured to stamp the first image with a first location of the police vehicle at the first time when the first image is captured by the camera device. The system further including a clock configured to timestamp the first image upon capture by the camera device) [Alakarhu: col. 7, line 44-49]) in a captured image acquired by an image apparatus (captured by the image sensor in the camera assembly) [Alakarhu: col. 23, line 21]; acquire (receive, by the processor) [Alakarhu: col. 5, line 7] position information indicating a second target position of the object ((the target vehicle at the second position) [Alakarhu: col. 7, line 4-5]; (at a second position in a subsequent image, such that the change in the relative positions of the target vehicle in the images shows relative speed) [Alakarhu: col. 24, line 7-9]; (The system where the camera device attached to the police vehicle includes a plurality of cameras arranged at different locations of the police vehicle and configured to operate in a coordinated manner to capture the first image, and where at least one of the plurality of cameras includes an unmanned aerial vehicle equipped with video capture capabilities. The system including: a wireless circuitry configured to receive a command from an external system, where the command causes the license plate recognition system to capture the image data, where the external system includes at least one of a remote command center, another police vehicle, and a body-camera device. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium) [Alakarhu: col. 7, line 49-63]) from an external apparatus ((license plate is tracked and captured for multiple frames at different times) [Alakarhu: col. 28, line 63-65]; (captured by the image sensor in the camera assembly) [Alakarhu: col. 
23, line 21]) controlling the imaging apparatus tracking the object in the captured image ((The system where the camera device attached to the police vehicle includes a plurality of cameras arranged at different locations of the police vehicle and configured to operate in a coordinated manner to capture the first image, and where at least one of the plurality of cameras includes an unmanned aerial vehicle equipped with video capture capabilities. The system including: a wireless circuitry configured to receive a command from an external system, where the command causes the license plate recognition system to capture the image data, where the external system includes at least one of a remote command center, another police vehicle, and a body-camera device. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium) [Alakarhu: col. 7, line 49-63]; (the processor may receive, from the memory, a first long-exposure image and a first short-exposure image. The first long-exposure image may be captured with a first long-exposure setting of the camera device, and the first short-exposure image may be captured with a first short-exposure setting of the camera device. The processor of the LPR system may also detect a first license plate and a second license plate in the first long-exposure image, where the first license plate is in a first portion of the field of view and the second license plate is in a second portion of the field of view, and where the first portion of the field of view is different than the second portion of the field of view.) [Alakarhu: col. 2, line 50-62; Fig. 4]; (One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col.
2, line 30-34]); transmit the position information indicating the first target position to the external apparatus ((The LPR system may transmit this license plate information and/or other information to other systems for processing) [Alakarhu: col. 29, line 5-7]; (For example, the GPS unit 212 may detect if the subject vehicle is traveling in a city, suburb, or rural area, and adjust the settings in adherence) [Alakarhu: col. 16, line 46-49]; (In addition to location, the GPS unit or other component in the camera apparatus may timestamp the capture of the image. Location and time data may then be embedded, or otherwise securely integrated, into the image (e.g., metadata of the image) to authenticate the capture of the photograph. Once the image is securely stamped with location and date/time, the image may, in some example, be securely transmitted to a cloud server for storage. In some examples, the image may be stored in an evidence management system provided as a cloud-based service.) [Alakarhu: col. 16, line 24-34]); and cause (One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col. 2, line 30-34] a display device to display a mark indicating the first target position (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19; Figs. 6A-10], wherein the mark is displayed regardless of a distance between the first target position and the second target position ((The system further including a location tracking device configured to stamp the first image with a first location of the police vehicle at the first time when the first image is captured by the camera device) [Alakarhu: col.
7, line 44-47]; (the LPR system may further comprise a location tracking device coupled to the camera device. The processor of the LPR system may be programmed to stamp an image with a location of the camera device at the time when the image is captured by the camera device. In addition, the LPR system may also comprise a clock mechanism. The processor of the LPR system may be programmed to timestamp an image upon capture by the camera device. At least one benefit of the aforementioned metadata associated with the captured and processed image is that evidentiary requirements in a legal proceeding or other investigation may be satisfied. Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format. In addition, the tracking of license plates can also produce other useful information, e.g., how fast the cars are moving) [Alakarhu: col. 34, line 5-21]; (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19; Figs. 6A-6C] – Note: Please see more details in Figs. 6A-6C); (the micro-controller 404 may receive raw input and calculate the speed delta value and distance value itself) [Alakarhu: col. 37, line 5-6]; (Next, a processor in the LPR system may use one or more object tracking libraries to detect what appears to be the same license plate in one or more subsequently captured images. In one example, the LPR system may use a boundary of (e.g., bounding box around) the license plate to track its position across frames. FIG. 10 illustrates how an edge detection module in the object tracking library may detect and track a parallelogram boundary shape 1004 around a license plate. In step 1206 of FIG.
12, the LPR system detects one or more license plates in the stream of frames with long-exposure and/or short-exposure) [Alakarhu: col. 22, line 1-11; Figs. 6B-6C] – Note: Alakarhu discloses that marks such as a tag or a bounding box are displayed without considering the distance)), and is displayed in a first form or a second form different from the first form according to the distance ((FIG. 6B is a long-exposure image and short-exposure image of a field of view of a street with following traffic and oncoming traffic in accordance with one or more example embodiments. FIG. 6C is a subsequent long-exposure image and short exposure image of a field of view of a street with following traffic and oncoming traffic in accordance with one or more example embodiments. FIG. 7 shows photographs of license plates as originally captured and after aligned and filtered in accordance with one or more example embodiments. FIG. 8 is a comparison of a photograph of an unfiltered image and a filtered image in accordance with one or more example embodiments. FIG. 9A, FIG. 9B, and FIG. 9C show photographs of license plates as originally captured and after aligned and filtered across multiple frames in accordance with one or more example embodiments) [Alakarhu: col. 11, line 25-42; Figs. 6B-10] – Note: Figs. 6B-10 display at least two different forms of the vehicle or its license plate); (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19; Figs. 6A-6C] – Note: Please see more details in Figs. 6A-6C)). Alakarhu does not explicitly disclose the following claim limitations (Emphasis added): a display device. However, in the same field of endeavor, Satoh further discloses the claim limitations and the deficient claim limitations as follows: a display device (the display unit) [Satoh: para.
0042; Figs. 1, 8]; display information indicating a target position in a first form or a second form different from the first form (the controller displays the indicator in a first form when in the first mode, and displays the indicator in a second form different from the first form when in the second mode) [Satoh: claim 10]; wherein the mark is displayed regardless of a distance between the first target position and the second target position ((An example of the guide indicator generated by the correction vector C which is input to the display control unit 180 is shown on the right side of FIG. 3. The guide indicator is composed of, for example, a circular outer edge whose radius is the predetermined value "R" of the shake correction condition, and a zoom position indicator indicated by a black square. On the right side of FIG. 3, for convenience, the center position of the outer edge of the guide indicator is indicated by a gray circle mark) [Satoh: para. 0073, Figs. 3-6]; (On the right side of FIG. 3, for convenience, the center position of the outer edge of the guide indicator is indicated by a gray circle mark) [Satoh: para. 0073, Figs. 3-6]; (Furthermore, the display color of the zoom position indicator may be changed as the zoom position indicator moves away from the center of the outer edge of the guide indicator) [Satoh: para. 0180, Figs. 7, 11-12]), and is displayed in a first form or a second form different from the first form (wherein the plurality of modes comprise at least a first mode in which the image blur correction is performed by a first process, and a second mode in which the image blur correction is performed by a second process different from the first process or in which the image blur correction is not performed, and the controller displays the indicator in a first form when in the first mode, and displays the indicator in a second form different from the first form when in the second mode) [Satoh: claim 10].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu and Satoh to implement Satoh’s method. Therefore, the combination of Alakarhu and Satoh will enable the system to improve image resolution [Satoh: para. 0028]. Claims 2 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Alakarhu (US Patent 11,532,170 B2), (“Alakarhu”), in view of Satoh et al. (US Patent Application Publication 2024/0089589 A1), (“Satoh”), in view of Oshima et al. (US Patent 10,593,063 B2), (“Oshima”). Regarding claim 2, Alakarhu meets the claim limitations as set forth in claim 1. Alakarhu further meets the claim limitations as follows. Wherein, in a case where the distance (the micro-controller 404 may receive raw input and calculate the speed delta value and distance value itself) [Alakarhu: col. 37, line 5-6]) is greater than a predetermined threshold ((compared to the subject vehicle but with maximum recognition distance) [Alakarhu: col. 13, line 38-39]; (the approximate distance is above a threshold distance.) [Alakarhu: col. 10, line 12-13]), the display control (a micro-controller) [Alakarhu: col. 35, line 6], the mark is displayed in the first form ((FIG. 6B is a long-exposure image and short-exposure image of a field of view of a street with following traffic and oncoming traffic in accordance with one or more example embodiments. FIG. 6C is a subsequent long-exposure image and short exposure image of a field of view of a street with following traffic and oncoming traffic in accordance with one or more example embodiments. FIG. 7 shows photographs of license plates as originally captured and after aligned and filtered in accordance with one or more example embodiments. FIG. 8 is a comparison of a photograph of an unfiltered image and a filtered image in accordance with one or more example embodiments. FIG. 9A, FIG. 9B, and FIG.
9C show photographs of license plates as originally captured and after aligned and filtered across multiple frames in accordance with one or more example embodiments) [Alakarhu: col. 11, line 25-42; Figs. 6B-10] – Note: Please see the first form in Fig. 6B); (The method where the generating by micro-controller of the illumination command includes: outputting a medium value for the illumination command when the relative speed is below a threshold speed and the approximate distance is above a threshold distance) [Alakarhu: col. 10, line 20-23]; (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19]; (a controller communicatively coupled to the processor and camera device, to adjust the long-exposure setting of the camera device by a first amount and to adjust the short exposure setting of the camera device by a second amount, where the long-exposure setting and short-exposure setting include at least one of a shutter speed setting, ISO setting, zoom setting, and exposure setting of the camera device; capture, using the image sensor, a second long-exposure image with the adjusted long-exposure setting and a second short-exposure image with the adjusted short-exposure setting; detect the license plate in the second long-exposure image and the second short-exposure image) [Alakarhu: col. 5, line 55-67]), and wherein, in a case where the distance is less than or equal to the predetermined threshold ((the target vehicle is at a short distance) [Alakarhu: col. 14, line 4-5] ; (the approximate distance is below a threshold distance) [Alakarhu: col. 
10, line 22-23]), the mark is displayed in the second form ((The method may also include outputting a low value for the illumination command when the relative speed is below a threshold speed and the approximate distance is below a threshold distance) [Alakarhu: col. 10, line 20-23]; (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19]; (a controller communicatively coupled to the processor and camera device, to adjust the long-exposure setting of the camera device by a first amount and to adjust the short exposure setting of the camera device by a second amount, where the long-exposure setting and short-exposure setting include at least one of a shutter speed setting, ISO setting, zoom setting, and exposure setting of the camera device; capture, using the image sensor, a second long-exposure image with the adjusted long-exposure setting and a second short-exposure image with the adjusted short-exposure setting; detect the license plate in the second long-exposure image and the second short-exposure image) [Alakarhu: col. 5, line 55-67]). In the same field of endeavor Oshima further discloses the claim limitations as follows: wherein, in a case where a distance is greater than a predetermined threshold (the camera-side target information correction unit calculates a difference distance indicating a distance between the position of the tracking target specified on the basis of the first target information and the position of the tracking target specified on the basis of the second target information. 
Preferably, the camera-side target information correction unit corrects the first target information in a case in which the difference distance is equal to or greater than a first threshold value and does not correct the first target information in a case in which the difference distance is less than the first threshold value) [Oshima: col. 3, line 51-61]; wherein, in a case where the distance is less than or equal to the threshold (the camera-side target information correction unit calculates a difference distance indicating a distance between the position of the tracking target specified on the basis of the first target information and the position of the tracking target specified on the basis of the second target information. Preferably, the camera-side target information correction unit does not correct the first target information in a case in which the difference distance is less than the first threshold value) [Oshima: col. 3, line 51-61]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu with Oshima to implement Oshima’s method. Therefore, the combination of Alakarhu with Oshima will enable the system to improve the accuracy of the first target information and to reduce the time required for the initial operation for detecting the tracking target [Oshima: col. 3, line 3-5].
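For illustration only, the threshold-based display behavior recited in claim 2 (the mark shown in a first form when the distance between the two target positions exceeds a predetermined threshold, and in a second form otherwise) can be sketched as follows. The function and names are hypothetical and appear in no cited reference.

```python
import math

def mark_form(first_pos, second_pos, threshold):
    """Select a display form for the target-position mark.

    Returns "first form" when the Euclidean distance between the
    first and second target positions is greater than the threshold,
    and "second form" when it is less than or equal to the threshold,
    mirroring the two cases recited in claim 2.
    """
    distance = math.hypot(first_pos[0] - second_pos[0],
                          first_pos[1] - second_pos[1])
    return "first form" if distance > threshold else "second form"
```

The sketch only encodes the claimed conditional; it says nothing about what the two forms look like (e.g., color or shape changes), which the cited Satoh passages address.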
Moreover, in the same field of endeavor, Satoh further discloses the claim limitations as follows: in the first form (wherein the plurality of modes comprise at least a first mode in which the image blur correction is performed by a first process, and a second mode in which the image blur correction is performed by a second process different from the first process or in which the image blur correction is not performed, and the controller displays the indicator in a first form when in the first mode, and displays the indicator in a second form different from the first form when in the second mode) [Satoh: claim 10], and in the second form (wherein the plurality of modes comprise at least a first mode in which the image blur correction is performed by a first process, and a second mode in which the image blur correction is performed by a second process different from the first process or in which the image blur correction is not performed, and the controller displays the indicator in a first form when in the first mode, and displays the indicator in a second form different from the first form when in the second mode) [Satoh: claim 10]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu and Oshima with Satoh to implement Satoh’s method. Therefore, the combination of Alakarhu and Oshima with Satoh will enable the system to improve image resolution [Satoh: para. 0028]. Regarding claim 6, Alakarhu meets the claim limitations as set forth in claim 1. Alakarhu further meets the claim limitations as follows.
when executing the stored instructions, the at least one processor further cooperates with the at least one memory to (One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col. 2, line 30-34] receive an operation of a user ((The method also includes based on the received inputs) [Alakarhu: col. 9, line 59-60]; (operator input) [Alakarhu: col. 20, line 46-47]; (can be turned on and off by the operator as desired) [Alakarhu: col. 20, line 54]), and wherein the mark is moved ((wherein the first image shows the license plate at a first position) [Alakarhu: col. 41, line 38-49]; (motion include the way objects move across time to derive deeper meaning from the scene) [Alakarhu: col. 18, line 63-65]; (Next, a processor in the LPR system may use one or more object tracking libraries to detect what appears to be the same license plate in one or more subsequently captured images. In one example, the LPR system may use a boundary of (e.g., bounding box around) the license plate to track its position across frames. FIG. 10 illustrates how an edge detection module in the object tracking library may detect and track a parallelogram boundary shape 1004 around a license plate. In step 1206 of FIG. 12, the LPR system detects one or more license plates in the stream of frames with long-exposure and/or short-exposure) [Alakarhu: col. 22, line 1-11; Figs. 6B-6C] – Note: The bounding box in Figs. 6B and 6C can be considered as a mark), based on the operation of the user ((The method also includes based on the received inputs) [Alakarhu: col. 9, line 59-60]; (operator input) [Alakarhu: col. 20, line 46-47]; (can be turned on and off by the operator as desired) [Alakarhu: col. 20, line 54]). 
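For illustration only, the behavior recited in claims 6-7 (moving the displayed mark by a movement amount calculated from the amount of a user operation) can be sketched as follows. The function, the linear gain, and all names are hypothetical and appear in no cited reference.

```python
def move_mark(mark_pos, operation_amount, gain=1.0):
    """Translate the mark according to a user operation.

    The movement amount is calculated from the operation amount
    (here, simply scaled by a gain factor), as in the claim 7
    limitation that the movement amount of the mark corresponds
    to the operation amount of the user's operation.
    """
    return (mark_pos[0] + gain * operation_amount[0],
            mark_pos[1] + gain * operation_amount[1])
```

A proportional mapping is only one possible realization; the claims do not limit how the movement amount is derived from the operation amount.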
In the same field of endeavor, Oshima further discloses the claim limitations as follows: receive an operation of a user ((a user interface unit for displaying a menu screen and for setting and inputting various parameters, in cooperation with the operation unit) [Oshima: col. 11, line 11-14]; (the input of a character string to an input field of the window through the operation panel) [Oshima: col. 14, line 32-33]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu with Oshima to program the system to implement Oshima's method. Therefore, the combination of Alakarhu with Oshima will enable the system to improve the accuracy of the first target information and to reduce the time required for the initial operation for detecting the tracking target [Oshima: col. 3, line 3-5]. In the same field of endeavor, Satoh further discloses the claim limitations as follows: receive an operation of a user (For example, if it is selected to end the photography based on input (user operation) to an operation unit) [Satoh: para. 0080]; the mark is moved ((An example of the guide indicator generated by the correction vector C which is input to the display control unit 180 is shown on the right side of FIG. 3. The guide indicator is composed of, for example, a circular outer edge whose radius is the predetermined value "R" of the shake correction condition, and a zoom position indicator indicated by a black square. On the right side of FIG. 3, for convenience, the center position of the outer edge of the guide indicator is indicated by a gray circle mark) [Satoh: para. 0073, Figs. 3-6]; (Furthermore, the display color of the zoom position indicator may be changed as the zoom position indicator moves away from the center of the outer edge of the guide indicator) [Satoh: para. 0180, Figs. 7, 11-12]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu and Oshima with Satoh to program the system to implement Satoh's method. Therefore, the combination of Alakarhu and Oshima with Satoh will enable the system to improve image resolution [Satoh: para. 0028]. Regarding claim 7, Alakarhu meets the claim limitations as set forth in claim 6. Alakarhu further meets the claim limitations as follows: when executing the stored instructions, the at least one processor further cooperates with the at least one memory to (One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col. 2, line 30-34]: calculate a movement amount of the mark according to an operation amount of the operation of the user (the micro-controller 404 may receive raw input and calculate the speed delta value and distance value itself) [Alakarhu: col.
37, line 5-6], wherein the mark is moved ((wherein the first image shows the license plate at a first position) [Alakarhu: col. 41, line 38-49]; (motion include the way objects move across time to derive deeper meaning from the scene) [Alakarhu: col. 18, line 63-65]; (Next, a processor in the LPR system may use one or more object tracking libraries to detect what appears to be the same license plate in one or more subsequently captured images. In one example, the LPR system may use a boundary of (e.g., bounding box around) the license plate to track its position across frames. FIG. 10 illustrates how an edge detection module in the object tracking library may detect and track a parallelogram boundary shape 1004 around a license plate. In step 1206 of FIG. 12, the LPR system detects one or more license plates in the stream of frames with long-exposure and/or short-exposure) [Alakarhu: col. 22, line 1-11; Figs. 6B-6C] – Note: The bounding box in Figs. 6B and 6C can be considered as a mark) based on the movement amount (the micro-controller 404 may receive raw input and calculate the speed delta value and distance value itself) [Alakarhu: col. 37, line 5-6]. In the same field of endeavor, Oshima further discloses the claim limitations as follows: operation of a user ((a user interface unit for displaying a menu screen and for setting and inputting various parameters, in cooperation with the operation unit) [Oshima: col. 11, line 11-14]; (the input of a character string to an input field of the window through the operation panel) [Oshima: col. 14, line 32-33]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu with Oshima to program the system to implement Oshima's method.
Therefore, the combination of Alakarhu with Oshima will enable the system to improve the accuracy of the first target information and to reduce the time required for the initial operation for detecting the tracking target [Oshima: col. 3, line 3-5]. In the same field of endeavor, Satoh further discloses the claim limitations as follows: receive an operation of a user (For example, if it is selected to end the photography based on input (user operation) to an operation unit) [Satoh: para. 0080]; the mark is moved ((An example of the guide indicator generated by the correction vector C which is input to the display control unit 180 is shown on the right side of FIG. 3. The guide indicator is composed of, for example, a circular outer edge whose radius is the predetermined value "R" of the shake correction condition, and a zoom position indicator indicated by a black square. On the right side of FIG. 3, for convenience, the center position of the outer edge of the guide indicator is indicated by a gray circle mark) [Satoh: para. 0073, Figs. 3-6]; (Furthermore, the display color of the zoom position indicator may be changed as the zoom position indicator moves away from the center of the outer edge of the guide indicator) [Satoh: para. 0180, Figs. 7, 11-12]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu and Oshima with Satoh to program the system to implement Satoh's method. Therefore, the combination of Alakarhu and Oshima with Satoh will enable the system to improve image resolution [Satoh: para. 0028]. Claims 5 and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Alakarhu (US Patent 11,532,170 B2), ("Alakarhu"), in view of Satoh et al. (US Patent Application Publication 2024/0089589 A1), ("Satoh"), in view of Kurosawa et al. (US Patent 6,822,676 B1), ("Kurosawa"). Regarding claim 5, Alakarhu meets the claim limitations as set forth in claim 1. Alakarhu further meets the claim limitations as follows: wherein the mark is superimposed on the captured image (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19]. Alakarhu and Satoh do not explicitly disclose the following claim limitations (Emphasis added): superimposed on the captured target. However, in the same field of endeavor, Kurosawa further discloses the deficient claim limitations as follows: superimposed (displaying an image picked up by the camera; and displaying a frame showing the display area of an image controlled by the input zoom control command for the camera by superimposing the frame to a display image) [Kurosawa: col. 3, line 2-4]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu and Satoh with Kurosawa to program the system to implement Kurosawa's method.
Therefore, the combination of Alakarhu and Satoh with Kurosawa will enable the system to provide a camera control system realizing proper remote camera control [Kurosawa: col. 1, line 35-37]. Regarding claim 8, Alakarhu meets the claim limitations as set forth in claim 1. Alakarhu further meets the claim limitations as follows: when executing the stored instructions, the at least one processor further cooperates with the at least one memory to (One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col. 2, line 30-34]: calculate (One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col. 2, line 30-34] a communication time between the external apparatus and the information processing apparatus (the micro-controller 404 may receive raw input and calculate the speed delta value and distance value itself) [Alakarhu: col. 37, line 5-6], and determine (One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions) [Alakarhu: col. 2, line 30-34], according to the communication time, whether to enable control to change a display form of the mark ((the LPR system may further comprise a location tracking device coupled to the camera device. The processor of the LPR system may be programmed to stamp an image with a location of the camera device at the time when the image is captured by the camera device. In addition, the LPR system may also comprise a clock mechanism.
The processor of the LPR system may be programmed to timestamp an image upon capture by the camera device. At least one benefit of the aforementioned metadata associated with the captured and processed image is that evidentiary requirements in a legal proceeding or other investigation may be satisfied. Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format. In addition, the tracking of license plates can also produce other useful information, e.g., how fast the cars are moving) [Alakarhu: col. 34, line 5-21]; (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19]; (FIG. 6B is a long-exposure image and short-exposure image of a field of view of a street with following traffic and oncoming traffic in accordance with one or more example embodiments. FIG. 6C is a subsequent long-exposure image and short exposure image of a field of view of a street with following traffic and oncoming traffic in accordance with one or more example embodiments. FIG. 7 shows photographs of license plates as originally captured and after aligned and filtered in accordance with one or more example embodiments. FIG. 8 is a comparison of a photograph of an unfiltered image and a filtered image in accordance with one or more example embodiments. FIG. 9A, FIG. 9B, and FIG. 9C show photographs of license plates as originally captured and after aligned and filtered across multiple frames in accordance with one or more example embodiments) [Alakarhu: col. 11, line 25-42; Figs. 6B-10] – Note: Figs. 6B-10 display at least two different forms of the vehicle or its license plate). 
In the same field of endeavor, Satoh further discloses the deficient claim limitations as follows: whether to enable control to change a display form of the mark (wherein the plurality of modes comprise at least a first mode in which the image blur correction is performed by a first process, and a second mode in which the image blur correction is performed by a second process different from the first process or in which the image blur correction is not performed, and the controller displays the indicator in a first form when in the first mode, and displays the indicator in a second form different from the first form when in the second mode) [Satoh: claim 10]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu with Satoh to program the system to implement Satoh's method. Therefore, the combination of Alakarhu with Satoh will enable the system to improve image resolution [Satoh: para. 0028]. Alakarhu and Satoh do not explicitly disclose the following claim limitations (Emphasis added): calculate a communication time. However, in the same field of endeavor, Kurosawa further discloses the deficient claim limitations as follows: calculate a communication time ((When a predetermined time elapses after the zoom control command is input in step S205, that is, the time equal to the time since the zoom control command was transmitted to the camera server 101 elapses, the display of the image in the video display area 136 is changed from the image undergoing the electronic zoom processing to the latest image transmitted from the camera server 101 in step S204. The predetermined time is assumed as a time longer enough than the time until an image zoom-controlled at the camera server side is received by the client 102 side. Moreover, the predetermined time is clocked by the fact that the time is counted by the CPU 210) [Kurosawa: col.
9, line 50-61]; (the above conventional camera control system is limited in the communication rate of a network. Therefore, a time difference occurs from the point of time of performing camera control by the time when an image formed by undergoing the camera control) [Kurosawa: col. 1, line 24-28]; determine, according to the communication time, whether to enable control to change a display form of the mark (As described above, the image displayed in the video display area 136 is electronic-zoom-processed and displayed until a zoom-controlled image is received from the camera server 101 after a zoom control command is input from the camera 103. Therefore, because it seems that zoom control can be executed as if data is not delayed due to the communication rate of a network, it is possible to provide a camera control system having a high manipulability) [Kurosawa: col. 9, line 62 – col. 10, line 2]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu and Satoh with Kurosawa to program the system to implement Kurosawa's method. Therefore, the combination of Alakarhu and Satoh with Kurosawa will enable the system to provide a camera control system realizing proper remote camera control [Kurosawa: col. 1, line 35-37]. Regarding claim 9, Alakarhu meets the claim limitations as set forth in claim 8. Alakarhu further meets the claim limitations as follows: wherein, in a case where the communication time ((In various embodiments, processor 214 may include data buses, output ports, input ports, timers) [Alakarhu: col. 15, line 60-61]; (response time/latency of the LPR system) [Alakarhu: col. 28, line 44]) is greater than a predetermined time (longer exposure time) [Alakarhu: col. 21, line 48] and the distance is greater than a predetermined threshold (the approximate distance is above a threshold distance) [Alakarhu: col.
10, line 12-13], the mark is displayed in the first form ((The method may also include outputting a low value for the illumination command when the relative speed is below a threshold speed and the approximate distance is below a threshold distance) [Alakarhu: col. 10, line 20-23]; (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19]), wherein, in a case (a controller communicatively coupled to the processor and camera device, to adjust the long-exposure setting of the camera device by a first amount and to adjust the short exposure setting of the camera device by a second amount, where the long-exposure setting and short-exposure setting include at least one of a shutter speed setting, ISO setting, zoom setting, and exposure setting of the camera device; capture, using the image sensor, a second long-exposure image with the adjusted long-exposure setting and a second short-exposure image with the adjusted short-exposure setting; detect the license plate in the second long-exposure image and the second short-exposure image) [Alakarhu: col. 5, line 55-67]) where the communication time is greater than the predetermined time (longer exposure time) [Alakarhu: col. 21, line 48] and the distance is less than or equal to the threshold ((the target vehicle is at a short distance) [Alakarhu: col. 14, line 4-5]; (the approximate distance is below a threshold distance.) [Alakarhu: col. 10, line 22-23]), the mark is displayed in the second form ((The method may also include outputting a low value for the illumination command when the relative speed is below a threshold speed and the approximate distance is below a threshold distance) [Alakarhu: col. 
10, line 20-23]; (Moreover, for report generation purposes, the metadata, e.g., location, date, time, and other information, may be collected into a central data store, indexed, tagged, and displayed in a visually useful format) [Alakarhu: col. 34, line 16-19]), and wherein, in a case (a controller communicatively coupled to the processor and camera device, to adjust the long-exposure setting of the camera device by a first amount and to adjust the short exposure setting of the camera device by a second amount, where the long-exposure setting and short-exposure setting include at least one of a shutter speed setting, ISO setting, zoom setting, and exposure setting of the camera device; capture, using the image sensor, a second long-exposure image with the adjusted long-exposure setting and a second short-exposure image with the adjusted short-exposure setting; detect the license plate in the second long-exposure image and the second short-exposure image) [Alakarhu: col. 5, line 55-67] where the communication time is less than or equal to the predetermined time (shorter exposure time) [Alakarhu: col. 14, line 54], the mark is displayed in a determined form regardless of the distance (Next, a processor in the LPR system may use one or more object tracking libraries to detect what appears to be the same license plate in one or more subsequently captured images. In one example, the LPR system may use a boundary of (e.g., bounding box around) the license plate to track its position across frames. FIG. 10 illustrates how an edge detection module in the object tracking library may detect and track a parallelogram boundary shape 1004 around a license plate. In step 1206 of FIG. 12, the LPR system detects one or more license plates in the stream of frames with long-exposure and/or short-exposure) [Alakarhu: col. 22, line 1-11; Figs. 6B-6C] – Note: Alakarhu discloses that the marks, such as a tag, a bounding box are displayed without considering the distance).
Alakarhu does not explicitly disclose the following claim limitations (Emphasis added): communication time; the mark is displayed in a determined form regardless of the distance. However, in the same field of endeavor, Satoh further discloses the claim limitations and the deficient claim limitations as follows: the mark is displayed in the first form (the controller displays the indicator in a first form when in the first mode, and displays the indicator in a second form different from the first form when in the second mode) [Satoh: claim 10]; the mark is displayed in the second form (the controller displays the indicator in a first form when in the first mode, and displays the indicator in a second form different from the first form when in the second mode) [Satoh: claim 10]; the mark is displayed in a determined form regardless of the distance ((performs various displays based on display signals output from the display control unit) [Satoh: para. 0125]; (the display control unit 180 may not execute the step of S121, and output the zoom image directly as the through image in the step of S123.) [Satoh: para. 0153]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu with Satoh to program the system to implement Satoh's method. Therefore, the combination of Alakarhu with Satoh will enable the system to improve image resolution [Satoh: para. 0028]. Alakarhu and Satoh do not explicitly disclose the following claim limitations (Emphasis added): communication time.
However, in the same field of endeavor, Kurosawa further discloses the deficient claim limitations as follows: communication time ((When a predetermined time elapses after the zoom control command is input in step S205, that is, the time equal to the time since the zoom control command was transmitted to the camera server 101 elapses, the display of the image in the video display area 136 is changed from the image undergoing the electronic zoom processing to the latest image transmitted from the camera server 101 in step S204. The predetermined time is assumed as a time longer enough than the time until an image zoom-controlled at the camera server side is received by the client 102 side. Moreover, the predetermined time is clocked by the fact that the time is counted by the CPU 210) [Kurosawa: col. 9, line 50-61]; (the above conventional camera control system is limited in the communication rate of a network. Therefore, a time difference occurs from the point of time of performing camera control by the time when an image formed by undergoing the camera control) [Kurosawa: col. 1, line 24-28]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Alakarhu and Satoh with Kurosawa to program the system to implement Kurosawa's method. Therefore, the combination of Alakarhu and Satoh with Kurosawa will enable the system to provide a camera control system realizing proper remote camera control [Kurosawa: col. 1, line 35-37]. Reference Notice Additional prior art, included in the Notice of References Cited, made of record and not relied upon, is considered pertinent to applicant's disclosure. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip Dang whose telephone number is (408) 918-7529. The examiner can normally be reached on Monday-Thursday between 8:30 am - 5:00 pm (PST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath Perungavoor can be reached on 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Philip P. Dang/Primary Examiner, Art Unit 2488

Prosecution Timeline

Aug 27, 2024
Application Filed
Oct 23, 2025
Non-Final Rejection — §103, §112
Jan 16, 2026
Response Filed
Apr 02, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602837
ON SUB-DIVISION OF MESH SEQUENCES
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12593116
IMAGING MEASUREMENT DEVICE USING GAS ABSORPTION IN THE MID-INFRARED BAND AND OPERATING METHOD OF IMAGING MEASUREMENT DEVICE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12581069
METHOD FOR ENCODING/DECODING VIDEO SIGNAL, AND APPARATUS THEREFOR
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12581106
IMAGE DECODING METHOD AND DEVICE THEREFOR
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12574557
SCALABLE VIDEO CODING USING BASE-LAYER HINTS FOR ENHANCEMENT LAYER MOTION PARAMETERS
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+33.2%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 470 resolved cases by this examiner. Grant probability derived from career allow rate.
