Prosecution Insights
Last updated: April 19, 2026
Application No. 18/583,479

APPARATUS AND METHOD FOR CONTROLLING POSITION OF MOBILE OBJECT INCLUDING LOW VISIBILITY VIDEO IMPROVEMENT APPARATUS

Non-Final OA: §101, §103, §112
Filed: Feb 21, 2024
Examiner: PHAM, NHUT HUY
Art Unit: 2674
Tech Center: 2600 (Communications)
Assignee: Ltechkorea Inc.
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Predicted OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (42 granted / 53 resolved; +17.2% vs Tech Center average)
Interview Lift: +26.8% across resolved cases with interview
Average Prosecution: 3y 0m (31 applications currently pending)
Total Applications: 84 (across all art units)

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)
Based on career data from 53 resolved cases; Tech Center averages are estimates.

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The United States Patent & Trademark Office appreciates the application submitted by the inventor/assignee. The United States Patent & Trademark Office has reviewed the application and makes the following comments below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 02/21/2024 has been considered and is attached.

Priority

This application claims the benefit of foreign priority under 35 U.S.C. 119(a)-(d) of KR10-2023-0116650, filed in Korea on 09/04/2023.

Claim Status

Claim 16 is rejected under 35 U.S.C. § 101.
Claims 5-6 are rejected under 35 U.S.C. § 112(b).
Claims 1, 4-5, 7-8, 10-11, 13 and 15-16 are rejected under 35 U.S.C. § 103:
Claims 1, 4, 7, 10 and 16 are rejected over Miller in view of McDonnell in view of Kawata.
Claims 5, 11, 13 and 15 are rejected over Miller in view of McDonnell in view of Kawata in view of Suryawanshi.
Claim 8 is rejected over Miller in view of McDonnell in view of Kawata in view of Hong.
Claims 2-3, 6, 9, 12 and 14 are objected to.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

“video input unit” in claim 1, line 2;
“video correction unit” in claim 1, line 3;
“low visibility determination unit” in claim 1, line 5;
“position control unit” in claim 1, line 8; and
“video comparison unit” in claim 5, line 1.

The corresponding structure for these nonce terms is “a processor, an application-specific integrated circuit (ASIC), other chipsets, a logic circuit, a register, a communication modem, a data processing device, etc., known in the art to which the present invention pertains to perform calculations and various control logics … the program module may be stored in the memory device and executed by the processor” in SPECIFICATION, paragraph [0112].

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because signals per se do not fall into one of the four statutory categories.

Claim 16 recites, inter alia, a “computer-readable recording medium”. After close inspection, the Examiner respectfully notes that the disclosure, as a whole, does not definitively describe what can and cannot be considered the “computer-readable recording medium”. Applicant’s specification discusses the “computer-readable recording medium” in paragraphs [0150]-[0151]. Those paragraphs state that the storage medium is “not a medium that stores videos therein for a while, such as a register, a cache, a memory, or the like” but then list “RAM” as an example of the storage medium.
This is contradictory, since RAM is memory that stores data for a while. There is no exclusion of transitory signals: the specification never states that “non-transitory” excludes “transitory propagating signals,” “carrier waves,” or “signals per se.” Paragraph [0150] discloses network distribution (the “media may be distributed in a computer system connected by a network” and the “computer-readable code may be stored in a distributed manner”), language that could be read to encompass transmission signals. “Semi-permanently” is ambiguous and does not clearly exclude all transitory media under current USPTO guidance. Thus, the “computer-readable recording medium” could be any form, including a signal.

An Examiner must give claims their broadest reasonable interpretation consistent with the specification during examination. The broadest reasonable interpretation of a claim drawn to a computer program product (also called a computer readable medium, machine readable medium, and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter.

Therefore, given the non-definitive disclosure and the broadest reasonable interpretation, the machine-readable storage medium of the claim may include transitory propagating signals. As a result, the claim pertains to non-statutory subject matter. However, the Examiner respectfully submits that a claim drawn to such a computer program product or computer readable storage medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments, and thereby avoid a rejection under 35 U.S.C. § 101, by adding the limitation “non-transitory” to the claim. Such an amendment would typically not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se. For additional information, please see the Patents’ Official Gazette notice published February 23, 2010 (1351 OG 212).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The Examiner strongly suggests that appropriate corrections be made to clarify the claim scope.

With respect to Claim 5, the claim recites “the two videos” in line 3, which lacks clear antecedent basis and renders the claim indefinite. The Examiner interprets it as “the captured video and the corrected video”. Claim 6 is also rejected for the same reason due to its dependence on rejected claim 5.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 7, 10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Miller et al. (US-20210350162-A1, hereinafter Miller) in view of McDonnell (US-20020190162-A1, hereinafter McDonnell) in view of Kawata et al. (US-20200290503-A1, hereinafter Kawata).

CLAIM 1

In regards to Claim 1, Miller teaches an apparatus for controlling a position of a mobile object (Miller, Abstract: “an unmanned aerial vehicle (UAV)”; ¶ [0038]: “observation system 100 may generally operate autonomously to ensure safe operation of the one or more UAVs 106”. Miller teaches a system that controls a drone autonomously), comprising: a video input unit (Miller, ¶ [0031]: “one or more observer devices 102 … ”) that receives a captured video from capturing equipment (Miller, ¶ [0034]: “each observer device 102 may include a plurality of fixed cameras … a movable camera may be included”; ¶ [0063]: “the observer devices 102 may send video feeds from their cameras to the user computing device”).

Miller does not explicitly disclose keeping the aircraft hidden in a cloud or other atmospheric phenomenon (fog, haze, smoke, etc.), or a low visibility determination unit that determines whether a mobile object is present in a low visibility section. McDonnell is in the same field of art of unmanned aerial vehicles (UAVs). Further, McDonnell teaches keeping aircraft hidden in a cloud or other atmospheric phenomenon (fog, haze, smoke, etc.) (McDonnell, ¶ [0006]: “The present invention provides improvements in the survivability of aircraft equipped with sensors/targeting equipment by operating the aircraft in a manner to keep it hidden behind cloud cover or other atmospheric phenomenon that block most sensors”; Claim 9: “aircraft is flown high enough within or above said clouds or other obscurants to stay hidden”); and a low visibility determination unit that determines whether a mobile object is present in a low visibility section (McDonnell, ¶ [0028]: “a fixed forward facing camera 49 is shown that could be used to detect the bottom of the cloud layer so as to maintain the proper altitude just below the clouds or detect obstacles or the ground level to maintain ground clearance”; see FIG. 2. McDonnell teaches using an image sensor to detect a cloudy environment).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Miller by incorporating the system for keeping aircraft hidden in a cloud or other atmospheric phenomenon that is taught by McDonnell, to make a UAV for aerial reconnaissance. One of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need to improve the survivability of UAVs used in aerial reconnaissance (McDonnell, ¶ [0006]: “The present invention provides improvements in the survivability of aircraft equipped with sensors/targeting equipment”).
The combination of Miller and McDonnell does not explicitly disclose a video correction unit that receives the captured video from the video input unit to calculate a transmission rate and uses the transmission rate to correct the captured video, or a low visibility determination unit that receives the transmission rate from the video correction unit, determines whether a mobile object is present in a low visibility section using a lower limit of the transmission rate, and outputs a result value.

Kawata is in the same field of art of detecting cloud/haze/fog from images. Further, Kawata teaches a video correction unit (Kawata, ¶ [0088]: “An image processing apparatus 50”; see FIG. 7. Kawata teaches an apparatus that can perform haze detection and correction) that receives the captured video from the video input unit to calculate a transmission rate (Kawata, ¶ [0089]: “The transmittance estimation unit 51 … estimates the transmittance for each pixel from the captured image”), and uses the transmission rate to correct the captured video (Kawata, ¶ [0089]: “The transmittance detection unit 52 uses the transmittance estimated … and outputs the transmittance map … The haze removal unit 53 removes the haze from the captured image acquired by the imaging apparatus on the basis of the transmittance map”); a low visibility determination unit that receives the transmission rate from the video correction unit (Kawata, ¶ [0090]: “a first operation of the haze removal unit 53, the haze is removed by adjusting contrast in accordance with a reciprocal of the transmittance ta(x) of each pixel indicated by the transmittance map”), determines whether a mobile object is present in a low visibility section using a lower limit of the transmission rate (Kawata, ¶ [0092]: “a lower limit value t0 may be set in advance, and if the transmittance ta(x) is smaller than the lower limit value t0, the haze removal may be performed, using the lower limit value t0”. Kawata teaches that the haze removal unit detects haze and performs haze removal if the transmittance is smaller than a lower limit), and outputs a result value (Kawata, ¶ [0086]: “… the detected transmittance may be output to an outside. For example, it is applied to an unmanned flying body such as a drone, and imaging is performed from the sky to detect the transmittance during the imaging. Moreover, in a case where the transmittance falls below a threshold value, the unmanned flying body notifies the control side and the like of the detected transmittance”).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Miller and McDonnell by incorporating the system to detect and correct haze that is taught by Kawata, to make a drone-based system that can perform automatic haze detection and correction. One of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need for a method to detect haze automatically in real time (Kawata, ¶ [0007]: “A first aspect of this technology is an information processing apparatus including a transmittance detection unit that detects a transmittance of haze at the time of imaging of a captured image”).

The combination of Miller, McDonnell and Kawata then teaches a position control unit that controls a position of the mobile object (Miller, ¶ [0065]: “the UAV may include a vehicle control program 262 that may be executed onboard the UAV such as for enabling the UAV to perform one or more tasks autonomously”; ¶ [0033]: “the UAV may fly to a designated location and may maintain a position at the designated location for performing observer functions. As another example, the UAV may change its position”) so that the mobile object is present in the low visibility section (McDonnell, ¶ [0028]: “a fixed forward facing camera 49 is shown that could be used to detect the bottom of the cloud layer so as to maintain the proper altitude”; Claim 9: “aircraft is flown high enough within or above said clouds or other obscurants to stay hidden”) when a result value of determining that the mobile object is not positioned in the low visibility section is input from the low visibility determination unit (Kawata, ¶ [0086], quoted above). (The Examiner notes that, when combined, Miller in view of McDonnell in view of Kawata discloses a system to control a drone/UAV to hide in a detected cloud/haze/fog.)

Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

CLAIM 4

In regards to Claim 4, the combination of Miller, McDonnell and Kawata teaches the apparatus of Claim 1. In addition, the combination teaches wherein the video correction unit divides the captured video into at least one block area, selects the at least one block area from divided videos (Kawata, ¶ [0080]: “The presence or absence of the haze is determined with respect to an entire or partial region of the transmittance map. The partial region may be each divided region when the entire transmittance map is divided into a plurality of regions, or may be one or a plurality of regions set with a predetermined region size at a preset position in the transmittance map”), calculates the transmission rate from the selected area (Kawata, ¶ [0081]: “The body system control unit 12020 compares an average value of the entire or partial region of the transmittance map with a threshold value, and determines whether there is a partial region where the average value of the transmittance is lower than the predetermined threshold value or the transmittance is lower than a predetermined threshold value.”), and outputs the calculated transmission rate to the low visibility determination unit (Kawata, ¶ [0089]: “outputs the transmittance map indicating the transmittance for each pixel to the haze removal unit 53”).

CLAIM 7

In regards to Claim 7, Miller teaches a method of controlling a position of a mobile object (Miller, Abstract: “an unmanned aerial vehicle (UAV)”; ¶ [0038]: “observation system 100 may generally operate autonomously to ensure safe operation of the one or more UAVs 106”. Miller teaches a system that controls a drone autonomously), comprising: operation (a) of receiving, by a processor (Miller, ¶ [0039]: “Each observer device 102 may include at least one processor”), a captured video from capturing equipment (Miller, ¶ [0034]: “each observer device 102 may include a plurality of fixed cameras … a movable camera may be included”; ¶ [0063]: “the observer devices 102 may send video feeds from their cameras to the user computing device”).

Miller does not explicitly disclose keeping the aircraft hidden in a cloud or other atmospheric phenomenon (fog, haze, smoke, etc.). McDonnell is in the same field of art of unmanned aerial vehicles (UAVs).
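The transmittance handling relied on for Claims 1 and 4 above (Kawata, ¶¶ [0089]-[0092]) can be illustrated in a few lines. This is a minimal sketch only, assuming the standard atmospheric scattering model I = J·t + A·(1 − t); the function name, the airlight input, and the default t0 value are hypothetical and are not taken from Kawata or from Applicant's disclosure.

```python
import numpy as np

def dehaze(hazy, transmittance, airlight, t0=0.1):
    """Invert the scattering model I = J * t + A * (1 - t) per pixel.

    The transmittance is clamped from below by t0 so that nearly opaque
    pixels are not over-amplified; this mirrors the lower limit value t0
    described in Kawata's paragraph [0092].
    """
    t = np.maximum(transmittance, t0)  # lower-limit clamp
    return (hazy - airlight) / t + airlight
```

With a synthetic scene J, a known transmittance map above t0, and a known airlight A, the sketch recovers J exactly; below t0 the clamp trades exact recovery for numerical stability.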
Further, McDonnell teaches keeping aircraft hidden in a cloud or other atmospheric phenomenon (fog, haze, smoke, etc.) (McDonnell, ¶ [0006]: “The present invention provides improvements in the survivability of aircraft equipped with sensors/targeting equipment by operating the aircraft in a manner to keep it hidden behind cloud cover or other atmospheric phenomenon that block most sensors”; Claim 9: “aircraft is flown high enough within or above said clouds or other obscurants to stay hidden”).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Miller by incorporating the system for keeping aircraft hidden in a cloud or other atmospheric phenomenon that is taught by McDonnell, to make a UAV for aerial reconnaissance. One of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need to improve the survivability of UAVs used in aerial reconnaissance (McDonnell, ¶ [0006]: “The present invention provides improvements in the survivability of aircraft equipped with sensors/targeting equipment”).

The combination of Miller and McDonnell does not explicitly disclose operation (b) of calculating, by the processor, a transmission rate of the captured video, or operation (c) of determining, by the processor, whether a mobile object is present in a low visibility section using a lower limit of the transmission rate calculated in operation (b).

Kawata is in the same field of art of detecting cloud/haze/fog from images. Further, Kawata teaches operation (b) of calculating, by the processor, a transmission rate of the captured video (Kawata, ¶ [0089]: “The transmittance estimation unit 51 … estimates the transmittance for each pixel from the captured image”); and operation (c) of determining, by the processor, whether a mobile object is present in a low visibility section using a lower limit of the transmission rate calculated in operation (b) (Kawata, ¶ [0092]: “a lower limit value t0 may be set in advance, and if the transmittance ta(x) is smaller than the lower limit value t0, the haze removal may be performed, using the lower limit value t0”; ¶ [0086]: “… the detected transmittance may be output to an outside. For example, it is applied to an unmanned flying body such as a drone, and imaging is performed from the sky to detect the transmittance during the imaging. Moreover, in a case where the transmittance falls below a threshold value, the unmanned flying body notifies the control side and the like of the detected transmittance”. Kawata teaches that the haze removal unit detects haze and performs haze removal if the transmittance is smaller than a lower limit).

The combination of Miller, McDonnell and Kawata then teaches operation (d) of, when it is determined that the mobile object is not positioned in the low visibility section (Kawata, ¶ [0092], quoted above; ¶ [0081]: “for example, puts the fog lamps into the lighting state in a case where the number of pixels whose transmittance is lower than the predetermined threshold is larger than the predetermined threshold value in the entire transmittance map, and puts the fog lamps into the non-lighting state in a case where the number of the relevant pixels is not lower than the predetermined threshold value”. Kawata teaches detecting haze based on the transmittance value), controlling, by the processor, a position of the mobile object so that the mobile object is present in the low visibility section (Miller, ¶ [0065]: “the UAV may include a vehicle control program 262 that may be executed onboard the UAV such as for enabling the UAV to perform one or more tasks autonomously”; ¶ [0033]: “the UAV may fly to a designated location and may maintain a position at the designated location for performing observer functions. As another example, the UAV may change its position”) (McDonnell, ¶ [0028]: “a fixed forward facing camera 49 is shown that could be used to detect the bottom of the cloud layer so as to maintain the proper altitude”; Claim 9: “aircraft is flown high enough within or above said clouds or other obscurants to stay hidden”). (The Examiner notes that, when combined, Miller in view of McDonnell in view of Kawata discloses a system to control a drone/UAV to hide in a detected cloud/haze/fog.)

CLAIM 10

In regards to Claim 10, the combination of Miller, McDonnell and Kawata teaches the method of Claim 7. In addition, the combination teaches that, in operation (b), the processor divides the captured video into at least one block area and selects the at least one block area from divided videos (Kawata, ¶ [0080]: “The presence or absence of the haze is determined with respect to an entire or partial region of the transmittance map. The partial region may be each divided region when the entire transmittance map is divided into a plurality of regions, or may be one or a plurality of regions set with a predetermined region size at a preset position in the transmittance map”), and calculates the transmission rate (Kawata, ¶ [0089]: “outputs the transmittance map indicating the transmittance for each pixel to the haze removal unit 53”).

CLAIM 16

In regards to Claim 16, the combination of Miller, McDonnell and Kawata teaches the method of Claim 7.
In addition, the combination of Miller, McDonnell and Kawata teaches a computer program that is written to perform each operation of the method of controlling a position of a mobile object according to claim 7 (Miller, ¶ [0065]: “the UAV 106 may include a vehicle control program 262 that may be executed onboard the UAV”) and recorded on a computer-readable recording medium. (Miller, ¶ [0112]: “The computer-readable media may include volatile and nonvolatile memory and/or removable and non-removable media implemented in any type of technology for storage of information, such as computer-readable instructions”) (Kawata, ¶ [0097]: “the program can be recorded in advance on a hard disk, a solid state drive (SSD), or a read only memory (ROM) as a recording medium”) Claim(s) 5, 11, 13 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Miller, in view of McDonnell in view of Kawata, and further in view of Suryawanshi et al. (Suryawanshi, Sunayana, and Shruti Danve. "Fog Correction Using Exponential Contrast Restoration." IEEE, published 2018, hereinafter Suryawanshi). CLAIM 5 In regards to Claim 5, the combination of Miller, McDonnell and Kawata teaches the apparatus of Claim 1. In addition, the combination of Miller, McDonnell and Kawata teaches dividing the video into at least one block area, selecting the at least one block area from the two divided video. (Kawata, ¶ [0080]: “The presence or absence of the haze is determined with respect to an entire or partial region of the transmittance map. 
The partial region may be each divided region when the entire transmittance map is divided into a plurality of regions, or may be one or a plurality of regions set with a predetermined region size at a preset position in the transmittance map”) The combination of Miller, McDonnell and Kawata does not explicitly disclose a video comparison unit that receives the captured video from the video input unit and receives the corrected video from the video correction unit to calculates a brightness comparison value, which accumulates a difference in brightness between the two videos, by expression Result = ∑ k - 1 n A k - B k   (n: natural number, Ak: kth area or pixel of the captured video, Bk: kth area or pixel of the corrected video). Suryawanshi is in the same field of art of correction of foggy images. Further, Suryawanshi teaches a video comparison unit (Suryawanshi , page 5, section B. Experimentation: “The work has been carried out on Intel core i3, 2.10 GHz processor using MATLAB version R2014a”) that receives the captured video from the video input unit and receives the corrected video (Suryawanshi , page 5, section B. Experimentation: “Experimentation is also carried out to find the sum of absolute difference in the input and enhanced images”) from the video correction unit to calculates a brightness comparison value, which accumulates a difference in brightness between the two videos, by expression Result = ∑ k - 1 n A k - B k   (n: natural number, Ak: kth area or pixel of the captured video, Bk: kth area or pixel of the corrected video). (Suryawanshi, page 2-3, section C. Sum of Absolute Difference (SAD): “Sum of Absolute difference is the most commonly used metric to determine the best match. It is calculated by taking the absolute difference between each pixel in the original image and the corresponding pixel in the image being used for comparison.”, see reconstructed text below. 
Suryawanshi uses SAD to calculate the pixel-intensity difference between the input image and the output image) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Miller, McDonnell and Kawata by incorporating the SAD method of comparing image data that is taught by Suryawanshi, to make a system that not only performs fog correction but also estimates the effectiveness of the correction; thus, one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes there is a need for a method to quantify the effectiveness of the fog correction method (Suryawanshi, section V Conclusion). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

CLAIM 11

In regards to Claim 11, Miller teaches a method of controlling a position of a mobile object (Miller, Abstract: “an unmanned aerial vehicle (UAV)”, ¶ [0038]: “observation system 100 may generally operate autonomously to ensure safe operation of the one or more UAVs 106”. Miller teaches a system that controls a drone autonomously), comprising: operation (a) of receiving, by a processor (Miller, ¶ [0039]: “Each observer device 102 may include at least one processor”), a captured video from capturing equipment (Miller, ¶ [0034]: “each observer device 102 may include a plurality of fixed cameras … a movable camera may be included”, ¶ [0063]: “the observer devices 102 may send video feeds from their cameras to the user computing device”); Miller does not explicitly disclose keeping the aircraft hidden in clouds or other atmospheric phenomena (fog, haze, smoke, etc.); McDonnell is in the same field of art of unmanned aerial vehicles (UAVs).
Further, McDonnell teaches keeping the aircraft hidden in clouds or other atmospheric phenomena (fog, haze, smoke, etc.) (McDonnell, ¶ [0006]: “The present invention provides improvements in the survivability of aircraft equipped with sensors/targeting equipment by operating the aircraft in a manner to keep it hidden behind cloud cover or other atmospheric phenomenon that block most sensors”, Claim 9: “aircraft is flown high enough within or above said clouds or other obscurants to stay hidden”); Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Miller by incorporating the system to keep an aircraft hidden in clouds or other atmospheric phenomena that is taught by McDonnell, to make a UAV for aerial reconnaissance; thus, one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes there is a need to improve the survivability of UAVs used in aerial reconnaissance (McDonnell, ¶ [0006]: “The present invention provides improvements in the survivability of aircraft equipped with sensors/targeting equipment”). The combination of Miller and McDonnell does not explicitly disclose operation (b) of calculating, by the processor, a transmission rate of the captured video; operation (c′−1) of correcting, by the processor, the captured video using the transmission rate calculated in operation (b); Kawata is in the same field of art of detecting cloud/haze/fog from images.
Further, Kawata teaches operation (b) of calculating, by the processor, a transmission rate of the captured video (Kawata, ¶ [0089]: “The transmittance estimation unit 51 … estimates the transmittance for each pixel from the captured image”); operation (c′−1) of correcting, by the processor, the captured video using the transmission rate calculated in operation (b) (Kawata, ¶ [0089]: “The transmittance detection unit 52 uses the transmittance estimated … and outputs the transmittance map … The haze removal unit 53 removes the haze from the captured image acquired by the imaging apparatus on the basis of the transmittance map”); The combination of Miller, McDonnell and Kawata does not explicitly disclose operation (c′−2) of calculating, by the processor, a comparison value of brightness between the captured video and the video corrected. Suryawanshi is in the same field of art of correction of foggy images. Further, Suryawanshi teaches operation (c′−2) of calculating, by the processor, a comparison value of brightness between the captured video and the video corrected. (Suryawanshi, page 2-3, section C. Sum of Absolute Difference (SAD): “Sum of Absolute difference is the most commonly used metric to determine the best match. It is calculated by taking the absolute difference between each pixel in the original image and the corresponding pixel in the image being used for comparison.”
Suryawanshi uses SAD to calculate the pixel-intensity difference between the input image and the output image) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Miller, McDonnell and Kawata by incorporating the SAD method of comparing image data that is taught by Suryawanshi, to make a system that not only performs fog correction but also estimates the effectiveness of the correction; thus, one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes there is a need for a method to quantify the effectiveness of the fog correction method (Suryawanshi, section V Conclusion). The combination of Miller, McDonnell, Kawata and Suryawanshi then teaches operation (c′−3) of determining, by the processor, whether the mobile object is present in a low visibility section using the comparison value calculated (Kawata, ¶ [0081]: “The body system control unit compares an average value of the entire or partial region of the transmittance map with a threshold value, and determines whether there is a partial region where the average value of the transmittance is lower than the predetermined threshold value or the transmittance is lower than a predetermined threshold value”. Kawata teaches detecting haze/fog/cloud by comparing the transmittance value of captured image data with a threshold or reference value) (Suryawanshi, page 2-3, section C. Sum of Absolute Difference.
Suryawanshi teaches comparing image data between input and output); and operation (d) of, when it is determined that the mobile object is not positioned in the low visibility section (Kawata, ¶ [0092]: “a lower limit value t0 may be set in advance, and if the transmittance ta(x) is smaller than the lower limit value t0, the haze removal may be performed, using the lower limit value t0”, ¶ [0081]: “for example, puts the fog lamps into the lighting state in a case where the number of pixels whose transmittance is lower than the predetermined threshold is larger than the predetermined threshold value in the entire transmittance map, and puts the fog lamps into the non-lighting state in a case where the number of the relevant pixels is not lower than the predetermined threshold value”. Kawata teaches detecting haze based on the transmittance value), controlling, by the processor, a position of the mobile object so that the mobile object is present in the low visibility section. (Miller, ¶ [0065]: “the UAV may include a vehicle control program 262 that may be executed onboard the UAV such as for enabling the UAV to perform one or more tasks autonomously”, ¶ [0033]: “the UAV may fly to a designated location and may maintain a position at the designated location for performing observer functions. As another example, the UAV may change its position”) (McDonnell, ¶ [0028]: “a fixed forward facing camera 49 is shown that could be used to detect the bottom of the cloud layer so as to maintain the proper altitude”, Claim 9: “aircraft is flown high enough within or above said clouds or other obscurants to stay hidden”) (The Examiner notes that, when combined, Miller, in view of McDonnell, in view of Kawata discloses a system to control a drone/UAV to hide in detected cloud/haze/fog)

CLAIM 13

In regards to Claim 13, the combination of Miller, McDonnell, Kawata and Suryawanshi teaches the method of Claim 11.
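As context for the SAD mapping relied on in these rejections, the cited expression Result = ∑(k=1 to n) |Ak − Bk| can be sketched in a few lines. The frame values below are hypothetical stand-ins for the claimed captured video (A) and corrected video (B), not data from any cited reference:

```python
# Hypothetical flattened pixel values for a captured frame (A) and a
# dehazed/corrected frame (B); real values would come from the claimed
# video input unit and video correction unit.
captured = [100, 120, 130, 110]
corrected = [140, 150, 160, 150]

def sad(a, b):
    """Sum of absolute differences: Result = sum over k of |Ak - Bk|."""
    return sum(abs(x - y) for x, y in zip(a, b))

print(sad(captured, corrected))  # prints 140 for these values
```

On these hypothetical values the accumulated brightness difference is 140; a large accumulated difference between input and corrected frames is the quantity the rejection maps to detecting a low-visibility section.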
In addition, the combination of Miller, McDonnell, Kawata and Suryawanshi teaches in operation (c′−2), the processor divides the captured video and the corrected video into at least one block area, selects at least one block area from the two divided videos (Kawata, ¶ [0080]: “The presence or absence of the haze is determined with respect to an entire or partial region of the transmittance map. The partial region may be each divided region when the entire transmittance map is divided into a plurality of regions, or may be one or a plurality of regions set with a predetermined region size at a preset position in the transmittance map”), and calculates a brightness comparison value, which accumulates a difference in brightness between the two videos, by the expression Result = ∑(k=1 to n) |Ak − Bk| (n: natural number, Ak: kth area or pixel of the captured video, Bk: kth area or pixel of the corrected video). (Suryawanshi, page 2-3, section C. Sum of Absolute Difference (SAD): “Sum of Absolute difference is the most commonly used metric to determine the best match. It is calculated by taking the absolute difference between each pixel in the original image and the corresponding pixel in the image being used for comparison.” Suryawanshi uses SAD to calculate the pixel-intensity difference between the input image and the output image)

CLAIM 15

In regards to Claim 15, the combination of Miller, McDonnell and Kawata teaches the method of Claim 7.
In addition, the combination of Miller, McDonnell and Kawata teaches when it is determined in operation (c) that the mobile object moves forward, after operation (c), operation (c′−1) of correcting, by the processor, the captured video using the transmission rate calculated in operation (b); (Kawata, ¶ [0089]: “The transmittance detection unit 52 uses the transmittance estimated … and outputs the transmittance map … The haze removal unit 53 removes the haze from the captured image acquired by the imaging apparatus on the basis of the transmittance map”) The combination of Miller, McDonnell and Kawata does not explicitly disclose calculating, by the processor, a comparison value of brightness between the captured video and the video corrected. Suryawanshi is in the same field of art of correction of foggy images. Further, Suryawanshi teaches calculating, by the processor, a comparison value of brightness between the captured video and the video corrected. (Suryawanshi, page 2-3, section C. Sum of Absolute Difference (SAD): “Sum of Absolute difference is the most commonly used metric to determine the best match. It is calculated by taking the absolute difference between each pixel in the original image and the corresponding pixel in the image being used for comparison.”
Suryawanshi uses SAD to calculate the pixel-intensity difference between the input image and the output image) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Miller, McDonnell and Kawata by incorporating the SAD method of comparing image data that is taught by Suryawanshi, to make a system that not only performs fog correction but also estimates the effectiveness of the correction; thus, one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes there is a need for a method to quantify the effectiveness of the fog correction method (Suryawanshi, section V Conclusion). The combination of Miller, McDonnell, Kawata and Suryawanshi then teaches determining, by the processor, whether the mobile object is present in a low visibility section using the comparison value calculated. (Kawata, ¶ [0081]: “The body system control unit compares an average value of the entire or partial region of the transmittance map with a threshold value, and determines whether there is a partial region where the average value of the transmittance is lower than the predetermined threshold value or the transmittance is lower than a predetermined threshold value”. Kawata teaches detecting haze/fog/cloud by comparing the transmittance value of captured image data with a threshold or reference value) (Suryawanshi, page 2-3, section C. Sum of Absolute Difference. Suryawanshi teaches comparing image data between input and output) Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

CLAIM 8

Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Miller, in view of McDonnell, in view of Kawata, and further in view of Hong (US-20170316551-A1, hereinafter Hong).
In regards to Claim 8, the combination of Miller, McDonnell and Kawata teaches the method of Claim 7. The combination of Miller, McDonnell and Kawata does not explicitly disclose the processor calculates a range of the transmission rate t(x) by the expression 1 − I(x)/A ≤ t(x) ≤ 1 using a pixel value I(x) of the captured video and atmospheric brightness A in the captured video. Hong is in the same field of art of correction of foggy/hazed images. Further, Hong teaches the processor calculates a range of the transmission rate t(x) by the expression 1 − I(x)/A ≤ t(x) ≤ 1 using a pixel value I(x) of the captured video (Hong, ¶ [0072]: “I(x) is a value of the xth pixel of the hazy image obtained by a camera”) and atmospheric brightness A in the captured video. (Hong, ¶ [0072]: “A is the atmospheric brightness value of a pixel in the image”) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Miller, McDonnell and Kawata by incorporating the correction method for foggy/hazed images that is taught by Hong, to make a method to correct a foggy/hazed image without a halo effect; thus, one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes there is a need to correct a hazed image without a halo effect (Hong, [0029]: “it is possible to maintain dehazing performance without a halo effect”). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Allowable Subject Matter

Claims 2-3, 6, 9, 12 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
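The transmission-rate bound mapped to Claim 8 above, 1 − I(x)/A ≤ t(x) ≤ 1, can be checked numerically. The pixel value and atmospheric brightness below are hypothetical illustration values, not taken from Hong:

```python
def transmission_range(i_x: float, a: float) -> tuple[float, float]:
    """Range of t(x) per the cited expression: 1 - I(x)/A <= t(x) <= 1."""
    return 1.0 - i_x / a, 1.0

# Hypothetical values: pixel value I(x) = 200, atmospheric brightness A = 250.
low, high = transmission_range(200.0, 250.0)
print(low, high)  # lower bound near 0.2, upper bound 1.0
```

Note that as the pixel value I(x) approaches the atmospheric brightness A, the lower bound drops toward 0, i.e. a heavily washed-out pixel is consistent with very low transmission.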
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NHUT HUY (JEREMY) PHAM whose telephone number is (703)756-5797. The examiner can normally be reached Mon-Fri, 8:30am-6pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, O'Neal Mistry, can be reached at (313)446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NHUT HUY PHAM/Examiner, Art Unit 2674

/Ross Varndell/Primary Examiner, Art Unit 2674

Prosecution Timeline

Feb 21, 2024
Application Filed
Jan 16, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598397
DIRT DETECTION METHOD AND DEVICE FOR CAMERA COVER
2y 5m to grant Granted Apr 07, 2026
Patent 12598074
FACIAL RECOGNITION METHOD AND APPARATUS, DEVICE, AND MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12597254
TRACKING OPERATING ROOM PHASE FROM CAPTURED VIDEO OF THE OPERATING ROOM
2y 5m to grant Granted Apr 07, 2026
Patent 12592087
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12579622
METHOD AND APPARATUS FOR PROCESSING IMAGE SIGNAL, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+26.8%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 53 resolved cases by this examiner. Grant probability derived from career allow rate.
