Prosecution Insights
Last updated: April 19, 2026
Application No. 18/627,449

IMAGE METHOD AND IMAGE DEVICE

Status: Non-Final Office Action (§102), Round 1
Filed: Apr 05, 2024
Examiner: SABAH, HARIS
Art Unit: 2682
Tech Center: 2600 (Communications)
Assignee: Sz DJI Technology Co. Ltd.
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 7m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 76% (above average; 511 granted / 668 resolved; +14.5% vs TC avg)
Interview Lift: +16.6% for resolved cases with an interview (strong)
Typical Timeline: 2y 7m average prosecution; 19 applications currently pending
Career History: 687 total applications across all art units

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§103: 57.1% (+17.1% vs TC avg)
§112: 6.0% (-34.0% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 668 resolved cases

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

2. Claims 1-20 are pending in this application.

Priority

3. Acknowledgment is made that this application is a continuation (CON) of application no. PCT/CN2021/126852, filed on 10/27/2021.

Drawings

4. The drawings filed on 04/05/2024 are acceptable for examination purposes.

Information Disclosure Statement

5. The information disclosure statements filed on 04/05/2024 and 08/12/2024 are in compliance with the provisions of 37 CFR 1.97 and have therefore been considered.

Claim Rejections - 35 USC § 102

6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

7. Claims 1 and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Seo, US Pub. 2014/0071264.

As to claim 1 [independent], Seo teaches an imaging device [fig. 1, element 150; 0044] comprising: at least one imaging sensor [fig. 2, element 30; 0046-0048] configured to obtain an original image of a current shooting scene [fig. 2, element 30; 0046-0048: Seo teaches that device 30 obtains the image(s) of the scene]; at least one color temperature sensor [fig. 2, element 190; 0046-0048] configured to collect color temperature data of the current shooting scene [fig. 2, element 190; 0016-0019, 0046-0048: Seo teaches that sensor 190 acquires color temperature data of the captured image(s) of the scene (paras. 0016-0019) to detect whether the image capture condition is an underwater condition, and that controller/processor 100 determines whether the image capture condition is the underwater condition based on the detection result of underwater recognition sensor 190]; at least one processor [fig. 2, element 100; 0045, 0075]; and at least one memory [fig. 2, element 60; 0047, 0075] including computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the device to at least [fig. 2, element 60; 0047, 0075: Seo teaches that memory 60 stores program data to be executed by controller/processor 100 to implement the image processing function]: identify a result of identification of the current shooting scene based on the color temperature data, wherein the result of identification is associated with a scene type of being in an underwater scene or not in an underwater scene [figs. 7-9; 0016-0019, 0027-0030, 0062-0067: Seo teaches at least that, based on the color temperature data corresponding to the water's color (paras. 0016-0019, 0027-0030), controller 100 determines whether the captured image of the scene is or is not an underwater scene (paras. 0064-0067), and when the captured image is determined to be an underwater scene, a message is displayed for the user regarding the white balance process to be performed on the captured underwater image of the scene (fig. 8)]; and perform a white balance process on the original image based on the result of identification to obtain a processed image of the current shooting scene [figs. 7-9; 0016-0019, 0027-0030, 0064-0067: Seo teaches this per the same passages cited for the identifying step above].

As to claim 19 [independent], claim 19 recites essentially the same subject matter as independent claim 1, and is therefore rejected on the same rationale as applied to claim 1.

Allowable Subject Matter

8. Claims 2-18 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

9.
The following is an examiner's statement of reasons for allowance:

Dependent claims 2-3 are allowable over the prior art of record since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations reciting "wherein to identify the result of identification of the current shooting scene based on the color temperature data, the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: input the color temperature data into a scene classification model, the scene classification model determining a confidence level that the current shooting scene is the underwater scene., wherein to identify the result of identification of the current shooting scene based on the color temperature data, the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: determine validity of the color temperature data before inputting the color temperature data into the scene classification model, wherein to determine the validity of the color temperature data, the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: compare the color temperature data with a preset color temperature range and determine that the color temperature data is valid in response to that the color temperature data is within the preset color temperature range; or calculate a difference between a point in time at which the color temperature data is captured by the color temperature sensor and a point in time at which the color temperature data captured by the color temperature sensor is obtained by the processor, and determine that the color temperature data is valid in response to that the difference is less than or equal to a preset threshold", in combination with all other limitations as claimed.

Dependent claim 4 is allowable over the prior art of record since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations reciting "wherein the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: filter the color temperature data to reject anomalous color temperature data points before inputting the color temperature data into the scene classification model", in combination with all other limitations as claimed.

Dependent claim 5 is allowable over the prior art of record since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations reciting "wherein the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: before inputting the color temperature data into the scene classification model, normalize values of all channels in the color temperature data using a value of a green band channel or a full band channel in the color temperature data as a base value", in combination with all other limitations as claimed.
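For readers less familiar with claim language, the claims 2-5 limitations above amount to a small data-validation and normalization routine. A purely illustrative Python sketch follows; the range, latency threshold, and channel names are assumptions for illustration, not values from the application:

```python
# Illustrative sketch of the validity checks (claims 2-3) and channel
# normalization (claim 5). All constants are hypothetical assumptions.

VALID_RANGE = (2000.0, 12000.0)  # preset color temperature range, kelvin (assumed)
MAX_LATENCY = 0.05               # preset capture-to-readout threshold, seconds (assumed)

def is_valid(cct, capture_time, readout_time):
    """Color temperature data is valid if it lies within the preset range,
    or if the sensor-capture-to-processor-readout latency is within the
    preset threshold (the claims recite these as alternatives)."""
    in_range = VALID_RANGE[0] <= cct <= VALID_RANGE[1]
    fresh = (readout_time - capture_time) <= MAX_LATENCY
    return in_range or fresh

def normalize_channels(channels, base="green"):
    """Normalize the values of all channels using the green band (or full
    band) channel value as the base value, as in claim 5."""
    base_value = channels[base]
    return {name: value / base_value for name, value in channels.items()}
```

Under this sketch, a reading of 5000 K with 10 ms latency passes both checks, while a 500 K reading seen a full second late fails both.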
Dependent claims 6-12 are allowable over the prior art of record since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations reciting "wherein to identify the result of identification of the current shooting scene based on the color temperature data, the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: obtain one or more parameters of the original image, the one or more parameters including at least one of an underwater area percentage, an overwater area percentage, a color temperature value or a luminance value; and input the one or more parameters together with the color temperature data into the scene classification model, the scene classification model determining the confidence level that the current shooting scene is the underwater scene., wherein the underwater area percentage and the overwater area percentage are determined according to a predetermined RGB range of the underwater scene and a predetermined RGB range of an overwater scene, respectively., wherein the one or more parameters further include an energy percentage in different wavelength bands, the energy percentage in different wavelength bands being determined based on the color temperature data., wherein to determine the confidence level that the current shooting scene is the underwater scene by inputting the one or more parameters together with the color temperature data into the scene classification model, the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: determine a first confidence level that the current shooting scene is the underwater scene based on the color temperature data; determine a second confidence level that the current shooting scene is the underwater scene based on the one or more parameters; and determine the confidence level that the current shooting scene is the underwater scene based on the first confidence level and the second confidence level using the scene classification model., wherein to determine the second confidence level that the current shooting scene is the underwater scene based on the one or more parameters, the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: determine weights corresponding to the one or more parameters based on values of the one or more parameters, wherein the values of different parameters have different correspondences with the weights; and determine the second confidence level based on the weights corresponding to the one or more parameters., wherein to determine the second confidence level based on weights corresponding to the one or more parameters, the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: perform cumulative multiplication of the weights corresponding to the one or more parameters; and determine the second confidence level based on a result of the cumulative multiplication., wherein, to determine the second confidence level based on the weights corresponding to the one or more parameters, the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: adjust the second confidence level based on the energy percentage of different bands, wherein in response to that an energy percentage of near-infrared band and/or red band is relatively higher than other bands, decrease the second confidence level; or in response to that an energy percentage of a blue-violet band is relatively higher than other bands, increase the second confidence level", in combination with all other limitations as claimed.
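The claims 6-12 limitations describe a two-stage confidence computation: per-parameter weights multiplied cumulatively into a second confidence, adjusted by band-energy dominance, then fused with a first, color-temperature-based confidence. A hedged Python sketch, with the band names, adjustment factors, and the averaging fusion rule invented for illustration (the claims leave the final fusion to the scene classification model):

```python
# Hypothetical sketch of the confidence computation in claims 6-12.

def second_confidence(weights):
    """Cumulative multiplication of the per-parameter weights (claims 10-11)."""
    c = 1.0
    for w in weights:
        c *= w
    return c

def adjust_for_band_energy(confidence, band_energy):
    """Claim 12: decrease the confidence when near-infrared/red energy
    dominates; increase it when blue-violet dominates. The 0.5x and 1.5x
    factors here are assumptions, not from the application."""
    dominant = max(band_energy, key=band_energy.get)
    if dominant in ("near_infrared", "red"):
        return confidence * 0.5
    if dominant == "blue_violet":
        return min(1.0, confidence * 1.5)
    return confidence

def combined_confidence(first, second):
    """One possible fusion of the first (color-temperature-based) and second
    (parameter-based) confidence levels: a simple average."""
    return (first + second) / 2.0
```

The multiplicative combination means any single low-weight parameter pulls the overall confidence down sharply, which matches the conservative character of the claimed adjustment rules.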
Dependent claim 13 is allowable over the prior art of record since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations reciting "wherein to determine the confidence level that the current shooting scene is the underwater scene, the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: perform a time domain filtering on the confidence level that the current shooting scene is the underwater scene based on a confidence level that a previous frame or frames of a shooting scene adjacent to a current frame of the current shooting scene in time is the underwater scene", in combination with all other limitations as claimed.

Dependent claims 14-16 are allowable over the prior art of record since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations reciting "wherein to perform the white balance process of the original image based on the result of the identification, the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: obtain a first white balance gain for the original image captured for the current shooting scene based on an overwater scene white balance process; obtain a second white balance gain for the original image captured for the current shooting scene based on an underwater scene white balance process; fuse the first white balance gain and the second gain to obtain a third white balance gain, wherein a proportion of weights of the first white balance gain and the second white balance gain at a time of fusion is determined based on the confidence level; and perform the white balance process on the original image of the current shooting scene based on the third white balance gain to obtain the processed image of the current shooting scene., wherein to perform the white balance process of the original image based on the result of the identification, the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: perform white balance correction on at least one of the first white balance gain or the second white balance gain before fusing the first white balance gain and the second white balance gain to obtain the third white balance gain., wherein to perform the white balance process of the original image based on the result of the identification, the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: obtain at least one of a color temperature value, a luminance value or a hue value of the original image; and determine a coefficient for the white balance correction based on the at least one of the color temperature value, the luminance value or the hue value", in combination with all other limitations as claimed.
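The claims 14-16 gain-fusion step can be pictured as a confidence-weighted blend of an overwater gain and an underwater gain. A minimal sketch, assuming per-channel gain dictionaries and a linear blend (the claims require only that the fusion weights be determined from the confidence level, not this particular formula):

```python
# Minimal sketch of the gain fusion in claims 14-16. The dictionary channel
# layout and linear blend are illustrative assumptions.

def fuse_gains(overwater_gain, underwater_gain, confidence):
    """Blend the overwater (first) and underwater (second) white balance
    gains into a third gain, weighting by the underwater-scene confidence:
    full confidence selects the underwater gain, zero selects the overwater."""
    return {
        ch: (1.0 - confidence) * overwater_gain[ch] + confidence * underwater_gain[ch]
        for ch in overwater_gain
    }
```

At confidence 0.5 each channel lands halfway between the two gains, so the output drifts smoothly rather than snapping between above-water and underwater corrections.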
Dependent claims 17-18 are allowable over the prior art of record since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations reciting "wherein to perform the white balance process of the original image based on the result of the identification, the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: determine a confidence level of an underwater scene of a previous frame of a shooting scene that is temporally adjacent to a current frame of the current shooting scene; and based on a difference between the confidence level of the previous frame of the shooting scene and the confidence level of the current frame of the current shooting scene, filter the processed image to adjust a convergence speed of the white balance process., wherein the at least one memory and the computer program code are further configured, with the at least one processor, to cause the device to at least: in response to that the confidence level of the previous frame of the shooting scene is greater than the confidence level of the current frame of the current shooting scene, filter the processed image to increase the convergence speed of the white balance process; or in response to that the confidence level of the previous frame of the shooting scene is less than the confidence level of the current frame of the current shooting scene, filter the processed image to reduce the convergence speed of the white balance process", in combination with all other limitations as claimed.
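The claims 17-18 limitations describe choosing a white balance convergence speed from the frame-to-frame change in underwater confidence. A sketch under assumed filter constants (the alpha values and the IIR blend are hypothetical; the claims specify only the direction of the adjustment):

```python
# Hedged sketch of the convergence-speed rule in claims 17-18.

def pick_convergence_alpha(prev_conf, curr_conf, fast=0.8, slow=0.2):
    """Higher alpha converges faster. Per claim 18: speed up when the
    confidence dropped from the previous frame, slow down when it rose.
    The fast/slow constants are assumptions for illustration."""
    if prev_conf > curr_conf:
        return fast
    if prev_conf < curr_conf:
        return slow
    return (fast + slow) / 2.0

def filter_frame(prev_pixel, curr_pixel, alpha):
    """Temporal IIR blend of the processed image toward the current frame."""
    return (1.0 - alpha) * prev_pixel + alpha * curr_pixel
```

Slowing convergence when confidence is rising damps visible color swings while the classifier is still committing to the underwater decision.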
Dependent claim 20 is allowable over the prior art of record since the cited references, taken individually or in combination, fail to anticipate, disclose, or suggest the claim limitations reciting "wherein the identifying a result of identification of the current shooting scene based on the color temperature data comprises: inputting the color temperature data into a scene classification model, the scene classification model determining a confidence level that the current shooting scene is the underwater scene", in combination with all other limitations as claimed.

Conclusion

10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARIS SABAH, whose telephone number is (571) 270-3917. The examiner can normally be reached on Monday/Thursday from 7:00 AM to 5:30 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benny Tieu, can be reached at (571) 272-7490. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. The examiner's personal fax number is (571) 270-4917.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/HARIS SABAH/
Examiner, Art Unit 2682

Prosecution Timeline

Apr 05, 2024
Application Filed
Jan 26, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602947
Method And System For Extracting Data From Documents And Automatically Modifying Data Item Of The Extracted Data Based On Guidance Retrieved From Feedback File
2y 5m to grant Granted Apr 14, 2026
Patent 12602561
IMAGE PROCESSING DEVICE, PRINTING SYSTEM, AND IMAGE PROCESSING PROGRAM
2y 5m to grant Granted Apr 14, 2026
Patent 12596510
PRINTING APPARATUS IS CONNECTABLE TO INSPECTION APPARATUS FOR COMPARING A READ IMAGE TO A CORRECT IMAGE TO DETERMINE IF THE READ IMAGE IS FREE OF ABNORMALITIES
2y 5m to grant Granted Apr 07, 2026
Patent 12597131
Children Visual Attention Abnormal Screening Method, Involves Extracting Eye Movement, Facial Expression And Head Movement Multi-mode Characteristic Of Children Based On Multimodal Data Learning
2y 5m to grant Granted Apr 07, 2026
Patent 12591969
MEDICAL INFORMATION PROCESSING APPARATUS, METHOD AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM TO ACQUIRE BIOMETRIC DATA REGARDING SKIN IMAGE AND TO CONTINUOUSLY MONITOR INTERNAL BODY COMPONENT WITHOUT USING SPECIAL EQUIPMENT
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 93% (+16.6%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 668 resolved cases by this examiner; grant probability derived from the career allow rate.
