Prosecution Insights
Last updated: April 19, 2026
Application No. 18/411,362

COLOR TEMPERATURE ADJUSTMENT EFFECT DETECTION METHOD AND APPARATUS, COMPUTER DEVICE, MEDIUM, AND PRODUCT

Status: Non-Final OA (§103)
Filed: Jan 12, 2024
Examiner: LIU, XIAO
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Glenfly Tech Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (above average; 257 granted / 290 resolved; +26.6% vs TC avg)
Interview Lift: +11.5% (moderate), among resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 44 applications currently pending
Career History: 334 total applications across all art units

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 290 resolved cases
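The per-statute deltas above are all measured against the same Tech Center baseline. A short check (illustrative only, using the figures quoted in this section) recovers the implied TC average by subtraction and shows it is the same 40.0% for every statute, consistent with a single overall TC-average estimate rather than per-statute baselines:

```python
# Figures quoted in the Statute-Specific Performance section (percent).
examiner = {"101": 8.8, "103": 50.9, "102": 17.0, "112": 17.4}
delta_vs_tc = {"101": -31.2, "103": 10.9, "102": -23.0, "112": -22.6}

# Implied Tech Center average = examiner rate minus the quoted delta.
tc_average = {s: round(examiner[s] - delta_vs_tc[s], 1) for s in examiner}
print(tc_average)  # every statute resolves to the same 40.0% baseline
```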

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/23/2025 and 06/23/2025 have been considered by the examiner.

Specification

The disclosure is objected to because of the following informalities: in paragraph [0066], symbols “102”, “104” and “106” are not shown in FIG. 1. Appropriate correction is required.

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.

The abstract of the disclosure is objected to because it contains the phrase “The present disclosure relates to”. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a color temperature acquisition module configured to”, “an image acquisition module configured to”, and “an image display module configured to” in claim 10.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-7 and 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al (US 20050275912 A1), hereinafter Chen, in view of Kring et al (US 20180288382 A1).

Regarding claim 1, Chen discloses a color temperature adjustment effect detection method, applied to a first terminal (FIG. 1, display device 30), the method comprising (Abstract; FIGS. 1-6; [0052]-[0053]): acquiring an actual used color temperature obtained based on a real standard color temperature sent by a second terminal (FIG. 4, steps 11-13; FIG. 1, Colorimeter 10); converting the actual used color temperature into a trend vector or tristimulus value or output bases R1, G1, B1 carrying the actual used color temperature (FIG. 4, steps 14-19; FIG. 5, step 23; FIG. 6; [0010]; [0022]; [0043]), and adjusting output bases R2, G2, B2 according to the trend vector or scaled tristimulus value or output bases R1, G1, B1 (FIG. 4, steps 18-19; FIG. 5, step 24; [0038], “repetitive comparisons of the tolerance of known output bases R2, G2, B2 and the calculated output bases R1, G1, B1”); displaying the adjusted output bases R2, G2, B2 based on the actual used color temperature (FIG. 1, loop back from output unit 23 and data writing unit 40 to display device 30; FIGS. 4-5), and sending the adjusted output bases R2, G2, B2 to the second terminal (FIG. 1, Colorimeter 10, data processing unit; [0012]), so that the second terminal parses the adjusted output bases R2, G2, B2 to obtain the actual used color temperature (FIGS. 2-3; FIG. 4, step 12; FIG. 5), acquires an actual rendered color temperature based on a display result (FIG. 4, steps 12-13; FIG. 5, step 22), and acquires a color temperature adjustment effect of the first terminal based on the real standard color temperature, the actual used color temperature, and the actual rendered color temperature (FIGS. 2-3; FIG. 4, steps 17-19; FIG. 5, steps 24-26).

Chen does not disclose a video frame. Chen does not disclose embedding a target matrix into a video frame to obtain an embedded image.

A person of ordinary skill in the art would understand that Chen’s method can be used to display an image with the same color on different brands of display devices (based on Red, Blue and Green (RGB) for color reproduction). In this case, for an image to be displayed, a target matrix is a matrix with trend vectors or tristimulus values or output bases R1, G1, B1 as elements corresponding to each pixel, and the image with calibrated or adjusted colors is the embedded image.

In the same field of endeavor, Kring teaches a method to adjust color temperature and luminance of a display output (Kring: Abstract; FIGS. 1-4; [0053], “the methods … being performed by … a device at the customer premises …, the methods could also be performed ‘in the cloud’”; [0060]-[0063]). Kring teaches that a video output signal comprising a plurality of image frames is delivered to a display device ([0003]; [0031]). Kring further teaches embedding the target matrix into a video frame to obtain an embedded image (Kring: FIGS. 3-4; [0043], “Using the reference white point, the image frame can then be converted from the RGB color space to the XYZ color space by pre-multiplying the RGB matrix by a conversion matrix M”; [0045]; [0046], “each source color … in the image frame is remapped to a target color”; equations 1-7).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chen with the teaching of Kring by embedding a target matrix into a video frame to obtain an embedded image in order to achieve a desired picture quality (Kring: [0002]).

Regarding claim 4, Chen in view of Kring discloses the method of claim 1. Chen in view of Kring further teaches replacing color light signal output bases R2, G2, B2 by the reference objective color signal output bases R1, G1, B1 if there is a difference between the two (Chen: FIGS. 4-5).

Chen does not disclose determining an embedding region of the target matrix, or replacing image content in the embedding region with the target matrix to obtain the embedded image. Chen does not disclose a video frame. Chen does not disclose embedding a target matrix into a video frame to obtain an embedded image.

A person of ordinary skill in the art would understand that Chen’s method can be used to display an image with the same color on different brands of display devices (based on Red, Blue and Green (RGB) for color reproduction). In this case, for an image to be displayed, a target matrix is a matrix with trend vectors or tristimulus values or output bases R1, G1, B1 as elements corresponding to each pixel, and the image with replaced RGBs or image contents is the embedded image.

In the same field of endeavor, Kring teaches a method to adjust color temperature and luminance of a display output (Kring: Abstract; FIGS. 1-4; [0053], “the methods … being performed by … a device at the customer premises …, the methods could also be performed ‘in the cloud’”; [0060]-[0063]). Kring teaches that a video output signal comprising a plurality of image frames is delivered to a display device ([0003]; [0031]). Kring further teaches determining an embedding region of the target matrix (Kring: FIG. 4, step 308; [0041], “identifies regions of increased blue light emission in the image frame”), and replacing image content in the embedding region with the target matrix to obtain the embedded image (Kring: FIGS. 3-4; [0043], “Using the reference white point, the image frame can then be converted from the RGB color space to the XYZ color space by pre-multiplying the RGB matrix by a conversion matrix M”; [0045]; [0046], “each source color … in the image frame is remapped to a target color”; [0051], “corrections for the regions of increased blue light emission that were identified in step 308”; equations 1-7).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chen with the teaching of Kring by embedding a target matrix into a video frame to obtain an embedded image in order to achieve a desired picture quality (Kring: [0002]).

Regarding claim 5, Chen in view of Kring discloses the method of claim 1. Chen in view of Kring further teaches, prior to the sending of the embedded image to the second terminal, sending a video frame into which an indication matrix is embedded to the second terminal, and displaying the video frame, the indication matrix being configured to prompt the second terminal to prepare to display the embedded image (Chen: FIG. 1; FIG. 4, step 18; FIG. 5, step 25; [0052], “writes the results into the color profile of the display device 30, hence can achieve fast and dynamic calibration”).
Regarding claim 6, Chen discloses a color temperature adjustment effect detection method, applied to a second terminal (FIG. 1, data processing unit 20), the method comprising (Abstract; FIGS. 1-6; [0052]-[0053]): sending a real standard color temperature to the first terminal, so that the first terminal acquires an actual used color temperature obtained based on the real standard color temperature (FIG. 4, steps 11-13; FIG. 1, Colorimeter 10, display device 30; [0022]-[0023]), converts the actual used color temperature into a trend vector or tristimulus value or output bases R1, G1, B1 carrying the actual used color temperature (FIG. 4, steps 14-19; FIG. 5, step 23; FIG. 6; [0010]; [0022]; [0043]), adjusts output bases R2, G2, B2 according to the trend vector or output bases R1, G1, B1 (FIG. 4, steps 18-19; FIG. 5, step 24; [0038], “repetitive comparisons of the tolerance of known output bases R2, G2, B2 and the calculated output bases R1, G1, B1”), and displays the adjusted output bases R2, G2, B2 based on the actual used color temperature (FIG. 1, loop back from output unit 23 and data writing unit 40 to display device 30; FIGS. 4-5); acquiring the adjusted output bases R2, G2, B2 returned by the first terminal (FIG. 1, Colorimeter 10, display device 30), and parsing the adjusted output bases R2, G2, B2 to obtain the actual used color temperature (FIGS. 1-3; FIG. 4, step 12; FIG. 5); acquiring an actual rendered color temperature based on a display result of the first terminal (FIG. 4, steps 12-13; FIG. 5, step 22); and acquiring a color temperature adjustment effect of the first terminal based on the real standard color temperature, the actual used color temperature, and the actual rendered color temperature (FIGS. 2-3; FIG. 4, steps 17-19; FIG. 5, steps 24-26).

Chen does not disclose a video frame. Chen does not disclose embedding a target matrix into a video frame to obtain an embedded image.

A person of ordinary skill in the art would understand that Chen’s method can be used to display an image with the same color on different brands of display devices (based on Red, Blue and Green (RGB) for color reproduction). In this case, for an image to be displayed, a target matrix is a matrix with trend vectors or tristimulus values or output bases R1, G1, B1 as elements corresponding to each pixel, and the image with calibrated or adjusted colors is the embedded image.

In the same field of endeavor, Kring teaches a method to adjust color temperature and luminance of a display output (Kring: Abstract; FIGS. 1-4; [0060]-[0063]). Kring teaches that a video output signal comprising a plurality of image frames is delivered to a display device ([0003]; [0031]). Kring further teaches embedding the target matrix into a video frame to obtain an embedded image (Kring: FIGS. 3-4; [0043], “Using the reference white point, the image frame can then be converted from the RGB color space to the XYZ color space by pre-multiplying the RGB matrix by a conversion matrix M”; [0045]; [0046], “each source color … in the image frame is remapped to a target color”; equations 1-7).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chen with the teaching of Kring by embedding a target matrix into a video frame to obtain an embedded image in order to achieve a desired picture quality (Kring: [0002]).

Regarding claim 7, Chen in view of Kring teaches the method of claim 6. Chen discloses a storage unit (Chen: FIG. 1; [0024], “such as memory, hard disk/driver, diskettes, or the like”) to store the standard reference color gamut data and results of the process (Chen: [0025]; [0029], “record … in a storage unit 21 retrievable by a computer to become a current color temperature record”).
Chen does not disclose selecting, in order of the timestamps, the embedded image corresponding to the earliest timestamp for parsing.

In the same field of endeavor, Kring teaches a method to adjust color temperature and luminance of a display output (Kring: Abstract; FIGS. 1-4; [0060]-[0063]). Kring teaches that a video output signal comprising a plurality of image frames is delivered to a display device ([0003]; [0031]). Kring further teaches selecting, in order of the timestamps, the embedded image corresponding to the earliest timestamp for parsing (Kring: [0039], “image frames are selected in sequence (e.g., in order of time stamp) starting from the time at which the adjustment begins”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chen with the teaching of Kring by embedding a target matrix into a video frame to obtain an embedded image in order to achieve a desired picture quality (Kring: [0002]).

Regarding claim 10, Chen discloses a color temperature adjustment effect detection apparatus, comprising (Abstract; FIGS. 1-6; [0052]-[0053]): a color temperature acquisition module (FIG. 1, Colorimeter 10) configured to acquire an actual used color temperature obtained based on a real standard color temperature sent by a second terminal (FIG. 4, steps 11-13); an acquisition module (FIG. 1, data processing unit 20) configured to convert the actual used color temperature into a trend vector or tristimulus value or output bases R1, G1, B1 carrying the actual used color temperature (FIG. 4, steps 14-19; FIG. 5, step 23; FIG. 6; [0010]; [0022]; [0043]), and adjust output bases R2, G2, B2 according to the trend vector (FIG. 4, steps 18-19; FIG. 5, step 24; [0038], “repetitive comparisons of the tolerance of known output bases R2, G2, B2 and the calculated output bases R1, G1, B1”); and an image display module (FIG. 1, display device 30) configured to display the adjusted output bases R2, G2, B2 based on the actual used color temperature (FIG. 1, loop back from output unit 23 and data writing unit 40 to display device 30; FIGS. 4-5), and send the adjusted output bases R2, G2, B2 to the second terminal (FIG. 1, Colorimeter 10, data processing unit; [0012]), so that the second terminal analyzes the adjusted output bases R2, G2, B2 to obtain the actual used color temperature (FIGS. 2-3; FIG. 4, step 12; FIG. 5), acquires an actual rendered color temperature based on a display result (FIG. 4, steps 12-13; FIG. 5, step 22), and acquires a color temperature adjustment effect of the first terminal based on the real standard color temperature, the actual used color temperature, and the actual rendered color temperature (FIGS. 2-3; FIG. 4, steps 17-19; FIG. 5, steps 24-26).

Chen does not disclose a video frame. Chen does not disclose embedding a target matrix into a video frame to obtain an embedded image.

A person of ordinary skill in the art would understand that Chen’s method can be used to display an image with the same color on different brands of display devices (based on Red, Blue and Green (RGB) for color reproduction). In this case, for an image to be displayed, a target matrix is a matrix with trend vectors or tristimulus values or output bases R1, G1, B1 as elements corresponding to each pixel, and the image with calibrated or adjusted colors is the embedded image.

In the same field of endeavor, Kring teaches a method to adjust color temperature and luminance of a display output (Kring: Abstract; FIGS. 1-4; [0060]-[0063]). Kring teaches that a video output signal comprising a plurality of image frames is delivered to a display device ([0003]; [0031]). Kring further teaches embedding the target matrix into a video frame to obtain an embedded image (Kring: FIGS. 3-4; [0043], “Using the reference white point, the image frame can then be converted from the RGB color space to the XYZ color space by pre-multiplying the RGB matrix by a conversion matrix M”; [0045]; [0046], “each source color … in the image frame is remapped to a target color”; equations 1-7).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Chen with the teaching of Kring by embedding a target matrix into a video frame to obtain an embedded image in order to achieve a desired picture quality (Kring: [0002]).

Regarding claim 11, Chen in view of Kring discloses the method of claim 1. The combination further teaches a computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements steps in the method according to claim 1 (Chen: FIG. 1; [0024]; [0052]; see also Kring: FIG. 4; [0004]; [0028]; [0061]).

Regarding claim 12, Chen in view of Kring discloses the method of claim 1. The combination further teaches storing a computer program, wherein, when the computer program is executed by a processor, steps in the method according to claim 1 are implemented (Chen: FIG. 1; [0024]; [0052]; see also Kring: FIG. 4; [0004]; [0028]; [0061]).

Regarding claim 13, Chen in view of Kring discloses the method of claim 1. The combination further teaches a computer program, wherein, when the computer program is executed by a processor, steps in the method according to claim 1 are implemented (Chen: FIG. 1; [0024]; [0052]; see also Kring: FIG. 4; [0004]; [0028]; [0061]).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al (US 20050275912 A1), hereinafter Chen, in view of Kring et al (US 20180288382 A1), and further in view of Yang et al (CN 109547779 A), hereinafter Yang.

Regarding claim 9, Chen in view of Kring teaches the method of claim 6.
Chen in view of Kring does not teach determining that the adjustment is abnormal when at least one of the actual used color temperature and the actual rendered color temperature is inconsistent with the real standard color temperature; acquiring a device number of the first terminal when the adjustment of the first terminal is abnormal; and taking the embedded image as abnormal information, and saving the abnormal information and the device number to a target database as the color temperature adjustment effect.

However, Yang is an analogous art pertinent to the problem to be solved in this application and teaches a white balance debugging method. Yang further teaches determining that the adjustment is abnormal when at least one of the actual used color temperature and the actual rendered color temperature is inconsistent with the real standard color temperature (Yang: FIGS. 1-3); acquiring a device number of the first terminal when the adjustment of the first terminal is abnormal (Yang: FIG. 3; Page 3, step 4); and taking the embedded image as abnormal information, and saving the abnormal information and the device number to a target database as the color temperature adjustment effect (Yang: FIG. 3; Pages 2-3, steps 5-8).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Chen in view of Kring with the teaching of Yang by determining abnormal adjustment associated with a device number in order to provide low cost and high speed by performing the white balance adjustment during product production (Yang: Abstract; Page 2, 1st-2nd paragraphs).

Allowable Subject Matter

Claims 2-3 and 8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAO LIU, whose telephone number is (571) 272-4539. The examiner can normally be reached Monday-Thursday and alternate Fridays, 8:30-4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XIAO LIU/
Primary Examiner, Art Unit 2664
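For orientation when reading the rejection, the two color operations the examiner relies on can be sketched in a few lines: the RGB-to-XYZ "pre-multiplication by a conversion matrix M" quoted from Kring [0043], and the derivation of a correlated color temperature (CCT) of the kind compared in the claims. This sketch is illustrative only and is not taken from the cited references: the standard sRGB/D65 matrix stands in for Kring's matrix M, and McCamy's published approximation stands in for whatever colorimeter-side CCT computation Chen performs.

```python
# Illustrative sketch, not from the cited references. Assumptions: a standard
# linear-sRGB (D65) matrix stands in for Kring's conversion matrix M, and
# McCamy's 1992 approximation stands in for the colorimeter-side CCT math.

# Standard linear-sRGB -> XYZ (D65) conversion matrix.
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def rgb_to_xyz(rgb):
    """Pre-multiply a linear RGB triple by the conversion matrix."""
    return [sum(m * c for m, c in zip(row, rgb)) for row in SRGB_TO_XYZ]

def cct_mccamy(xyz):
    """Estimate correlated color temperature (K) via McCamy's approximation."""
    X, Y, Z = xyz
    total = X + Y + Z
    x, y = X / total, Y / total          # chromaticity coordinates
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

if __name__ == "__main__":
    # Full-intensity white in linear sRGB lands near the D65 white point,
    # i.e. a CCT of roughly 6500 K.
    print(round(cct_mccamy(rgb_to_xyz([1.0, 1.0, 1.0]))))
```

In these terms, the claimed comparison of the real standard, actual used, and actual rendered color temperatures reduces to comparing CCT values computed this way on each side of the first-terminal/second-terminal exchange.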

Prosecution Timeline

Jan 12, 2024: Application Filed
Dec 30, 2025: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

- Patent 12603972: WIRELESS TRANSMITTER IDENTIFICATION IN VISUAL SCENES (granted Apr 14, 2026; 2y 5m to grant)
- Patent 12592069: OBJECT RECOGNITION METHOD AND APPARATUS, AND DEVICE AND MEDIUM (granted Mar 31, 2026; 2y 5m to grant)
- Patent 12579834: Information Extraction Method and Apparatus for Text With Layout (granted Mar 17, 2026; 2y 5m to grant)
- Patent 12576873: SYSTEM AND METHOD OF CAPTIONS FOR TRIGGERS (granted Mar 17, 2026; 2y 5m to grant)
- Patent 12573175: TARGET TRACKING METHOD, TARGET TRACKING SYSTEM AND ELECTRONIC DEVICE (granted Mar 10, 2026; 2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview (+11.5%): 99%
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 290 resolved cases by this examiner. Grant probability derived from career allow rate.
