Prosecution Insights
Last updated: April 19, 2026
Application No. 18/441,500

DISPLAY DEVICE AND METHOD OF IMPROVING VISIBILITY OF IMAGE THEREFOR

Non-Final OA — §102, §103, §DP
Filed: Feb 14, 2024
Examiner: SETH, MANAV
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: LX SEMICON CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 91% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 91% — above average (+28.7% vs TC avg; 716 granted / 789 resolved)
Interview Lift: +7.8% — a moderate lift, measured across resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 13 applications currently pending
Career History: 802 total applications across all art units

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§103: 29.0% (-11.0% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 15.0% (-25.0% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 789 resolved cases
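The statute-specific figures above are internally consistent: subtracting each delta from the examiner's rate recovers the same Tech Center baseline for every statute, and the headline 91% follows directly from the raw grant counts. A quick sketch, using only the numbers shown on this page, checks both:

```python
# Examiner allow rate and delta vs Tech Center average, per statute (from this page).
stats = {
    "101": (19.5, -20.5),
    "103": (29.0, -11.0),
    "102": (21.5, -18.5),
    "112": (15.0, -25.0),
}

# Implied TC-average baseline for each statute: examiner rate minus delta.
baselines = {s: rate - delta for s, (rate, delta) in stats.items()}

# Headline career allow rate from the raw counts (716 granted of 789 resolved).
granted, resolved = 716, 789
career_allow_rate = 100 * granted / resolved  # ~90.7, displayed as 91%
```

All four statutes back out to the same 40.0% baseline, which suggests the chart applied a single Tech Center estimate rather than per-statute averages.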

Office Action

§102, §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

1. The information disclosure statement (IDS) submitted on 10/08/2024 has been considered by the examiner.

Claim Interpretation

2. The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

3. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

4. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a global visibility improvement unit”, “a local visibility improvement”, “an image characteristic information extractor”, “a class analysis unit”, “an average brightness operation unit”, “a global compensation graph data generator”, “a global compensation graph generator”, “a global compensation operator”, “visibility reduction modeling unit”, “local compensation unit”, “a local contrast map provision unit”, “a local contrast weighted map provision unit”, “an image blender” in claims 1-12 and 18-20.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Double Patenting

5. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA.

A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

6. Claims 1, 8, 9, 13, and 16-18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims of copending Application No. 18/431,074 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are anticipated by the claims of the co-pending application, as noted below.

Regarding claim 1, claim 1 has been analyzed and rejected as per claim 1 of copending Application No. 18/431,074 (reference application).
Regarding claim 8, claim 8 has been analyzed and rejected as per claim 1 of copending Application No. 18/431,074 (reference application).

Regarding claim 9, claim 9 has been analyzed and rejected as per claim 12 (claim 12 is a combination of claims 12, 10 and 9) of copending Application No. 18/431,074 (reference application).

Regarding claim 13, claim 13 has been analyzed and rejected as per claim 13 of copending Application No. 18/431,074 (reference application).

Regarding claim 16, claim 16 has been analyzed and rejected as per claim 13 of copending Application No. 18/431,074 (reference application).

Regarding claim 17, claim 17 has been analyzed and rejected as per claim 19 (claim 19 is a combination of claims 19, 17 and 16) of copending Application No. 18/431,074 (reference application).

Regarding claim 18, claim 18 has been analyzed and rejected as per claim 13 of copending Application No. 18/431,074 (reference application).

This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim Rejections - 35 USC § 102

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

8. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

9. Claim(s) 1, 12, 13 and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Glen, U.S. Patent Publication No. 2008/0055228 A1.

Regarding claim 1, Glen discloses A display device comprising: a global visibility improvement unit 100 configured to generate content information corresponding to content that is represented in an image by using input frame data IS for the image (see paras 0021 and 0024, and element #304 of figure 3 – Glen discloses that each time a new image is to be updated/displayed, the types of contents of the image is determined; wherein the “types of content” include “video images”, “3D graphics” and “document files”) and to generate global compensation information IGC by performing a global compensation using the input frame data, the content information, and an illumination signal of the image (see paras 0020, 0023, 0025 and element #306 of figure 3 – Glen discloses whether any of the content being included in an updated display requires an adjusted brightness relative to the current settings of the display. Glen explicitly discloses comparing intensity of the new image content relative to the current display intensity. Glen also explicitly discloses that the comparisons take into account ambient light which is detected via a light sensor. Note, since Glen’s updated content intensity comparisons are relative to the current entire display intensity, the Examiner interprets such functionality as equivalent to the “global” compensation/information type); and a local visibility improvement unit 200 configured to generate local compensation information ILC by performing a local compensation on the input frame data (see para 0027 and elements #402 and #404 of figure 4 – Glen discloses portions of the display image other than the identified content region are processed to accommodate the adjusted brightness/intensity required by the region. Glen discloses performing such processing using adjustments to those remaining regions via contrast, brightness, color temperature or white point adjustments. Note, it is clear that such processing of Glen is functionally equivalent to Applicant’s “local compensation information”); and to generate output frame data OS by blending the global compensation information and the local compensation information (see paras 0026-0028 and elements #310 and #312 of figure 3, and element #404 of figure 4 – Glen discloses performing processing on the other regions of the display not identified as the adjusted content region, to accommodate the adjusted brightness required for the identified content region. Glen discloses performing such processing using adjustments to those remaining regions via contrast, brightness, color temperature or white point adjustments. Glen discloses displaying images via the display whether they require adjustments or not. Note, it is clear that in order for the technique of Glen to achieve the desired output and purpose of the invention, the “accommodation” processing of the other image regions in combination with the content adjusted brightness techniques of the image must perform functionally equivalently to produce “blended” output in order to produce a visually appealing output as per the problems solved by the techniques of Glen described in para 0004 of Glen).

Regarding claim 12, claim 12 has been similarly analyzed and rejected as per citations made in the rejection of claim 1. Glen further discloses allowing for an adjusted intensity of the display by processing any portions of the image that are also not affected by the determined content, thus identifying those portions for adjustment. Glen explicitly gives the example that other portions of the image not identified as video image content must also be processed to account for an increase in intensity of the video image content portion (see paragraphs 0026-0027 and element #402 of Figure 4).
Glen discloses portions of the display image other than the identified content region are processed to accommodate the adjusted brightness/intensity required by the region. Glen discloses performing such processing using adjustments to those remaining regions via contrast, brightness, color temperature or white point adjustments (see paragraph 0027 and elements #402, #404 of Figure 4). Note, it is clear that such processing of Glen is functionally equivalent to Applicant's "local compensation information." Glen discloses performing processing on the other regions of the display not identified as the adjusted content region, to accommodate the adjusted brightness required for the identified content region. Glen discloses performing such processing using adjustments to those remaining regions via contrast, brightness, color temperature or white point adjustments. Glen discloses displaying images via the display whether they require adjustments or not (see paragraphs 0026-0028, elements #310, #312 of Figure 3 and element #404 of Figure 4). Note, it is clear that in order for the techniques of Glen to achieve the desired output and purpose of the invention, the "accommodation" processing of other image regions in combination with the content adjusted brightness techniques of the image must perform functionally equivalently to produce "blended" output in order to produce a visually appealing output as per the problems solved by the techniques of Glen described in paragraph 0004 of Glen.

Regarding claim 13, Glen discloses “A method of improving a visibility of an image for a display device, the method comprising: generating content information comprising class information (see paras 0021 and 0024, and element #304 of figure 3 – Glen discloses that each time a new image is to be updated/displayed, the types of contents of the image is determined; wherein the “types of content” include “video images”, “3D graphics” and “document files” – where the different types of contents are considered here different classes), and image brightness level information corresponding to content that is represented in an image by using input frame data for the image (see paragraph 0022 and element #306 of Figure 3 - Glen discloses that a plurality of intensity settings based on a corresponding plurality of content types are obtained. Glen further discloses that a table of intensity settings indexed by content type may be pre-stored in memory associated with the host or co-processors; para 0014 – determining a region of the displayed image corresponding to the content requiring the adjusted brightness, where to adjust image brightness information is required); and generating global compensation information by performing a global compensation on the input frame data, the class information, the image brightness level information, and an illumination signal of the image” (see paras 0020, 0023, 0025 and element #306 of figure 3 – Glen discloses whether any of the content being included in an updated display requires an adjusted brightness relative to the current settings of the display. Glen explicitly discloses comparing intensity of the new image content relative to the current display intensity. Glen also explicitly discloses that the comparisons take into account ambient light which is detected via a light sensor. Note, since Glen’s updated content intensity comparisons are relative to the current entire display intensity, the Examiner interprets such functionality as equivalent to the “global” compensation/information type).

Regarding claim 18, claim 18 has been similarly analyzed and rejected as per citations made in the rejection of claim 13.

Claim Rejections - 35 USC § 103

10. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

11. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

12. Claims 2-4, 14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Glen, U.S. Patent Publication No. 2008/0055228 A1, and further in view of Goodsitt et al., U.S. Patent No. 10,665,204 B1.
Regarding claim 2, claim 2 recites “The display device of claim 1, wherein the global visibility improvement unit 100 comprises an image characteristic information extractor 110 comprising a first deep neural network (DNN) that has been modeled to identify the content and configured to generate the content information corresponding to the content of the input frame data by the first DNN using the input frame data as an input”. As cited in the rejection of claim 1, Glen teaches identifying the content and configured to generate/determine the content information corresponding to the content of the input frame data, but does not explicitly teach of doing so by using a deep neural network (DNN). However, Goodsitt teaches in col. 1, lines 34-40 “Screen content may be evaluated utilizing a first machine learning algorithm. The screen content being presented on the mobile device display may be categorized based on an output of the first machine learning algorithm. Based on a category of the screen content, a screen brightness adjustment may be determined to be appropriate. In response to determining that the screen brightness adjustment is appropriate, a degree of the screen brightness adjustment may be determined”; further discloses col. 6, lines 16-20 – “based on an output of the first machine learning algorithm, the screen content being presented on the mobile device display may be categorized. For example, categories of content may be video, photographs, news or book content, social media pages, a QR code, or the like”; and further discloses in col. 
15, lines 15-33 - “ Examples of the machine learning algorithms or models may include a neural network classifier, an example of which may be a convolutional neural network, a supervised machine learning algorithm (such as a regression algorithm, linear regression algorithm, or the like), or an unsupervised machine learning algorithm (such as a k-means, Gaussian mixtures, or the like), to determine a category of screen content, thresholds for a screen brightness adjustment, or a degree of screen brightness adjustment. The various machine learning algorithms that may be used in the foregoing examples of FIGS. 1-4 may use one or more different data sets to train the respective machine learning algorithm. In the screen content categorization examples, machine learning algorithms, such as convolutional neural networks, that provide image classification are known and may have already been trained. Others may have rudimentary, general training and require a specific training data set representative of the specific results that the machine learning algorithm is intended to produce” – where convolutional neural network (CNN) is a DNN. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to use DNN to determine content type as taught by Goodsitt in the invention of Glen. A person having ordinary skill in the art would have been motivated before the effective filing date of the claimed invention to use DNN to determine content type as taught by Goodsitt in the invention of Glen, as DNN enables machine learning, which can train a computer to identify contents with high accuracy and efficiency. 
Regarding claim 3, the combined invention of Glen and Goodsitt discloses “The display device of claim 2, wherein the image characteristic information extractor 100 generates the content information comprising class information CL corresponding to the content of the input frame data and image brightness level information BL of the input frame data” (see Glen as cited in the rejection of claim 1; further see Goodsitt – col. 6, lines 16-20 – “based on an output of the first machine learning algorithm, the screen content being presented on the mobile device display may be categorized. For example, categories of content may be video, photographs, news or book content, social media pages, a QR code, or the like”; col. 6, lines 32-45 – “Based on the category of the screen content, the screen brightness adjustment application may determine that a screen brightness adjustment is appropriate (240). In response to determining that the screen brightness adjustment is appropriate, a degree of the screen brightness adjustment may be determined (250). The degree of the screen brightness adjustment may be determined in several ways. For example, the degree of the screen brightness adjustment may be determined using a look-up tables, a second machine learning algorithm, a user preference setting related to a category of the screen content, a setting related to a category of the screen content provided by an external server, or the like”; col. 6, lines 9-15 – “Based on the type of scene correlated to the video and ambient light having a value of X, the screen brightness adjustment application may determine that the degree of screen adjustment is a value YY, which may be in an approximate range of Z (the degree of screen brightness adjustment made by the user when video is presented on the mobile device screen)”). 
Regarding claim 4, the combined invention of Glen and Goodsitt discloses “The display device of claim 3, wherein the image characteristic information extractor 100 comprises: a class analysis unit 112 comprising the first DNN and configured to generate the class information corresponding to the content of the input frame data by the first DNN” (see the citations made in the rejection of claims 2 and 3).

Regarding “an average brightness operation unit 114 configured to operate average brightness of the input frame data and to generate the image brightness level information as results of the operation”: although Glen does disclose that a plurality of intensity settings based on a corresponding plurality of content types are obtained, Glen does not explicitly disclose utilizing an average calculation to produce the brightness/intensity. At the time the invention was filed, it would have been obvious to one of ordinary skill in the art to perform a multitude of different mathematical equations to produce image brightness values, including performing an average or mean computation. Applicant has not disclosed that explicitly utilizing such an average/mean computation provides an advantage, is used for a particular purpose, or solves a stated problem. One of ordinary skill in the art, furthermore, would have expected Applicant's invention to perform equally well with the brightness/intensity computations of Glen because the exact calculation chosen to derive the image brightness values in this context is a matter of engineering design choice as preferred by the inventor and/or which best suits the application at hand. Further, the Examiner sees no immediate evidence as to the criticality of utilizing specifically an average vs. another type of mathematical computation to derive image brightness in this context, in so much that the techniques of Glen would provide equivalent output.

Therefore, it would have been obvious to one of ordinary skill in this art to modify the combined invention of Glen and Goodsitt to obtain the invention as specified in claim 4.

Regarding claim 14, claim 14 has been analyzed and rejected as per citations made in the rejection of claims 2-4.

Regarding claim 19, claim 19 has been analyzed and rejected as per citations made in the rejection of claims 2-4.

None of the closest prior art of record teaches subject matter as recited in claims 5-11, 15-17 and 20. Claims 5-6, 10-11, 15 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Manav Seth whose telephone number is (571) 272-7456. The examiner can normally be reached on Monday to Friday from 8:30 am to 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sumati Lefkowitz, can be reached on (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Manav Seth/
Primary Examiner, Art Unit 2672
February 7, 2026

Prosecution Timeline

Feb 14, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597243
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12579633
PERIODIC-PATTERN BACKGROUND REMOVAL
2y 5m to grant Granted Mar 17, 2026
Patent 12567269
METHOD OF TRAINING IMAGE CAPTIONING MODEL AND COMPUTER-READABLE RECORDING MEDIUM
2y 5m to grant Granted Mar 03, 2026
Patent 12561969
Object Re-Identification Apparatus and Method Thereof
2y 5m to grant Granted Feb 24, 2026
Patent 12555368
Method for Temporal Correction of Multimodal Data
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91%
With Interview: 98% (+7.8%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 789 resolved cases by this examiner. Grant probability derived from career allow rate.
