Prosecution Insights
Last updated: April 19, 2026
Application No. 18/749,108

DISPLAY APPARATUS AND OPERATING METHOD OF THE SAME

Final Rejection §103
Filed: Jun 20, 2024
Examiner: ITSKOVICH, MIKHAIL
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: Samsung Electronics Co., Ltd.
OA Round: 4 (Final)
Grant Probability: 35% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 4y 0m
Grant Probability with Interview: 59%

Examiner Intelligence

Career Allow Rate: 35% (206 granted / 585 resolved; -22.8% vs TC avg)
Interview Lift: +23.8% for resolved cases with interview
Typical Timeline: 4y 0m average prosecution; 62 applications currently pending
Career History: 647 total applications across all art units

Statute-Specific Performance

§101: 11.5% (-28.5% vs TC avg)
§103: 53.5% (+13.5% vs TC avg)
§102: 12.3% (-27.7% vs TC avg)
§112: 20.4% (-19.6% vs TC avg)
TC avg = Tech Center average (estimate) • Based on career data from 585 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed on 11/04/2025 have been fully considered but they are not persuasive.

Applicant argues: “In rejecting claim 7 (now canceled), the Office Action acknowledged that Lee fails to describe or suggest claimed features of "wherein the determining of the weight kernel comprises: estimating a depth between a 3D virtual object represented by the display panel and the left eye and the right eye; and determining the weight kernel according to the confidence value and the depth," as currently incorporated into independent claim 1.”

Examiner notes that the Office Action did not make such an acknowledgement. Lee indicates determining an eye location including depth; Pitts teaches estimating the depth rather than determining it precisely. Since neither the claims nor the Specification elaborate on the method of estimation, both references provide features that read on the claim. See updated reasons for rejection of Claim 1 below.

Applicant argues: “In particular, Pitts provides the following description as reads: [0015] In at least one embodiment, … In other words, Pitts merely describes that the depth map is a grayscale image depicting the estimated depth of objects in the captured environment, from the camera.
In contrast, independent claim 1 of the present application sets forth, for example, that a method of operating a display apparatus, the method comprising: calculating a confidence value obtained by performing light field rendering on each of pixels corresponding to a view area mapped to positions of a left eye and a right eye looking at a display panel; determining a weight kernel of a corresponding pixel based on the confidence value, including estimating a depth between a 3D virtual object represented by the display panel and the left eye and the right eye and determining the weight kernel according to the confidence value and the depth; determining whether the positions of the left eye and the right eye are changed; adjusting a brightness of a pixel corresponding to each of the left eye and the right eye of a margin area of the view area by changing the weight kernel based on a movement speed of the eyes calculated in response to the positions of the left eye and the right eye are determined to be changed, which are different from, and not disclosed or suggested by the above-mentioned teaching of Lee.”

Examiner notes that Applicant copies three paragraphs from Pitts and half of the claim language of Claim 1, followed by a statement that the prior art is different. This does not address the particular reasons for rejection of these elements in the Office Action, which cite a combination of Lee and Pitts. Claim 1 appears to broadly read on a conventional method of determining whether a pixel should be assigned to a right or a left eye and with what weight value based on the geometry of the display and the location of each eye with respect to the pixel. See reasons for rejection below. If estimating is an important element of the invention, Examiner suggests elaborating on the steps of estimation in the claim and why they create or solve a particular problem in the art when combined with other elements of the claim.
Applicant argues: “Accordingly, it is respectfully submitted that Pitts does not, and would/could not, estimate[ing] a depth between a 3D virtual object represented by the display panel and the left eye and the right eye and determine[ing] the weight kernel according to the confidence value and the depth, as set forth in claim 1, for example. Therefore, it is respectfully submitted that Lee and Pitts does not and would/could not, describe, …”

Examiner notes that Applicant has failed to address teachings of Lee that were cited for determining the weight kernel based on substantively identical considerations. Using estimating rather than determining does not appear to materially differentiate the claims from Lee, particularly in view of Pitts. See reasons for rejection below.

Applicant asserts substantively the same argument for Claim 16 as for Claim 1 above. See responses to arguments above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6 and 8-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20190082168 to Lee (“Lee”) in view of US 20170237971 to Pitts (“Pitts”) and in view of US 20210174768 to Jarvenpaa (“Jarvenpaa”).

Regarding Claim 1: “A method of operating a display apparatus, the method comprising: calculating a [confidence] value obtained by performing light field rendering on each of pixels corresponding to a view area mapped to positions of a left eye and a right eye looking at a display panel; (For example: “The first reference value may correspond to a difference between the ray direction and a direction of a line from the display pixel towards the location of the left eye, and the second reference value may correspond to a difference between the ray direction and a direction of a line from the display pixel towards the location of the right eye” Lee, Paragraph 11.) determining a weight kernel of a corresponding pixel based on the [confidence] value, … determining the weight kernel according to the [confidence] value and the depth; (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, (a) the kernel is determined based on the confidence value, and (b) the confidence value is determined based on the depth value(s); in this manner the kernel is based on the depth values. See Specification, Paragraph 9.
Lee teaches that the weight kernel is calculated based on a reference value which is based on the location of the eye that includes depth distance: “luminance weight based on the at least one of the first reference value or the second reference value,” each indicates a measure of confidence that a pixel is to be allocated to a left eye or a right eye respectively. Lee, Paragraphs 7, 55 and similarly in Jarvenpaa, Paragraph 72. For example, “a reference value may correspond to a difference in distance |dR-dL| between a distance from a ray direction and a location to a left eye from a ray direction and a distance to a right eye from the ray.” Lee, Paragraphs 6, 60. As noted below, the required distances are calculated based on a determined depth from the location of each of the eyes to the pixel, as illustrated in Figs. 6A and 6B. See further treatment of confidence values below.) including estimating a depth between a 3D virtual object represented by the display panel and the left eye and the right eye and (“method including detecting a right eye location and a left eye location of a viewer, and providing a 3D image based on the detected right eye location and the detected left eye location,” Lee, Paragraph 24. Note that the relevant location of each eye of the viewer (620, 625) is a three-dimensional location relative to the pixel (representing an object) on the display panel, and it is particularly concerned with horizontal and depth directions required for determining the angles and horizontal distances between rays at the same depth as the eyes. See Lee, Figs. 6A, 6B. Further, Pitts teaches that a depth property can be estimated (rather than determined) and “when world properties are estimated, the estimated properties also include an error metric and/or confidence value of the estimated property” Pitts, Paragraphs 15, 57, and statement below.)
determining whether the positions of the left eye and the right eye are changed; (“the 3D image providing apparatus may track a viewpoint or an eye location” Lee, Paragraph 47. See similarly in Jarvenpaa, Paragraph 35.) adjusting a brightness of a pixel corresponding to each of the left eye and the right eye of a margin area of the view area (“the display pixel may be adjusted based on the luminance weight to be applied to the image pixel value.” Lee, Paragraph 13. Note that the adjusted pixel can be a margin pixel or a non-margin pixel: “Such a scaling may be performed to secure a margin of image pixel values in a following process of adjusting an image pixel value.” Lee, Paragraph 66. See similarly in Jarvenpaa, Paragraph 34.) Lee does not teach that the calculated reference value can be “a confidence value.” However, as noted above, Lee uses the reference value as a measure of confidence that a particular pixel should be assigned to an image of a particular eye, right or left. Cumulatively, Pitts teaches the above claim feature in the context of generating and displaying 3D images: “In a step 140, a confidence level in the estimated world properties ascertained in the step 130 may be calculated. The confidence level may represent the level of confidence that the estimated world properties are accurate.” Pitts, Paragraph 48. Note that this confidence value is used to generate an influence value that is used in the same manner as the weight value in the Claims and Lee. See Pitts, Paragraphs 116-117. See similarly in Jarvenpaa, Paragraph 72. Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of Pitts to calculate a confidence value as a reference value as taught in Lee, in order to generate pixel weighting for 3D display. Pitts, Paragraphs 116-117.
Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness.

“[adjusting a brightness of a pixel] … by changing the weight kernel based on a movement speed of the eyes calculated in response to the positions of the left eye and the right eye are changed.” As noted above, Lee determines the weight kernel based on eye positions, and Pitts indicates that determination of the kernel is subject to a confidence level which changes how the variable influences/weights the output. Lee and Pitts do not teach that such a confidence can be calculated using the movement speed of the eyes. Jarvenpaa teaches the above claim feature in the context of gaze dependent foveated rendering of images: “The confidence value may be dependent upon and/or determined based on various factors not least: … fast eye motion (e.g. eye movement at a rate that is too fast for eye tracker to track/precisely determine gaze position in real-time) … determining a rate at which gaze positions are determined and determining the rate crossing a threshold value,” thus calculating that the eye movement speed is faster than a tracking or processing threshold. Jarvenpaa, Paragraphs 72-74, 80. Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of Lee and Pitts to determine a confidence of the eye positions based on eye movement speed as taught in Jarvenpaa, in order to change the calculation (weight kernel) based on the confidence/reliability of the variable (determined eye position). See Jarvenpaa, Paragraph 72 and Pitts, Paragraphs 107-112 and 116-117.
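The Jarvenpaa factor cited above, confidence in a gaze position degrading when eye movement outpaces the tracker, can be sketched as follows. This is an illustrative reading of the quoted paragraphs, not code from any reference; the hard step to zero confidence is an assumption, since Jarvenpaa only describes a rate crossing a threshold.

```python
def eye_position_confidence(gaze_rate_hz: float, tracker_limit_hz: float) -> float:
    """Confidence in a tracked eye position, per the idea quoted from
    Jarvenpaa (Paragraphs 72-74, 80): when the rate at which gaze
    positions change crosses the tracker's limit, the determined
    position is no longer reliable. The binary drop to 0.0 is an
    assumption for illustration; a gradual falloff would also fit.
    """
    return 0.0 if gaze_rate_hz > tracker_limit_hz else 1.0
```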
Note that different variable determinations can have different confidences and impact. Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness.

Regarding Claim 2: “The method of claim 1, wherein the weight kernel is determined to reduce a brightness of the corresponding pixel in response to a decrease in the confidence value.” (Lee teaches that a smaller reference may indicate a smaller luminance weight value which, when multiplied by luminance, will produce a smaller luminance / brightness value: “luminance weight based on the at least one of the first reference value or the second reference value. … the display pixel may be adjusted based on the luminance weight to be applied to the image pixel value.” Lee, Paragraphs 7, 13. See a similar relationship in Pitts, Paragraphs 116-117 and statement of motivation in Claim 1.)

Regarding Claim 3: “The method of claim 1, wherein the weight kernel is determined to maintain a brightness of each of pixels having a confidence value greater than or equal to a reference value.” (“For example, a reference [confidence] value may correspond to a difference in distance |dR-dL| … a corresponding luminance weight may be from a minimum value 50%, when it is 0, and gradually increase to 100% from a point where the distance is IPD/2 … When a 100% luminance weight is allocated, an image pixel value with an original luminance may be output through a corresponding display pixel without reducing the luminance of the image pixel value.” Thus luminance / brightness is maintained once the reference / confidence value is greater than or equal to a particular threshold. Lee, Paragraph 60.
Note similarly, “if (confidence>High_C) …influence=1” in Pitts, Paragraphs 107-108. See statement of motivation in Claim 1.)

Regarding Claim 4: “The method of claim 1, wherein the determining of the weight kernel comprises: obtaining a characteristic of content to be represented by the display panel; and determining the weight kernel according to the confidence value and the characteristic of content.” (Note that confidence and resulting weight values can be based on world properties determined from an image content (i.e. characteristics of content). Prior art teaches: “In at least one embodiment, when world properties are estimated, the estimated properties also include an error metric and/or confidence value of the estimated property. For example, such an error metric may be a measure of the photometric consistency of an estimated 3D patch in the world.” Pitts, Paragraph 15. Thus the weight kernel is a function of the characteristic of content and the confidence in its determination.)

Regarding Claim 5: “The method of claim 1, wherein the determining of the weight kernel comprises: increasing a brightness of a corresponding pixel, in response to the confidence value being greater than or equal to a reference value; and … reducing the brightness of the corresponding pixel, in response to the confidence value being less than the reference value.” (Lee teaches that a smaller reference may indicate a smaller luminance weight value which, when multiplied by luminance, will produce a smaller luminance / brightness value, and the reverse: “luminance weight based on the at least one of the first reference value or the second reference value. … the display pixel may be adjusted based on the luminance weight to be applied to the image pixel value.” Lee, Paragraphs 7, 13. See a similar relationship in Pitts, Paragraphs 116-117 and statement of motivation in Claim 1.)
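The luminance-weight scheme quoted above from Lee, Paragraph 60 (a 50% minimum weight at a reference value of 0, rising to 100% at IPD/2) can be sketched as below. The linear ramp between the two endpoints is an assumption, since the quotation only says the weight "gradually increases":

```python
def luminance_weight(ref_value: float, ipd: float) -> float:
    """Map a reference (confidence) value to a luminance weight, per the
    scheme quoted from Lee, Paragraph 60: a 50% minimum weight when the
    reference value is 0, increasing to 100% once the value reaches
    IPD/2, beyond which the original luminance is maintained. The linear
    ramp between those endpoints is an assumption for illustration.
    """
    half_ipd = ipd / 2.0
    if ref_value >= half_ipd:
        return 1.0  # 100% weight: original luminance is kept
    return 0.5 + 0.5 * (ref_value / half_ipd)  # ramp up from the 50% floor
```

For example, with `ipd=2.0` a reference value of 0.5 falls halfway to the IPD/2 point and yields a weight of 0.75.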
Regarding Claim 6: “The method of claim 1, wherein the determining of the weight kernel comprises determining the weight kernel for each three-dimensional (3D) virtual object represented by the display panel.” (“such an error metric may be a measure of the photometric consistency of an estimated 3D patch in the world,” which is an example 3D virtual object represented by the display panel, where “a virtual view is to be rendered …”. Pitts, Paragraphs 15-16. See statement of motivation in Claim 1.)

Regarding Claim 8: “The method of claim 1, further comprising: determining the weight kernel to increase a brightness of a corresponding pixel, in response to the depth being greater than a reference depth; and … determine the weight kernel to reduce the brightness of the corresponding pixel, in response to the depth being less than or equal to the reference depth.” (“In the visual representation, darker colors represent nearer distances, while lighter colors represent further distances.” Pitts, Paragraph 99. See statement of motivation in Claim 1.)

Regarding Claim 9: “The method of claim 1, wherein the adjusting of the brightness of the pixel comprises adjusting intensities of subpixels included in the pixel based on the weight kernel.” (See reasons for rejection in Claim 2. Further, under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, a subpixel can be a color component of a pixel, such as RGB pixel elements. Prior art teaches adjusting intensities of such subpixels based on weighted luminance: “The 3D image providing apparatus may then convert again, from the luminance space to the RGB color space,” Lee, Paragraph 67.)
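The confidence-to-influence mapping the action quotes from Pitts (“if (confidence>High_C) …influence=1”, Paragraphs 107-112) amounts to a threshold gate. A minimal sketch follows; the `HIGH_C` value is hypothetical (no value is quoted), and the blend function is only an illustrative assumption about how an influence value might be applied:

```python
HIGH_C = 0.8  # hypothetical threshold; Pitts names High_C but no value is quoted

def influence(confidence: float) -> float:
    """influence = 1 when confidence > High_C, else 0, following the
    branches quoted from Pitts, Paragraphs 107-112."""
    return 1.0 if confidence > HIGH_C else 0.0

def blend(estimate: float, fallback: float, confidence: float) -> float:
    """Illustrative use of the influence value as a weight: a confident
    estimate fully drives the output, an unconfident one is ignored.
    This blend is an assumption, not a step quoted from Pitts."""
    w = influence(confidence)
    return w * estimate + (1.0 - w) * fallback
```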
Regarding Claim 10: “The method of claim 1, wherein the adjusting of the brightness of the pixel comprises: obtaining a first value of a corresponding pixel from an image corresponding to the left eye; … obtaining a second value of the corresponding pixel from an image corresponding to the right eye; … determining intensities of subpixels included in both the first value and the second value, based on intensities of subpixels included in the first value and intensities of subpixels included in the second value; … and adjusting the brightness of the pixel, based on the determined intensities of the subpixels.” (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, (1) the value of the pixel is related to brightness (or luminance) of the pixel, (2) intensities of the subpixels relate to values of the pixel colors such as RGB, and (3) the claim is directed to converting from color values to luminance and back in determining the weighted pixel values of Claim 1. Prior art teaches this: “The 3D image providing apparatus may extract a luminance value by converting an image pixel value of each of the image pixels in a red, green, blue (RGB) color space to a luminance space … The 3D image providing apparatus may then convert again, from the luminance space to the RGB color space, … Through operations 510 and 520, the luminance range of the image pixels of the left-view image and the right-view image may be scaled, and the luminance value may be adjusted to correct a crosstalk.” See Lee, Paragraph 67. Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to convert between color space and luminance space for purposes of luminance adjustment.)
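The RGB-to-luminance round trip quoted from Lee, Paragraph 67 can be sketched as follows. The Rec. 601 luma coefficients are an assumption (the quotation does not name a conversion), and because luma is linear in the channels, scaling them uniformly scales the luminance by the same factor:

```python
def scale_luminance(rgb: tuple, weight: float) -> tuple:
    """Convert an RGB pixel to a luminance value, scale it by the
    luminance weight, and convert back, per the steps quoted from Lee,
    Paragraph 67. Rec. 601 coefficients are assumed for the conversion.
    """
    r, g, b = rgb
    luma = 0.299 * r + 0.587 * g + 0.114 * b   # RGB -> luminance space
    if luma == 0.0:
        return (0.0, 0.0, 0.0)                 # black stays black
    scale = (luma * weight) / luma             # weighted / original luminance
    return (r * scale, g * scale, b * scale)   # luminance -> RGB
```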
Regarding Claim 11: “The method of claim 1, wherein the adjusting of the brightness of the pixel comprises, for each of the pixels: obtaining a value of a corresponding pixel from an image corresponding to one of the left eye and the right eye based on a result obtained by performing light field rendering on the corresponding pixel; and applying a weight kernel of the corresponding pixel to the obtained value of the pixel.” (See reasons for rejection in Claim 2. Further, prior art teaches that “The rays output from the display pixels 215 may form a lightfield.” Lee, Paragraph 45.)

Regarding Claim 12: “The method of claim 1, further comprising: tracking the positions of the left eye and the right eye, (“emit images of different viewpoints to both eyes of a viewer, for example, a left eye 51 and a right eye 52 … the 3D image providing apparatus may track a viewpoint or an eye location” Lee, Paragraphs 46-47.) determining whether the positions of any of the left eye and the right eye are changed, based on a result of the tracking.” (“the 3D image providing apparatus may track a viewpoint or an eye location” Lee, Paragraph 47.)

Regarding Claim 13: “The method of claim 1, further comprising: obtaining a parameter of the display apparatus, wherein the determining of the weight kernel comprises determining the weight kernel according to the confidence and the parameter of the display apparatus.” (See reasons for rejection of Claim 2. Also note an example where pixel weighting is based on display parameters such as “on a ray direction of a ray output from each display pixel.” Lee, Paragraph 53.)

Regarding Claim 14: “The method of claim 1, wherein the performing of the light field rendering comprises determining whether each of the pixels of the display panel is to provide an image for one or more of the left eye and the right eye.” (“by allowing the viewer 120 to view different images with a left eye and a right eye of the viewer 120, respectively.
… 3D image providing apparatus may output light, or a ray of light, that is output from each of display pixels 215 of a display panel 210 to a 3D space in a plurality of viewing directions through a 3D optical device 220. The rays output from the display pixels 215 may form a lightfield.” Lee, Paragraphs 43-45.)

Regarding Claim 15: “The method of claim 1, wherein the adjusting of the brightness comprises: in response to the confidence of a pixel corresponding to a ray being less than a threshold, assigning a minimum value among color values of an image for the left eye and an image for the right eye to the pixel corresponding to the ray.” (“For example, a reference [confidence] value may correspond to a difference in distance |dR-dL| … a corresponding luminance weight may be from a minimum value 50%, when it is 0, and gradually increase to 100% from a point where the distance is IPD/2 … When a 100% luminance weight is allocated, an image pixel value with an original luminance may be output through a corresponding display pixel without reducing the luminance of the image pixel value.” Thus luminance / brightness is maintained once the reference / confidence value is greater than or equal to a particular threshold. Lee, Paragraph 60. Note similarly, “if (confidence>High_C) …influence=1 … else … influence = 0” in Pitts, Paragraphs 107-112. See statement of motivation in Claim 1.)

Claim 16: “A display apparatus comprising: …” is rejected for reasons stated for Claim 1 and because prior art teaches: a display panel; and (“The display panel 1140 may convert the panel image generated by the processor 1120 to a 3D image” Lee, Paragraph 98.) a processor configured to …” (“A processing device may be implemented using one or more general-purpose or special purpose computers, such as, for example, a processor,” Lee, Paragraph 104.)

Claim 17 is rejected for reasons stated for Claim 10 in view of the Claim 16 rejection.
Claim 18 is rejected for reasons stated for Claim 11 in view of the Claim 16 rejection. Claim 19 is rejected for reasons stated for Claim 12 in view of the Claim 16 rejection.

Regarding Claim 20: “The display apparatus of claim 16, wherein the display apparatus is comprised in any one or any combination of a head-up display (HUD) device, a 3D digital information display (DID), a navigation device, a 3D mobile device, a smartphone, a smart television (TV), a tablet, a smart vehicle, and an Internet of things (IoT) device.” (First, this claim appears to be directed to the use of an apparatus of Claim 16 without limitation to the structure of the claimed apparatus itself; for this reason, the apparatus of Claim 20 is rejected for reasons stated for Claim 16. Cumulatively, the cited embodiments of the prior art apparatus (a processor and a display panel) are capable of being comprised in the claimed devices: “for example, a 3D television (TV), a glass-type wearable device, a 3D head-up display (HUD), a monitor, a tablet computer, a smartphone, a mobile device, a smart home appliance, etc.” Lee, Paragraph 43.)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20190073460 to Lubetkin (“Lubetkin”) describes changing operation based on measured eye movement speed. Note that, for purposes of compact prosecution, multiple reasons for rejection may be provided for a claim or a part of the claim. The rejection reasons are cumulative, and Applicant should review all the stated reasons as guides to improving the claim language. The referenced citations made in the rejections above are intended to exemplify areas in the prior art documents that the examiner believed are the most relevant to the claimed subject matter.
However, it is incumbent upon the applicant to analyze each prior art document in its entirety since other areas of the document may be relied upon at a later time to substantiate the examiner's rationale of record. See W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). However, "the prior art's mere disclosure of more than one alternative does not constitute a teaching away from any of these alternatives because such disclosure does not criticize, discredit, or otherwise discourage the solution claimed ...." In re Fulton, 391 F.3d 1195, 1201, 73 USPQ2d 1141, 1146 (Fed. Cir. 2004).

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIKHAIL ITSKOVICH whose telephone number is (571)270-7940. The examiner can normally be reached Mon. - Thu. 9am - 8pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph Ustaris, can be reached at (571)272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MIKHAIL ITSKOVICH/
Primary Examiner, Art Unit 2483

Prosecution Timeline

Jun 20, 2024
Application Filed
Feb 22, 2025
Non-Final Rejection — §103
Apr 09, 2025
Response Filed
May 02, 2025
Final Rejection — §103
May 08, 2025
Examiner Interview Summary
May 08, 2025
Applicant Interview (Telephonic)
Jun 13, 2025
Response after Non-Final Action
Aug 01, 2025
Request for Continued Examination
Aug 06, 2025
Response after Non-Final Action
Aug 08, 2025
Non-Final Rejection — §103
Nov 04, 2025
Response Filed
Nov 10, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12548733
Automating cryo-electron microscopy data collection
Granted Feb 10, 2026 • 2y 5m to grant
Patent 12489911
IMAGE CODING METHOD, IMAGE DECODING METHOD, IMAGE CODING APPARATUS, RECEIVING APPARATUS, AND TRANSMITTING APPARATUS
Granted Dec 02, 2025 • 2y 5m to grant
Patent 12477146
ENCODING AND DECODING METHOD, DEVICE AND APPARATUS
Granted Nov 18, 2025 • 2y 5m to grant
Patent 12452404
METHOD FOR DETERMINING SPECIFIC LINEAR MODEL AND VIDEO PROCESSING DEVICE
Granted Oct 21, 2025 • 2y 5m to grant
Patent 12432328
SYSTEM AND METHOD FOR RENDERING THREE-DIMENSIONAL IMAGE CONTENT
Granted Sep 30, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 35% (59% with interview, +23.8%)
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 585 resolved cases by this examiner. Grant probability derived from career allow rate.
