Prosecution Insights
Last updated: April 19, 2026
Application No. 18/444,719

COLOR PERCEPTION TUNING FOR A WEARABLE DEVICE IN VARIOUS LIGHT CONDITIONS

Status: Non-Final Office Action — §103
Filed: Feb 18, 2024
Examiner: HE, YINGCHUN
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Meta Platforms Technologies, LLC
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 5m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 82% — above average (529 granted / 644 resolved; +20.1% vs Tech Center average)
Interview Lift: +14.4% — a moderate lift in allow rate for resolved cases with an interview versus without
Typical Timeline: 2y 5m average prosecution; 27 applications currently pending
Career History: 671 total applications across all art units

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 5.4% (-34.6% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 644 resolved cases.

Office Action

§103
DETAILED ACTION

Note in the following document:
1. Texts in italic bold format are limitations quoted either directly or conceptually from claims/descriptions disclosed in the instant application.
2. Texts in regular italic format are quoted directly from a cited reference or Applicant’s arguments.
3. Texts with underlining are added by the Examiner for emphasis.
4. Texts with
5. Acronym “PHOSITA” stands for “Person Having Ordinary Skill In The Art”.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a color perception module configured to generate the adjusted image …; a hue module configured to apply one of a plurality of 3x3 matrices …; and a tone module configured to apply one of a plurality of tone settings … in Claims 17 and 20.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2.
Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-10 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Kane et al. (US 2011/0175925 A1).

Regarding Claim 1, Kane discloses a method comprising: determining an intensity level of light in an operating environment ([0026]: Light sensors 110 are used to detect the level or intensity of ambient visible light) of a mobile device ([0028]: Projector 100 is considered to be a small, portable, electronic display); storing image data on the mobile device, wherein the image data includes pixel data for a plurality of pixels, wherein the pixel data for each of the plurality of pixels includes color characteristics ([0014]: determining changes in color appearance to be applied to the displayed images based on the low luminance conditions, a model of photopic vision of the human eye, and a model of mesopic vision of the human eye; and applying the determined changes in the color appearance to image data using an image processor that alters the image data for the projected images). Kane does not explicitly recite storing image data on the mobile device. However, from Fig. 2 it would have been obvious to a PHOSITA before the effective filing date of the claimed invention that image data is received in the mobile projector before the color image data is processed according to the viewing environment.

[Image omitted: media_image1.png]

Kane further discloses adjusting the color characteristics of the pixel data for each of the plurality of pixels based on the intensity level of the light, wherein adjusting the color characteristics includes applying a color perception model to the image data for each of the plurality of pixels to generate adjusted image data (Fig. 3: step 280), wherein the color perception model is configured to adjust hue ([0036]: Their color appearance model is based on perceptual experiments carried out using a limited set of color chips, relative to chroma, lightness, and hue, for different luminance conditions) color characteristics and tone ([0038]: The first is a tonescale transformation step in which the input image code values are mapped from the nonlinear tonescale of the input color space (e.g., gamma of 2.2 for sRGB) and RGB intensities are computed (step 410) which are linear with the luminance output of the display device) color characteristics of the image data; and displaying the adjusted image data on a display of the mobile device (Fig. 3: step 260).

[Image omitted: media_image2.png]

Regarding Claim 2, Kane discloses It will be apparent to those skilled in the art that the precomputed modifications to color appearance parameter values can also be stored as other tables of correction values, or transformative matrices ([0078]). Since Kane discloses the color image comprises red, green and blue three primary colors ([0038]: The image input color space can be any of a number of color encoding spaces appropriate to the image source, for example the sRGB color standard for still images), it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Kane and to include the limitation of wherein the color perception model includes a plurality of 3x3 matrices, wherein each 3x3 matrix of the plurality of 3x3 matrices is configured to tune the hue color characteristics of the image data based on the intensity level of light, in order to transform the three primary inputs to three primary outputs.

Regarding Claim 3, as explained in Claims 1-2 above, Kane further teaches or suggests selecting one of the plurality of 3x3 matrices based on intensity level of light; and applying the selected one of the plurality of 3x3 matrices to the pixel data for each of the plurality of pixels ([0058]: It can be considered that the output RGB intensities step 575 effectively concludes the luminance adaptive color correction method 500, as exemplified in FIG. 7, by returning color appearance changes. The operative low luminance display correction method 250 of FIG. 3 can then continue to apply color appearance changes step 280. The determined color appearance changes can be applied to subsequent image content by a variety of calculative means, using correction values, transformative matrices, or look up tables (LUTs). Also see Fig. 7).

Regarding Claim 4, Kane further teaches or suggests wherein the color perception model includes a plurality of tone settings, wherein each of the plurality of tone settings is paired with a corresponding one of a plurality of ranges for the intensity level of light ([0058] and [0038]: The first is a tonescale transformation step in which the input image code values are mapped from the nonlinear tonescale of the input color space (e.g., gamma of 2.2 for sRGB) and RGB intensities are computed (step 410) which are linear with the luminance output of the display device).
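The 3x3 hue-matrix scheme the rejection maps onto Claims 2-3 — a set of matrices, one per ambient-light range, applied to the RGB pixel data — can be sketched as follows. This is a minimal illustration only: the lux thresholds, matrix values, and function names are assumptions for demonstration, not taken from Kane or the application.

```python
import numpy as np

# Illustrative (lux lower bound, 3x3 hue matrix) pairs, sorted by lux.
# Values are made up; a real system would derive them from a color
# perception model for each ambient-light range.
HUE_MATRICES = [
    (0.0, np.array([[1.00, 0.05, 0.00],    # dim/mesopic: slight shift
                    [0.00, 1.00, 0.00],
                    [0.00, 0.02, 1.05]])),
    (100.0, np.eye(3)),                    # indoor: identity (no hue change)
    (10000.0, np.array([[1.05, 0.00, 0.00],  # bright sunlight
                        [0.00, 1.03, 0.00],
                        [0.00, 0.00, 0.98]])),
]

def select_hue_matrix(lux: float) -> np.ndarray:
    """Pick the matrix whose lux range contains the measured intensity."""
    chosen = HUE_MATRICES[0][1]
    for lower_bound, matrix in HUE_MATRICES:
        if lux >= lower_bound:
            chosen = matrix
    return chosen

def apply_hue_matrix(pixels: np.ndarray, lux: float) -> np.ndarray:
    """pixels: (H, W, 3) linear RGB in [0, 1]; returns hue-adjusted pixels."""
    m = select_hue_matrix(lux)
    # Each output primary is a linear combination of the three input primaries.
    return np.clip(pixels @ m.T, 0.0, 1.0)
```

Because each output primary is a linear combination of the three input primaries, one matrix multiply per pixel implements the hue tuning; switching matrices by ambient-light range is just a table selection.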
Regarding Claim 5, Kane teaches or suggests wherein the color perception model includes a plurality of brightness settings, wherein each of the plurality of brightness settings is paired with a corresponding one of a plurality of ranges for the intensity level of light ([0041]: That resulting data can be analyzed to estimate and track (step 270) the brightness adaptation of any viewers 10 using a degree of adaptation factor (FL) or other appropriate metrics).

Regarding Claim 6, Kane teaches or suggests wherein the color perception model includes a look up table (LUT) that determines a hue adjustment matrix and a tone setting for application to the image data based on a range of values for the intensity level of light of the operating environment of the mobile device ([0058]: the determined color appearance changes can be applied to subsequent image content by a variety of calculative means, using correction values, transformative matrices, or look up tables (LUTs)).

Regarding Claim 7, Kane further teaches or suggests wherein the LUT includes brightness settings for the display of the mobile device, wherein the brightness settings are determined from a brightness model, wherein the brightness model is trained on an image data set that is hue ([0036]) adjusted and tone ([0038]) adjusted for color perception consistency ([0037]: Given that as the eye adapts to increasing dimness, that brightness sensitivity shifts to the blue, while sensitivity to yellow, orange and red light (and therefore colors) diminishes, it can be desirable to alter image content in a compensatory way, to provide a color perception experience closer to the original content. Also see Fig. 7).

Regarding Claim 8, Kane discloses wherein the pixel data for each of the plurality of pixels includes a red value (R), a green value (G), and a blue value (B) ([0038]).
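The LUT arrangement described for Claims 6-7 — ambient-light ranges keyed to a hue-adjustment matrix, a tone setting, and a display brightness setting — could be organized as below. The field names and example values are illustrative assumptions, not data from the cited references.

```python
from dataclasses import dataclass

@dataclass
class ColorSettings:
    hue_matrix_id: int   # index of a precomputed 3x3 hue matrix
    tone_gamma: float    # tone-curve exponent for this light range
    brightness: float    # display brightness setting, 0.0-1.0

# Illustrative LUT: (inclusive lower lux bound) -> settings for that range.
# Entries are sorted by lower bound; the numbers are made-up examples.
COLOR_LUT = [
    (0.0, ColorSettings(hue_matrix_id=0, tone_gamma=2.4, brightness=0.2)),
    (100.0, ColorSettings(hue_matrix_id=1, tone_gamma=2.2, brightness=0.6)),
    (10000.0, ColorSettings(hue_matrix_id=2, tone_gamma=2.0, brightness=1.0)),
]

def lookup_settings(lux: float) -> ColorSettings:
    """Return the settings for the ambient-light range containing `lux`."""
    chosen = COLOR_LUT[0][1]
    for lower_bound, settings in COLOR_LUT:
        if lux >= lower_bound:
            chosen = settings
    return chosen
```

One lookup thus yields every range-paired parameter at once, which matches the idea of a single table keyed on ambient light intensity.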
Regarding Claim 9, Kane teaches or suggests wherein determining the intensity level of light includes at least one of: analyzing the image data to estimate the intensity level of light; or reading ambient light intensity from a light detector coupled to the mobile device ([0014]: detecting ambient light conditions and displayed image brightness).

Regarding Claim 10, Kane discloses receiving an input image signal in step 505 as shown in Fig. 7. Kane does not explicitly recite receiving the image data from a camera coupled to the mobile device; receiving the image data from a wireless communications device of the mobile device; or receiving the image data from a wired connection to the mobile device. However, before the effective filing date of the claimed invention, cameras connected to mobile display devices by wired or wireless connections were commonplace given the wide availability of digital cameras. Therefore it would have been obvious to a PHOSITA before the effective filing date to include the above limitation so that a camera user can immediately observe the captured image result.

Regarding Claim 12, Kane teaches or suggests wherein the color perception model includes a hue model and a tone model, wherein the hue model includes settings for the hue color characteristics, wherein each of the settings for the hue color characteristics includes one of a plurality of 3x3 matrices, wherein each of the plurality of 3x3 matrices is associated with a corresponding one of a plurality of ambient light ranges ([0036], [0078], [0038]).

Regarding Claim 13, Kane further teaches or suggests wherein the tone model includes a plurality of settings for the tone color characteristics ([0038]), wherein each of the plurality of settings for the tone color characteristic is associated with a corresponding one of the plurality of ambient light ranges ([0059]: The change color correction determination step 272 can measure or test changes in illumination conditions, including ambient or display brightness, and changes in viewer conditions, including brightness adaptation, against various metrics).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kane et al. (US 2011/0175925 A1) as applied to Claim 1 above, and further in view of Greenebaum et al. (US 2020/0105225 A1).

Regarding Claim 11, Kane fails to disclose the mobile device is a smart watch or a head mounted device. However, Greenebaum discloses techniques use a display device, in conjunction with various optical sensors, e.g., an ambient light sensor or image sensors, to collect information about the ambient lighting conditions in the environment of the display device. Use of this information—and information regarding characteristics of the display device—can provide a more accurate determination of unintended light being added to light driven by the display device (Abstract). Greenebaum also discloses the techniques disclosed herein are applicable to any number of electronic devices: such as digital cameras; digital video cameras; mobile phones; personal data assistants (PDAs); head-mounted display (HMD) devices; digital and analog monitors such as liquid crystal displays (LCDs) and cathode ray tube (CRT) displays; televisions; desktop computers; laptop computers; tablet devices; billboards and stadium displays; automotive, nautical, aeronautic or similar instrument panels, gauges and displays; and the like ([0016]). Therefore it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Greenebaum into that of Kane and to include the limitation of the mobile device is a smart watch or a head mounted device in order to allow a user to view images under variable lighting environments.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Greenebaum et al. (US 2020/0105225 A1).

Regarding Claim 14, Greenebaum discloses a wearable mobile device ([0016]: The techniques disclosed herein are applicable to any number of electronic devices: such as digital cameras; digital video cameras; mobile phones; personal data assistants (PDAs); head-mounted display (HMD) devices; …) comprising: a display configured to display adjusted image data (Fig. 3: Display 340. [0015]: The output of the saturation model may determine adjustments to light driven by the display device to display source content, such that the resulting color, perceived on screen and incorporating the unintended light, remains true to the rendering intent of the source content author);

[Image omitted: media_image3.png]

memory storing instructions (Fig. 9: Memory 960 and Storage 965); and processing logic (Fig. 9: Processor 905) coupled to the memory, wherein the processing logic is configured to execute the instructions to perform a process that includes: determine an ambient light intensity ([0030]: As illustrated within dashed line box 310, saturation model 320 may use various factors and sources of information in its calculation, e.g.: information indicative of ambient light conditions obtained from one or more optical sensors 104 (e.g., ambient light sensors)) for a wearable mobile device ([0042]: Referring now to FIG. 9, a block diagram of a representative electronic device possessing a display is shown, in accordance with some embodiments. Electronic device 900 could be, for example, a mobile telephone, personal media device, HMD, portable camera, or a tablet, notebook or desktop computer system); receive image data having a plurality of pixels, wherein each of the plurality of pixels in the image data includes color characteristics (Fig. 4: step 410: receive encoded source color space data (R’G’B’)source); tune the color characteristics of the plurality of pixels with a color perception model (Fig. 3: 320 Saturation model) that is based on the ambient light intensity (notice in Fig. 3 that the saturation model depends on information regarding ambient and display profile) to generate the adjusted image data, wherein the color perception model is configured to tune hue color characteristics and tone color characteristics to reduce perceptual inconsistencies in colors of image data across a range of ambient light intensities ([0032]: Color appearance models may be used to perform chromatic adaptation transforms and/or for calculating mathematical correlates for the six technically defined dimensions of color appears: brightness (luminance), lightness, colorfulness, chroma, saturation, and hue. Note Greenebaum does not explicitly use the phrase tone color characteristics. However, a skilled person would have known that tone represents a lightness of color. Therefore Greenebaum indirectly discloses the color perception model is configured to tune tone color characteristics); and display the adjusted image data on the display (Fig. 3: 340).

Claims 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Greenebaum et al. (US 2020/0105225 A1) as applied to Claim 14 above, and further in view of Kane et al. (US 2011/0175925 A1).
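Both references operate on RGB intensities that are linear with display luminance rather than on nonlinear code values (Kane's tonescale transformation from the gamma-2.2 sRGB encoding; Greenebaum's encoded source color space data). As a sketch of that decode/encode step, the standard sRGB transfer function looks like the following; note the official IEC 61966-2-1 curve is piecewise, which a pure gamma-2.2 description approximates.

```python
def srgb_to_linear(c: float) -> float:
    """Decode one sRGB code value in [0, 1] to linear light (IEC 61966-2-1)."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    """Encode linear light in [0, 1] back to an sRGB code value."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * (c ** (1 / 2.4)) - 0.055
```

Hue matrices and tone adjustments applied in the linear domain then behave proportionally to the display's luminance output, which is the point of performing the tonescale transformation first.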
Regarding Claim 15, Greenebaum teaches or suggests wherein the color perception model includes a hue model configured to tune the hue color characteristics, wherein the color perception model includes a tone model configured to tune the tone color characteristics ([0032]: Color appearance models may be used to perform chromatic adaptation transforms and/or for calculating mathematical correlates for the six technically defined dimensions of color appears: brightness (luminance), lightness, colorfulness, chroma, saturation, and hue). Greenebaum further discloses In one embodiment, the gamut mapping may use color adaptation matrices ([0034]). In addition, Kane also discloses It will be apparent to those skilled in the art that the precomputed modifications to color appearance parameter values can also be stored as other tables of correction values, or transformative matrices ([0078]). Since Kane discloses the color image comprises red, green and blue three primary colors ([0038]: The image input color space can be any of a number of color encoding spaces appropriate to the image source, for example the sRGB color standard for still images), it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Kane into that of Greenebaum and to include the limitation of wherein each of the plurality of 3x3 matrices is paired with a corresponding one of a plurality of ranges of ambient light intensity in order to transform the three primary inputs to three primary outputs.

Regarding Claim 16, Kane further teaches or suggests wherein the hue model is trained using a pseudo-inverse method that is trained on a data set of observer perception data (Fig. 7, steps 545-570: notice the Inverse CAM step 560).

[Image omitted: media_image1.png]

Claims 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Greenebaum et al. (US 2020/0105225 A1).
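The pseudo-inverse training recited in Claim 16 above can be illustrated as a least-squares fit of a 3x3 hue matrix to paired observer data. The synthetic data and function name below are assumptions for illustration, not the application's actual training procedure.

```python
import numpy as np

def fit_hue_matrix(inputs: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Fit a 3x3 matrix M minimizing ||inputs @ M.T - targets|| (Frobenius).

    inputs:  (N, 3) RGB stimuli shown to observers
    targets: (N, 3) RGB values observers reported as perceptually matching
    Uses the Moore-Penrose pseudo-inverse: M.T = pinv(inputs) @ targets.
    """
    return (np.linalg.pinv(inputs) @ targets).T

# Synthetic demonstration: recover a known matrix from noiseless samples.
rng = np.random.default_rng(0)
true_m = np.array([[1.0, 0.1, 0.0],
                   [0.0, 0.9, 0.0],
                   [0.0, 0.0, 1.1]])
rgb_in = rng.random((50, 3))          # 50 sample colors
rgb_out = rgb_in @ true_m.T           # what observers "perceive"
fitted = fit_hue_matrix(rgb_in, rgb_out)
```

With noisy real observer data the pseudo-inverse gives the least-squares optimal matrix rather than an exact recovery, which is the usual reason this method is chosen.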
Regarding Claim 17, Greenebaum discloses a wearable mobile device ([0016]: The techniques disclosed herein are applicable to any number of electronic devices: such as digital cameras; digital video cameras; mobile phones; personal data assistants (PDAs); head-mounted display (HMD) devices; …) comprising: a display configured to display adjusted image data (Fig. 3: Display 340. [0015]: The output of the saturation model may determine adjustments to light driven by the display device to display source content, such that the resulting color, perceived on screen and incorporating the unintended light, remains true to the rendering intent of the source content author); memory storing instructions (Fig. 9: Memory 960 and Storage 965); and processing logic (Fig. 9: Processor 905) coupled to the memory and configured to execute the instructions, wherein the instructions include: a color perception module configured to generate the adjusted image based on image data (a skilled person would have known that an adjusted image depends on the original image) and based on a plurality of ranges of ambient light intensity (Fig. 3: the saturation model depends on information regarding ambient and display profile), wherein the adjusted image data is modified by the color perception module to improve perceptual consistency of color in the image data across the plurality of ranges of ambient light intensity ([0020]: Thus, measuring and accounting for the unintended light resulting from these various phenomenon may help to achieve a more consistent and content-accurate experience for a user viewing the display), wherein the color perception module includes: a hue module configured to apply one of a plurality of 3x3 matrices ([0034]: In one embodiment, the gamut mapping may use color adaptation matrices. Greenebaum discloses the image data is encoded in RGB three primary colors. Therefore it would have been obvious to a PHOSITA to apply a 3x3 matrix to the three inputs in order to generate a new set of three outputs) to the image data based on one of the plurality of ranges of ambient light intensity; and a tone module configured to apply one of a plurality of tone settings to the image data ([0032]: Color appearance models may be used to perform chromatic adaptation transforms and/or for calculating mathematical correlates for the six technically defined dimensions of color appears: brightness (luminance), lightness, colorfulness, chroma, saturation, and hue. Note Greenebaum does not explicitly recite using a hue module and a tone module. However, since Greenebaum teaches that the adjustment can be achieved via adjusting brightness, lightness, colorfulness, chroma, saturation, and hue, it takes only routine skill for a PHOSITA to apply Greenebaum's teaching and design two modules to adjust lightness and hue).

Regarding Claim 18, Greenebaum teaches or suggests wherein each 3x3 matrix of the plurality of 3x3 matrices is configured to tune hue color characteristics of the image data, wherein each of the plurality of 3x3 matrices is paired with a corresponding one of the plurality of ranges of ambient light intensity ([0034] and [0032]).

Regarding Claim 19, Greenebaum further discloses wherein the wearable mobile device is a smart watch or a head-mounted device ([0016]: The techniques disclosed herein are applicable to any number of electronic devices: such as digital cameras; digital video cameras; mobile phones; personal data assistants (PDAs); head-mounted display (HMD) devices).

Regarding Claim 20, Greenebaum further teaches or suggests wherein the color perception module includes a look up table (LUT) that includes a plurality of ranges of ambient light intensity, wherein the plurality of ranges are paired with a corresponding one of a plurality of tone settings, wherein the plurality of ranges are paired with a corresponding one of a plurality of screen brightness settings ([0031]: According to some embodiments, the adjustments to light driven from pixels in the display device to compensate for unintended light may be implemented through shaders, modifications to one or more LUTs, such as three-dimensional LUTs, three distinct 1D LUTs, and the like. A skilled person would have known that a lookup table always pairs one input to one output).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YINGCHUN HE, whose telephone number is (571) 270-7218. The examiner can normally be reached M-F 8:00-5:00 MT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao M Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YINGCHUN HE/
Primary Examiner, Art Unit 2613

Prosecution Timeline

Feb 18, 2024 — Application Filed
Nov 15, 2025 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602886 — LOW LATENCY HAND-TRACKING IN AUGMENTED REALITY SYSTEMS (2y 5m to grant; granted Apr 14, 2026)
Patent 12588711 — METHOD AND APPARATUS FOR OUTPUTTING IMAGE FOR VIRTUAL REALITY OR AUGMENTED REALITY (2y 5m to grant; granted Mar 31, 2026)
Patent 12586247 — IMAGE DISTORTION CALIBRATION DEVICE, DISPLAY DEVICE AND DISTORTION CALIBRATION METHOD (2y 5m to grant; granted Mar 24, 2026)
Patent 12586491 — Display Device and Method for Driving the Same (2y 5m to grant; granted Mar 24, 2026)
Patent 12579949 — IMAGE PROCESSING APPARATUS (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82% (96% with interview, a +14.4% lift)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 644 resolved cases by this examiner. Grant probability derived from career allow rate.
