Prosecution Insights
Last updated: April 19, 2026
Application No. 18/480,623

SYSTEMS AND METHODS FOR GENERATING IMAGES OF LOCATIONS AFFECTED BY WEATHER CONDITIONS

Non-Final OA (§103, §112)

Filed: Oct 04, 2023
Examiner: CASCHERA, ANTONIO A
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Yahoo Assets LLC
OA Round: 3 (Non-Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 87%, above average (889 granted / 1019 resolved; +25.2% vs TC avg)
Interview Lift: +7.9%, moderate (based on resolved cases with interview)
Avg Prosecution: 2y 7m typical timeline (21 applications currently pending)
Career History: 1040 total applications across all art units

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§103: 34.2% (-5.8% vs TC avg)
§112: 21.2% (-18.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 1019 resolved cases.

Office Action

§103, §112
DETAILED ACTION

Preliminary Remarks

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

Receipt is acknowledged of a request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e) and a submission, filed on 02/10/2026.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 12 and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

In reference to claim 12, this claim has been amended to incorrectly depend upon itself since claim 11, from which claim 12 previously depended, has now been cancelled. The Examiner believes claim 12 should instead depend upon claim 10 and will interpret the claim as such; however, as the claim currently stands, it is indefinite because it fails to particularly point out and distinctly claim that which Applicant regards as the invention, since its dependency is not particularly pointed out and distinctly claimed. An appropriate correction is required. Note that claim 13 is inherently included in this rejection since it has been amended to directly depend upon claim 12.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-8, 10, 12-17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wilson et al. (U.S. Patent 11,756,567) and Yang et al. (U.S. Publication 2019/0392596).

In reference to claim 1, Wilson et al. discloses a method (see column 1, lines 53-56 and Figures 2-3, wherein Wilson et al. discloses a computer-implemented method, program product and system for generating conversational image representations.)
comprising:

identifying, by a processor, a geographic location and a current time (see columns 3-4, lines 51-34, column 7, lines 6-15, column 9, lines 51-65, column 10, lines 55-57, column 14, lines 4-7 and 24-34, Figure 1 and #400-401 of Figure 4, wherein Wilson et al. discloses the computing device executing a software program via a processor. Wilson et al. discloses the program receiving a location of a user from the GPS of the user's computing device along with metadata which comprises information regarding a time of viewing of an article or an action taken by the user on the computing device.);

retrieving, by the processor from a database, a weather condition of the geographic location at the current time (see column 5, lines 26-38, column 9, lines 60-65, columns 10-11, lines 55-11 and Figures 2-3, wherein Wilson et al. discloses the computing device comprising a database or a repository for data used by the program. Wilson et al. gives examples of the types of data stored in the database, one being "historical generated image representations." Wilson et al. goes on to disclose the program, after having determined the user's location via GPS, retrieving and determining associated weather data based on a date of a user's conversation or a date that a topic was discussed in a user's conversation/chat. Note that the date as utilized in this context must inherently comprise "time" information; additionally, as seen above, Wilson et al. also explicitly discloses time information as being retrieved via metadata.);

retrieving, by the processor from an image database, a photographic image of a real-world scene based on the geographic location (see column 5, lines 26-38, column 11, lines 12-13 and Figures 2-3, wherein Wilson et al. explicitly discloses the program next generating image representations of a conversation based on identified locations. Again, Wilson et al. discloses the computing device comprising a database or a repository for data used by the program, and gives examples of the types of data stored therein, one being "historical generated image representations." Lastly, Wilson et al. explicitly discloses a database for storing GIS (geographic information system) data, which at least inherently includes "imagery" (e.g., satellite and/or aerial) as would be clear to one of ordinary skill in the art.);

creating, via a generative machine learning model executed by the processor that takes the photographic image of the geographic location and the weather condition as input, a digital image depicting the geographic location being visibly affected by the weather condition, wherein the generative machine learning model is configured to receive structured input that comprises the weather condition (see column 5, lines 21-46, column 6, lines 32-37, column 11, lines 12-24 and Figures 1-3, wherein Wilson et al. discloses the computing device comprising an image generator model in the form of a generative adversarial network and the program generating an image representation of the conversation using the image generator model. Wilson et al. further discloses the model generating a backdrop for the image from the identified location utilizing a trained location model which incorporates the determined associated weather into the location, resulting in an accurate generated backdrop. Wilson et al. explicitly gives the examples of showing weather such as snowing, hot, or raining for the location. Wilson et al.
discloses the database, from which information is retrieved by the program and utilized in the image generation techniques thereof, storing a multitude of information, some of which is formatted with metadata while other information comprises "configuration files," both of which the Examiner interprets as functionally equivalent to Applicant's "structured input," since at least the "configuration file" inherently follows a known layout that defines at least parameters and associated values.), and wherein the generative machine learning model modifies visual characteristics of physical structures and natural features present in the photographic image to reflect effects of the weather condition; and

causing display, by the processor, of the digital image in an application (see columns 11-12, lines 65-6, wherein Wilson et al. discloses the program further presenting the generated image representations on a display of the computing device.).

Although Wilson et al. does disclose utilizing a database to store and retrieve information therefrom for use in processing by the program, and further discloses retrieving weather and date/time information for the user's location, Wilson et al. does not explicitly disclose retrieving the weather information from a database. It is well known in the art of image data processing to retrieve information from databases. Using databases to store information, in this case weather information, allows for efficient querying and retrieval of such data while also allowing data sharing and integration into multiple systems (Official Notice). It would have been obvious to one of ordinary skill in the art, given that Wilson et al. already teaches utilizing a database to store information and further teaches retrieving weather information, to use a database to store and retrieve such weather information, because it is well known in the art that using databases to store weather information allows for efficient querying and retrieval of such data while also allowing data sharing and integration into multiple systems.

Further, although it can be interpreted that Wilson et al. does disclose a database storing "photographic images" via at least GIS-type database-stored data (see column 5, lines 25-38) and utilizing a generative adversarial network and program for generating an image representation of a conversation, Wilson et al. does not explicitly disclose the model modifying visual characteristics of physical structures and natural features present in the photographic images to reflect weather conditions. Yang et al. discloses methods, apparatus and computer-readable media for detecting and removing noise caused by transient obstructions such as clouds from high elevation digital images of geographic areas (see paragraph 3). Yang et al. discloses utilizing a GAN to train a generator model to retrieve one or more high elevation digital images and apply them as input across the generator model for, in one embodiment, removing clouds from the images (see paragraphs 35-36, 54-55 and Figure 3). Yang et al. further discloses an alternate embodiment wherein the invention instead adds synthetic cloud/weather data into the high elevation images (see paragraph 58 and Figure 4). Yang et al. lastly also discloses generating obstruction-free versions of the digital images in which those pixels that depict clouds, snow or other transient obstructions are replaced with replacement data that estimates/predicts the actual terrain that underlies those pixels, such as buildings, roads, forest, vegetation, sand, etc. (see paragraphs 36, 39-40 and Figures 3-5). It would have been obvious to one of ordinary skill in the art at the time of filing of the invention to implement the digital image replacement and synthesis techniques of Yang et al. with the image generation techniques of Wilson et al. in order to create a more realistic conversational image generation system in Wilson et al. by implementing real-world imagery and the replacement/synthesis techniques of Yang et al., thereby producing a more visually appealing output. (Further see Response to Arguments below.)
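As a reading aid for the claim-1 mapping above, the following is a minimal sketch of the pipeline the claim recites: identify a location and time, retrieve a weather condition and a photographic image from databases, condition a generative model on both as structured input, and display the result. Every name, table layout and schema below is hypothetical, and the generative step is stubbed, since neither the claim nor the cited art dictates a particular implementation.

import sqlite3
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class WeatherCondition:
    # The "structured input" of the claims: named parameters with
    # associated values (hypothetical schema).
    condition: str
    temperature_c: float
    precipitation_mm: float

def retrieve_weather(db: sqlite3.Connection, lat: float, lon: float,
                     at: datetime) -> WeatherCondition:
    # Claimed step: retrieve the weather condition for the location and
    # time from a database (illustrative table layout).
    row = db.execute(
        "SELECT condition, temp_c, precip_mm FROM weather "
        "WHERE lat = ? AND lon = ? AND observed_at <= ? "
        "ORDER BY observed_at DESC LIMIT 1",
        (lat, lon, at.isoformat())).fetchone()
    return WeatherCondition(*row)

def retrieve_photo(db: sqlite3.Connection, lat: float, lon: float) -> bytes:
    # Claimed step: retrieve a photographic image of the real-world
    # scene from an image database.
    (blob,) = db.execute(
        "SELECT image FROM photos WHERE lat = ? AND lon = ? LIMIT 1",
        (lat, lon)).fetchone()
    return blob

def apply_weather_model(photo: bytes, weather: WeatherCondition) -> bytes:
    # Stub for the generative step: a real system would condition an
    # image-to-image model on `weather` and modify the visual
    # characteristics of structures and natural features in `photo`.
    raise NotImplementedError

def render_weather_image(db: sqlite3.Connection, lat: float, lon: float) -> bytes:
    # End-to-end claimed flow; the returned image would then be
    # displayed in an application.
    now = datetime.now(timezone.utc)
    weather = retrieve_weather(db, lat, lon, now)
    photo = retrieve_photo(db, lat, lon)
    return apply_weather_model(photo, weather)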
In reference to claims 3 and 12, Wilson et al. and Yang et al. disclose all of the claim limitations as applied to claims 1 and 10 respectively above. Wilson et al. goes on to disclose the program, after having determined the user's location via GPS, retrieving and determining associated weather data based on a date of a user's conversation or a date that a topic was discussed in a user's conversation/chat (see column 5, lines 26-38). Wilson et al. also explicitly discloses time information as being retrieved via metadata (see column 9, lines 60-65).

In reference to claims 4 and 13, Wilson et al. and Yang et al. disclose all of the claim limitations as applied to claims 1 and 10 respectively above. Wilson et al. further discloses the program allowing for stylistic transforms of the image to be executed, including adjusting the colors of multiple elements for presentation in the generated/displayed image (see column 12, lines 15-21). Wilson et al. explicitly discloses the database, from which information is retrieved by the program and utilized in the image generation techniques thereof, storing a multitude of information, some of which is formatted with metadata while other information comprises "configuration files" (see column 5, lines 21-46), both of which the Examiner interprets as functionally equivalent to Applicant's "structured input." Neither Wilson et al. nor Yang et al. explicitly discloses, however, utilizing a "color palette." At the time the invention was filed, it would have been obvious to one of ordinary skill in the art to modify the stylistic color and image adjustment techniques of Wilson et al. and Yang et al. to include input and usage of a color palette. Applicant has not disclosed that explicitly utilizing a "color palette" instead of colors specified for particular elements as in Wilson et al. provides an advantage, is used for a particular purpose, or solves a stated problem. One of ordinary skill in the art, furthermore, would have expected Applicant's invention to perform equally well with the teachings of Wilson et al. and Yang et al. because the exact amount and configuration of colors is a matter of engineering design choice as preferred by the inventor and/or that best suits the application at hand. Therefore, it would have been obvious to one of ordinary skill in this art to modify Wilson et al. and Yang et al. to obtain the invention as specified in claims 4 and 13 respectively.

In reference to claims 5 and 14, Wilson et al. and Yang et al. disclose all of the claim limitations as applied to claims 1 and 10 respectively. Wilson et al.
explicitly discloses the program detecting one or more utterances input by a user and using the utterances as inputs to the image generator model to produce the generated image (see column 1, lines 60-65, columns 6-7, lines 53-15 and Figure 3). Wilson et al. further explicitly discloses utilizing NLP and linguistic analysis techniques which involve tagging parts of speech (see column 8, lines 12-29), which the Examiner interprets as functionally equivalent to defining a "predefined structured" data format.

In reference to claims 6, 7, 15 and 16, Wilson et al. and Yang et al. disclose all of the claim limitations as applied to claims 1 and 10 respectively. Although Wilson et al. discloses the program presenting the generated image representations on a display of the computing device (see columns 11-12, lines 65-6 and Figures 2-3), neither Wilson et al. nor Yang et al. explicitly discloses the generated image comprising a cartoon version or an animated version of the image. At the time the invention was filed, it would have been obvious to one of ordinary skill in the art to generate the image as in Wilson et al. and Yang et al. with any type of "style" or "artistic rendering," including cartoons or animations, especially since Wilson et al. already teaches adjusting "stylistic" characteristics of the image (see the above rejection of claims 4 and 13). Applicant has not disclosed that explicitly producing a "cartoon" or "animation" provides an advantage, is used for a particular purpose, or solves a stated problem. One of ordinary skill in the art, furthermore, would have expected Applicant's invention to perform equally well with the image generation techniques of Wilson et al. and Yang et al. because the exact "style" or "artistic rendering" of the output image is a matter of design choice as preferred by the inventor and/or that best suits the application at hand. Therefore, it would have been obvious to one of ordinary skill in this art to modify Wilson et al. and Yang et al. to obtain the invention as specified in claims 6, 7, 15 and 16 respectively.

In reference to claims 8 and 17, Wilson et al. and Yang et al. disclose all of the claim limitations as applied to claims 1 and 10 respectively. Again, Wilson et al. discloses the computing device comprising a database or a repository for data used by the program, and gives examples of the types of data stored in the database, one being "historical generated image representations" (see column 5, lines 26-38, column 11, lines 12-13 and Figures 2-3). Wilson et al. further explicitly discloses the database stored on a server computing device; however, it also states that the computing device could comprise persistent storage to store such a database (see column 4, lines 47-61, column 5, lines 21-23, column 14, lines 4-7 and 32-51, and Figures 1 and 4). Wilson et al. explicitly discloses a database for storing GIS (geographic information system) data, which at least inherently includes "imagery" (e.g., satellite and/or aerial) as would be clear to one of ordinary skill in the art (see column 5, lines 25-38). Note that, in order to access the database, a "query" must at least inherently be made thereto by the processing of Wilson et al.

In reference to claim 10, claim 10 is similar in scope to claim 1 and is therefore rejected under like rationale.
In addition to the rationale as applied in the rejection of claim 1 above, claim 10 further recites, "A non-transitory computer-readable storage medium for tangibly storing computer program instructions capable of being executed by a computer processor…" As indicated above, Wilson et al. discloses the computing device storing a software program in a memory, the program being executed via a processor (see columns 3-4, lines 51-34 and column 14, lines 24-34). Yang et al. discloses the invention implemented via a computer system comprising at least one processor and software modules executed by the processor which are stored in memory (see paragraphs 67, 70 and Figure 7).

In reference to claim 19, claim 19 is similar in scope to claim 1 and is therefore rejected under like rationale. In addition to the rationale as applied in the rejection of claim 1 above, claim 19 further recites, "A device comprising: a processor; and a storage medium for tangibly storing instructions thereon logic for execution by the processor, the logic comprising instructions for…" As indicated above, Wilson et al. discloses the computing device storing a software program in a memory, the program being executed via a processor (see columns 3-4, lines 51-34 and column 14, lines 24-34). Yang et al. discloses the invention implemented via a computer system comprising at least one processor and software modules executed by the processor which are stored in memory (see paragraphs 67, 70 and Figure 7).

Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wilson et al. (U.S. Patent 11,756,567) and Yang et al. (U.S. Publication 2019/0392596), and further in view of Schaaf et al. (U.S. Patent 9,405,741).

In reference to claims 9 and 18, Wilson et al. and Yang et al. disclose all of the claim limitations as applied to claims 1 and 10 respectively. Although Wilson et al. discloses the program detecting one or more utterances input by a user and using the utterances as inputs to the image generator model to produce the generated image, the utterance detection involving NLP and linguistic analysis techniques (see column 1, lines 60-65, columns 6-7, lines 53-15, column 8, lines 12-29 and Figure 3), neither Wilson et al. nor Yang et al. explicitly discloses constraining the machine learning model to avoid offensive output. Schaaf et al. discloses a computing device for detecting utterances that can be processed by a spoken language processing system to recognize speech (see column 2, lines 17-20). Schaaf et al. discloses the device utilizing a variety of machine models specifically for spoken language processing (see column 2, lines 35-49). Schaaf et al. discloses the device comprising a natural language understanding module and an output generator that generates a response to a user's utterance by processing the detected speech using a global output filter model (see column 8, lines 18-21 and column 9, lines 25-34). Schaaf et al. discloses such a filter model determining offensive content by "scoring" a level of profanity for offensive words and determining whether to modify a portion of the output based upon such a score (see column 9, lines 34-52). It would have been obvious to one of ordinary skill in the art at the time of filing of the invention to implement the offensive content model filtering techniques of Schaaf et al. with the conversational image generation techniques of Wilson et al. and Yang et al. in order to determine a level of potentially offensive content to be allowed or introduced in an output, thereby creating a customizable processing solution on a per-user level (see for example column 11, lines 26-36, columns 11-12, lines 60-11 of Schaaf et al.).
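For context on the Schaaf combination, the filter as summarized above amounts to scoring candidate output words for profanity and modifying the output when a score crosses a (per-user) threshold. A toy sketch of that mechanism follows; the function name, lexicon, scores and threshold are invented for illustration and are not Schaaf's actual data.

def filter_output(text: str, scores: dict[str, float],
                  threshold: float = 0.5) -> str:
    # Score each output word against a profanity lexicon and redact
    # words whose score meets the threshold.
    redacted = []
    for word in text.split():
        score = scores.get(word.lower(), 0.0)
        redacted.append("*" * len(word) if score >= threshold else word)
    return " ".join(redacted)

# Illustrative only: a tiny scored lexicon; "darn" is redacted, "heck" kept.
print(filter_output("well darn that heck", {"darn": 0.8, "heck": 0.3}))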
Response to Arguments

The cancellation of claims 2, 11 and 20 is noted. Applicant's arguments filed 02/10/2026 have been fully considered but they are not persuasive.

In reference to claims 1, 3-10 and 12-19, Applicant argues the 35 USC 103 rejections of the claims based upon Wilson et al., Yang et al. and Schaaf et al. (see pages 7-11 of Applicant's Remarks). In particular, Applicant argues that Wilson et al. does not describe retrieving a photographic image of a real-world scene and then modifying visual characteristics of physical structures and natural features in the photographic image to reflect effects of a weather condition, but instead generates an entirely synthetic, stylized comic image without taking an existing photographic image and transforming it (see 1st paragraph, page 8 of Applicant's Remarks). Applicant goes on to summarize Wilson et al. as solely rendering a cartoon-like scene with weather elements and not modifying the visual characteristics of physical structures and natural features present in a photographic image (see 1st paragraph, page 8 of Applicant's Remarks).

In response, the Examiner is not persuaded. Wilson et al. discloses determining the user's location using video/image recognition and/or GPS (see at least columns 10-11, lines 55-11). Wilson et al. explicitly discloses a database for storing GIS (geographic information system) data, which at least inherently includes "imagery" (e.g., satellite and/or aerial) (see column 5, lines 26-46). Wilson et al. further discloses the model generating a backdrop for an image from the identified location (see at least column 11, lines 25-28). The independent claims specifically recite, "identifying…a geographic location…retrieving…a photographic image of a real-world scene based on the geographic location; creating, via a generative machine learning model…that takes the photographic image of the geographic location…as input, a digital image depicting the geographic location being visibly affected by the weather condition…and…modifies visual characteristics of physical structures and natural features present in the photographic image…" (see for example, claim 1). Applicant's arguments appear to be based on the interpretation that the digital image created must be a replica of sorts of the photographic image modified with weather effects; however, the claims do not prohibit "stylized comic images" from reading thereupon. In other words, even assuming Applicant's summarization of Wilson et al. as solely creating "stylized comic images," such an interpretation would indeed read upon the claim language. In particular, the image generated by Wilson et al. clearly comprises a "backdrop" which is based upon a location of the user, the location further being based upon videos/images and/or GPS data. Even further, Wilson et al. explicitly discloses a database for storing GIS (geographic information system) data, which at least inherently includes "imagery" (e.g., satellite and/or aerial) (see column 5, lines 26-46), which would clearly disclose the "photographic image of a real-world scene" limitations as argued.
The arguments regarding the limitations of "modifies visual characteristics of physical structures and natural features present in the photographic image" can be seen as taught by Yang et al.; the Examiner therefore defers those remarks to later in this response. However, as to the arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Further, Applicant argues that Wilson et al. does not describe the structured input claim limitations, indicating that the inputs into the generative model in Wilson et al. are derived from natural language processing and not from weather information in structured input database sources (see 2nd paragraph, page 8 of Applicant's Remarks). In response, the Examiner is not persuaded. Wilson et al. does disclose a database storing "photographic images" via at least GIS-type database-stored data, and utilizing a generative adversarial network and program for generating an image. Wilson et al. discloses the database, from which information is retrieved by the program and utilized in the image generation techniques thereof, storing a multitude of information, some of which is formatted with metadata while other information comprises "configuration files," both of which the Examiner interprets as functionally equivalent to Applicant's "structured input," since at least the "configuration file" inherently follows a known layout that defines at least parameters and associated values. It is well known in the art of image data processing to retrieve information from databases. Using databases to store information, in this case weather information, allows for efficient querying and retrieval of such data while also allowing data sharing and integration into multiple systems (Official Notice). It would have been obvious to one of ordinary skill in the art, given that Wilson et al. already teaches utilizing a database to store information, teaches retrieving weather information, and teaches the use of "structured inputs" via at least database storage techniques and/or configuration files, to use a database to store and retrieve such weather information, because it is well known in the art that using databases to store weather information allows for efficient querying and retrieval of such data while also allowing data sharing and integration into multiple systems. Therefore, taking this rationale in combination with that which is taught by Wilson et al., the Examiner believes the claim limitations as argued by Applicant are not novel over the prior art.
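To make the "known layout that defines parameters and associated values" point concrete, here is one hypothetical shape a parsed configuration-file payload of structured weather input could take. None of these field names come from Wilson et al. or from the application; they are illustrative only.

# Hypothetical parsed "configuration file" payload: a fixed layout of
# named parameters with associated values, i.e. structured input.
weather_input = {
    "location": {"lat": 40.7128, "lon": -74.0060},
    "observed_at": "2026-02-10T14:00:00Z",
    "condition": "snow",
    "temperature_c": -3.5,
    "wind_kph": 22.0,
}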
Additionally, Applicant argues that Yang et al.'s addition of synthetic obstructions to image data is simply performed by placing pixels over unchanged terrain data, and further that such processing takes place solely in a training stage without output (see pages 8-9 of Applicant's Remarks). In response, the Examiner is not persuaded. The Examiner points to the specific language of the independent claims, which recite, "…modifies visual characteristics of physical structures and natural features present in the photographic image to reflect effects of the weather condition…" (see claim 1 for example). Yang et al. discloses generating obstruction-free versions of the digital images in which those pixels that depict clouds, snow or other transient obstructions are replaced with replacement data that estimates/predicts the actual terrain that underlies those pixels, such as buildings, roads, forest, vegetation, sand, etc. Even assuming Yang et al. simply adds such clouds or snow to the image data, the claim language again simply requires that the method "modifies visual characteristics of physical structures and natural features." In other words, by overlaying clouds or snow, the "visual characteristic" of occlusion of such "buildings, roads" and "forest, vegetation" is modified. It seems perhaps Applicant desires the claim language to signify more than is actually recited when, for example, Applicant further argues specifics of such modifications via language found solely within the specification (see explicitly the last 2 sentences in paragraph 2, page 9 of Applicant's Remarks). It is noted that such features are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Further, as to the argument that the techniques are applied solely to training data, the Examiner responds that the "purpose" for which the actual produced data is used is not critical in the context of these limitations. The mere fact that Yang et al. teaches the created modification to digital images but does not actually display the data does not preclude that the limitations are actually performed and taught. Furthermore, Applicant's arguments are directed against the Yang et al. reference individually without taking into account the combination of the cited prior art. One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
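The pixel-level operations attributed to Yang et al. above, replacing obstructed pixels with predicted terrain or, in the inverse embodiment, blending synthetic clouds over terrain, reduce to masked substitution and alpha blending. A minimal NumPy sketch under those assumptions; all function names and inputs are hypothetical.

import numpy as np

def replace_obstructed_pixels(image: np.ndarray, obstruction_mask: np.ndarray,
                              predicted_terrain: np.ndarray) -> np.ndarray:
    # Where the boolean mask flags clouds/snow, substitute the model's
    # estimate of the underlying terrain (buildings, roads, vegetation, ...).
    out = image.copy()
    out[obstruction_mask] = predicted_terrain[obstruction_mask]
    return out

def add_synthetic_clouds(image: np.ndarray, cloud_layer: np.ndarray,
                         alpha: np.ndarray) -> np.ndarray:
    # Inverse embodiment: blend a synthetic cloud layer over the image,
    # weighted per pixel by an opacity map in [0, 1].
    return (alpha[..., None] * cloud_layer +
            (1.0 - alpha[..., None]) * image).astype(image.dtype)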
Lastly, Applicant argues that the cited prior art rejection and cited prior art references are constructed via impermissible hindsight (see page 10 of Applicant's Remarks). In response to Applicant's argument that the Examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).

In view of the above responses, the Examiner deems the rejections based upon the cited prior art Wilson et al., Yang et al. and Schaaf et al. proper.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Antonio Caschera whose telephone number is (571) 272-7781. The examiner can normally be reached Monday-Friday between 6:30 AM and 2:30 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said Broome, can be reached at (571) 272-2931.

Any response to this action should be mailed to: Mail Stop ____________, Commissioner for Patents, P.O. Box 1450, Alexandria, VA 22313-1450, or faxed to: 571-273-8300 (Central Fax). See the listing of "Mail Stops" at http://www.uspto.gov/patents/mail.jsp and include the appropriate designation in the address above. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the Technology Center 2600 Customer Service Office whose telephone number is (571) 272-2600.

/Antonio A Caschera/
Primary Examiner, Art Unit 2612
2/23/26

Prosecution Timeline

Oct 04, 2023: Application Filed
May 09, 2025: Non-Final Rejection (§103, §112)
Aug 14, 2025: Response Filed
Aug 14, 2025: Response after Non-Final Action
Aug 25, 2025: Response Filed
Nov 07, 2025: Final Rejection (§103, §112)
Feb 10, 2026: Request for Continued Examination
Feb 18, 2026: Response after Non-Final Action
Feb 24, 2026: Non-Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602858: Rendering Method and Apparatus, and Device (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602849: IMAGE GENERATION USING ONE-DIMENSIONAL INPUTS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12586157: Methods and Systems for Modifying Hair Characteristics in a Digital Image (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573328: Display device and display calibration method (granted Mar 10, 2026; 2y 5m to grant)
Patent 12562141: DISPLAY DEVICE, DISPLAY SYSTEM, AND DISPLAY DRIVING METHOD (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 95% (+7.9%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 1019 resolved cases by this examiner. Grant probability derived from career allow rate.
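The headline figures are mutually consistent if the interview lift is read as additive percentage points on top of the career allow rate; that reading is our assumption, but the displayed values support it:

granted, resolved = 889, 1019
base = granted / resolved          # 0.872... -> displayed as 87%
with_interview = base + 0.079      # +7.9-point interview lift
print(f"{base:.0%} base, {with_interview:.0%} with interview")  # 87% base, 95% with interview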
