Prosecution Insights
Last updated: April 19, 2026
Application No. 18/319,568

GENERATIVE ADVERSARIAL NETWORK (GAN) ENABLED VEHICLE WINDSCREEN

Status: Non-Final OA (§103)
Filed: May 18, 2023
Examiner: MCCOY, AIDAN WILLIAM
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: International Business Machines Corporation
OA Round: 5 (Non-Final)
Grant Probability: 50% (Moderate)
Expected OA Rounds: 5-6
Estimated Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (1 granted / 2 resolved; -12.0% vs TC avg)
Interview Lift: +100.0% (resolved cases with interview vs without)
Typical Timeline: 2y 9m avg prosecution; 25 currently pending
Career History: 27 total applications across all art units

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 52.9% (+12.9% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 22.4% (-17.6% vs TC avg)

Comparisons are against the Tech Center average estimate. Based on career data from 2 resolved cases.

Office Action: Non-Final Rejection (§103)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendments filed March 10, 2026 have been entered. Claims 1-5, 7-12, and 14-19 remain pending in the application. As discussed in the interview summary mailed on March 11, 2026, Applicant's amendments to the claims have overcome the 35 U.S.C. 103 rejections previously set forth in the final office action mailed February 23, 2026; however, new grounds of rejection have been entered as described below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7-10, and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Saxena (US 2023/0091837 A1) in view of Lee (US 2019/0147582 A1), Kanazawa (US 2022/0301227 A1), Seo (US 2023/0012645 A1), Zhang (US 2021/0125076 A1), Satoshi (JP 2020095544 A), and H. Yao, L. Chuyi, H. Dan and Y. Weiyu, "Gabor Feature Based Convolutional Neural Network for Object Recognition in Natural Scene," 2016 3rd International Conference on Information Science and Control Engineering (ICISCE), Beijing, China, 2016, pp. 386-390, doi: 10.1109/ICISCE.2016.91 (hereinafter "Yao").
Regarding claim 1, Saxena teaches a computer-implemented method comprising: monitoring, via reference to one or more sensors, visibility of a windscreen (paragraph [0063]), wherein monitoring the visibility comprises generating a visibility score (Fig. 4 #402 & paragraph [0057]); converting, dynamically, the windscreen into a display surface (paragraph [0037]) in response to the visibility score falling below a predetermined threshold (Fig. 4 #402); analyzing an external sensor feed of a vehicle to identify visibility of a surrounding area (Fig. 1); comparing live images of the surrounding area with a series of new synthetic images (paragraphs [0036]-[0039]); identifying how much of the surrounding area is sufficient for a driver to safely drive the vehicle (Figs. 2 & 4, paragraphs [0028]-[0030]) to perform real-time image adaptation on a transparent display layer of the windscreen (paragraphs [0026] & [0037]) based on a context of the surrounding area (Figs. 2 & 4, paragraphs [0027]-[0030]); and rendering, in real-time, the adaptation of the surrounding area of the vehicle (paragraphs [0037] & [0038]) on a transparent display layer of the windscreen of the vehicle (paragraph [0037]).

Saxena describes the continual updating of images on the windscreen and the gathering of image data in the driver's view. It further synthesizes images based on what object is obscured, and continues by tracking change in the driver's view image in relation to the synthetic image to continually update the interposed synthetic image. The driver's view image is analogous to the live image of the claimed invention, and the comparison between synthetic images and live images is shown through the continual updating of the interposed image based on various differences between the driver's view and the synthetic images.
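The threshold-triggered behavior mapped above (generate a visibility score, then dynamically convert the windscreen into a display surface once the score falls below a predetermined threshold) can be sketched in a few lines. This is an illustrative toy model, not code from any cited reference; the `Windscreen` class, its `update` method, and the 0.4 threshold are all hypothetical choices.

```python
# Toy sketch of threshold-triggered display conversion (illustrative only).

class Windscreen:
    def __init__(self, threshold=0.4):
        # Predetermined visibility threshold; value chosen for illustration.
        self.threshold = threshold
        self.display_mode = False

    def update(self, visibility_score):
        """Enter display mode on low visibility; revert when it recovers."""
        self.display_mode = visibility_score < self.threshold
        return self.display_mode

ws = Windscreen(threshold=0.4)
# Clear view, then fog rolls in, then it clears again.
states = [ws.update(s) for s in [0.9, 0.35, 0.8]]
```

Here `states` toggles the display on only for the low-visibility frame, mirroring the claim's "in response to the visibility score falling below a predetermined threshold" language.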
Saxena fails to teach generating a visibility score based upon video and image frames captured by a camera inside a vehicle, wherein generating the visibility score comprises extracting convolutional neural network (CNN) object identification features and Gabor filter-based features from the captured image frames, and calculating the visibility score over a time series of the captured image frames; comparing live images of the surrounding area with a series of new synthetic images via a generative adversarial network (GAN) discriminator; identifying how much of the surrounding area to be displayed is sufficient for a driver to safely drive the vehicle, based on available processing capability to perform GAN based image adaptation and available battery power; determining a similarity threshold of the compared live images of the surrounding area with the series of new synthetic images; scoring the series of new synthetic images as highly representative if determined to be above a pre-defined threshold; and initiating, dynamically, a GAN enabled adaptation of the surrounding area of the vehicle.

However, Lee teaches processing the captured video and image frames that provides object identification (paragraphs [0020], [0031], [0034], [0036], [0061]); comparing live images of the surrounding area with a series of new synthetic images via a generative adversarial network (GAN) discriminator (Fig. 2 #230, paragraph [0005]); performing GAN based image adaptation (Fig. 5, paragraphs [0005]-[0008]); determining a similarity threshold of the compared live images of the surrounding area with the series of new synthetic images (paragraph [0050]); scoring the series of new synthetic images as highly representative if determined to be above a pre-defined threshold (Fig. 4, paragraph [0043]); and initiating, dynamically, GAN enabled adaptation of the surrounding area of the vehicle (Fig. 5). Lee describes the use of a GAN enabled adaptation and a GAN discriminator.
Lee describes the determination of whether an image is real or fake; this uses a similarity threshold described as a mapping function which, when meeting a certain criterion, determines whether an image is real or fake. Lee also describes various loss functions which are used to quantify how representative the SPIGAN model is in order to improve its training in generating the synthetic images, analogous to the scoring of synthetic images. The goal of these loss and mapping functions is to optimize the images to make them photorealistic, analogous to determining a synthetic image is "highly representative". Lee also describes identifying objects and generating object identification information with the use of a predictor.

Saxena and Lee are both considered analogous to the claimed invention because they are in the same field of image data processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Saxena to incorporate the teachings of Lee and implement a GAN for the purpose of alleviating visibility issues in a vehicle.

Saxena in view of Lee fails to teach wherein generating the visibility score comprises extracting convolutional neural network (CNN) object identification features and Gabor filter-based features from the captured image frames; calculating the visibility score over a time series of the captured image frames; identifying how much of the surrounding area is sufficient for a driver to safely drive the vehicle based on available processing capability and available battery power; a convolutional neural network (CNN) that provides object identification; generating a visibility score based upon video and image frames captured by a camera inside a vehicle; and wherein the visibility score is calculated over a time series of captured video and image frames (emphasis added).
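The similarity-threshold scoring attributed to Lee (compare live images with synthetic images, then mark a synthetic frame "highly representative" when similarity exceeds a pre-defined threshold) can be illustrated with a toy stand-in. A real system would use a trained GAN discriminator; the mean-absolute-difference `similarity` function below, all names, and the 0.9 threshold are hypothetical placeholders, not anything from the cited references.

```python
# Toy stand-in for GAN-discriminator-style scoring of synthetic frames
# against a live frame (illustrative only; frames are flat pixel lists).

def similarity(live, synthetic):
    """Return a similarity in [0, 1]; 1.0 means identical frames."""
    assert len(live) == len(synthetic)
    mad = sum(abs(a - b) for a, b in zip(live, synthetic)) / len(live)
    return 1.0 - mad / 255.0  # pixel values assumed in 0..255

def score_synthetics(live, synthetics, threshold=0.9):
    """Label each synthetic frame 'highly representative' if its
    similarity to the live frame meets the pre-defined threshold."""
    return [
        {"similarity": s, "highly_representative": s >= threshold}
        for s in (similarity(live, syn) for syn in synthetics)
    ]

live = [120, 130, 140, 150]
synths = [[121, 131, 139, 151],   # near-identical -> passes threshold
          [0, 255, 0, 255]]       # noise -> fails threshold
results = score_synthetics(live, synths)
```

The design point is the claim's two-stage structure: a continuous similarity measure first, then a hard pre-defined threshold that decides which synthetic frames qualify for rendering.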
However, Kanazawa teaches training a model based on available processing capability and available battery power (paragraphs [0137]-[0139]). Kanazawa describes customizing a model based on a device and its resource constraints, and suggests the use of battery power and processing capabilities in its description of these constraints. Kanazawa is considered analogous to the claimed invention as it is in the same field of image processing and machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Kanazawa with Saxena in view of Lee in order to incorporate device information such as available processing capability and battery power. The motivation for doing so would be to allow for increased model complexity, which can aid in accuracy or prediction speed (paragraphs [0136] & [0138]).

Saxena in view of Lee and Kanazawa fails to teach wherein generating the visibility score comprises extracting convolutional neural network (CNN) object identification features and Gabor filter-based features from the captured image frames; calculating the visibility score over a time series of the captured image frames; a convolutional neural network (CNN) that provides object identification; and generating a visibility score based upon video and image frames captured by a camera inside a vehicle.

However, Seo teaches a convolutional neural network (CNN) that provides object identification (paragraph [0113]). Seo is considered analogous to the claimed invention as it is in the same field of image data processing. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Seo with Saxena in view of Lee and Kanazawa in order to implement a quick and efficient object identification method.
Saxena in view of Lee, Kanazawa and Seo fails to teach wherein generating the visibility score comprises extracting convolutional neural network (CNN) object identification features and Gabor filter-based features from the captured image frames; calculating the visibility score over a time series of the captured image frames; and video and image frames captured by a camera inside a vehicle.

However, Zhang teaches calculating the visibility score over a time series of the captured image frames (paragraph [0146]). Zhang describes time series data including visibility information. Zhang is considered analogous to the claimed invention as it is in the same field of vehicle detection and prediction systems. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the determination of a visibility level with the specification of it being known over a time series, to improve the performance of a machine learning model.

Saxena in view of Lee, Kanazawa, Seo and Zhang fails to teach wherein generating the visibility score comprises extracting convolutional neural network (CNN) object identification features and Gabor filter-based features from the captured image frames; and video and image frames captured by a camera inside a vehicle.

However, Satoshi teaches visibility based upon video and image frames captured by a camera inside a vehicle (last paragraph of pg. 2 - first paragraph of pg. 3; last paragraph of pg. 4). Satoshi describes a system of object detection within a vehicle. This system comprises an interior camera that is used in determining visibility information, such as the region of visibility for a driver. Satoshi is considered analogous to the claimed invention as it is in the same field of computer graphics in relation to vehicles.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Satoshi and take into account information gathered with a camera inside a vehicle to improve determining the visibility of a driver.

Saxena in view of Lee, Kanazawa, Seo, Zhang and Satoshi fails to teach wherein generating the visibility score comprises extracting convolutional neural network (CNN) object identification features and Gabor filter-based features from the captured image frames.

However, Yao teaches extracting convolutional neural network (CNN) object identification features and Gabor filter-based features from the captured image frames (title, abstract, Section III.B "Gabor feature based CNN"). Yao describes a system for object recognition that utilizes Gabor filter-based features from image data. Yao also describes saving the parameters of a pre-trained CNN, which can be considered analogous to extracting CNN object identification features. Yao is considered analogous to the claimed invention as it is in the same field of computer vision and machine learning. Therefore, it would have been obvious to one of ordinary skill in the art to utilize the teachings of Yao to improve the visibility score of Saxena in view of Lee, Kanazawa, Seo, Zhang and Satoshi, as those teachings are shown to improve object identification accuracy. Furthermore, because the visibility determination of Saxena includes consideration of object detection, it would be obvious to try implementing a method which has been shown to improve object detection accuracy, such as that of Yao, to improve visibility detection.
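The multi-feature visibility score at the heart of the disputed limitation (CNN object-identification features plus Gabor filter-based features, calculated over a time series of captured frames) can be sketched as a weighted per-frame combination averaged across frames. The two feature functions below are crude placeholders for an actual CNN and Gabor filter bank, and every name, weight, and value is illustrative only.

```python
# Toy sketch of a multi-feature visibility score over a time series of
# frames (flat pixel lists). Placeholders stand in for real networks.

def cnn_feature(frame):
    """Placeholder for a CNN object-identification confidence in [0, 1]."""
    return min(1.0, sum(frame) / (255.0 * len(frame)))

def gabor_feature(frame):
    """Placeholder for aggregate Gabor-filter edge energy in [0, 1]:
    strong local contrast between neighbors suggests visible structure."""
    diffs = [abs(a - b) for a, b in zip(frame, frame[1:])]
    return min(1.0, sum(diffs) / (255.0 * max(1, len(diffs))))

def visibility_score(frames, w_cnn=0.5, w_gabor=0.5):
    """Weighted per-frame score, averaged over the captured time series."""
    per_frame = [
        w_cnn * cnn_feature(f) + w_gabor * gabor_feature(f) for f in frames
    ]
    return sum(per_frame) / len(per_frame)

clear = [[200, 40, 210, 30], [190, 50, 205, 35]]      # high-contrast frames
foggy = [[128, 127, 129, 128], [128, 128, 127, 128]]  # low-contrast frames
```

Under this sketch the high-contrast sequence scores above the low-contrast one, matching the intuition that fog flattens both edge energy and object-detection confidence.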
Regarding claim 2, Saxena in view of Lee, Kanazawa, Seo, Zhang and Satoshi teaches the computer-implemented method of claim 1, further comprising: aligning the external sensor feed, of an identified viewing direction of a user, with the real-time rendering of the GAN enabled adaptation of the surrounding area on the transparent display layer of the windscreen (Saxena, paragraphs [0067] and [0069]). Saxena describes the alignment of the overlaid image and the user's view gathered through the use of "sensors or object detection," which is considered equivalent to an alignment of external sensors such as those described in paragraph [0027], and the real-time rendering of its surrounding area adaptation.

Regarding claim 3, Saxena in view of Lee, Kanazawa, Seo, Zhang and Satoshi teaches the computer-implemented method of claim 1, further comprising: identifying a minimum amount of GAN enabled adaptation of the surrounding area data to render, on the transparent display layer of the windscreen, that is adequate to remove an identified visibility issue (Saxena, [0065] & Fig. 4). Saxena teaches identification of a minimum amount of image to alter by describing a process which alters a portion of an image to alleviate visibility issues; this implies a minimum amount of adaptation to apply, as the system must identify the problem area of the image to alleviate a visibility issue.

Product claims 8-10 are drawn to the product corresponding to the method of claims 1-3. Therefore, product claims 8-10 correspond to method claims 1-3 and are rejected for the same reasons of obviousness as used above. Apparatus claims 15-17 are drawn to the apparatus corresponding to the method of claims 1-3. Therefore, apparatus claims 15-17 correspond to method claims 1-3 and are rejected for the same reasons of obviousness as used above.
Regarding claim 7, the combination of Saxena in view of Lee, Kanazawa, Seo, Zhang and Satoshi teaches the computer-implemented method of claim 1, further comprising: updating, continually, the series of new synthetic images (Saxena, paragraph [0026]: "to continually 'overlay' the object with the interposed image") and projecting the series of new synthetic images onto the transparent display layer of the windscreen (Saxena, paragraph [0037]: "the windshield comprises an active transparent display capable of displaying images"). Seo further teaches a high frame rate supported by the GAN enabled adaptation, wherein a high frame rate comprises temporarily increasing a capture rate, per second, of the one or more sensors (paragraph [0090]). Seo is considered analogous to the claimed invention as it is in the same field of image data processing. Seo describes the use of multiple possible capture rates; it would have been obvious to one of ordinary skill in the art, before the effective filing date, to specify the use and ordering of these rates to be temporarily increasing. Product claim 14 is drawn to the product corresponding to the method of claim 7. Therefore, product claim 14 corresponds to method claim 7 and is rejected for the same reasons of obviousness as used above.

Claims 4, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Saxena in view of Lee, Kanazawa, Seo, Zhang and Satoshi, and in further view of Bruemmer (US 2015/0285646 A1).

Regarding claim 4, Saxena in view of Lee, Kanazawa, Seo, Zhang and Satoshi teaches the computer-implemented method of claim 3. Seo further teaches identifying a minimum amount of GAN enabled adaptation of the surrounding area data to render, on the transparent display layer of the windscreen, that is adequate to remove an identified visibility issue based on: data collected from one or more vehicles in a multi-vehicle ecosystem (Seo, [0028], second sentence).
Seo is considered analogous to the claimed invention because it is in the same field of image data processing. Therefore, it would have been obvious to one of ordinary skill in the art to utilize multiple vehicles to improve data collection.

Saxena in view of Lee, Kanazawa, Seo, Zhang and Satoshi fails to teach wherein the one or more vehicles collaborate with each other based on a relative position and direction of the one or more vehicles in the multi-vehicle ecosystem. However, Bruemmer teaches this limitation (Bruemmer, claim 12). Bruemmer describes the collaboration of vehicles utilizing relative position and spatial position along a path to coordinate movement of multiple vehicles; the use of spatial position along a path for this purpose inherently implies relative direction. Bruemmer is considered analogous to the claimed invention because it is in the field of image data processing. Therefore, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Bruemmer, which uses relative direction and position of vehicles to collaborate in multi-vehicle data collection, with the combination of Seo, Saxena, Lee and Kanazawa.

Product claim 11 is drawn to the product corresponding to the method of claim 4. Therefore, product claim 11 corresponds to method claim 4 and is rejected for the same reasons of obviousness as used above. Apparatus claim 18 is drawn to the apparatus corresponding to the method of claim 4. Therefore, apparatus claim 18 corresponds to method claim 4 and is rejected for the same reasons of obviousness as used above.

Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Saxena in view of Lee, Kanazawa, Seo, Zhang and Satoshi, and further in view of Moisel (GB 2406735 A).
Regarding claim 5, Saxena in view of Lee, Kanazawa, Seo, Zhang and Satoshi teaches the GAN enabled windscreen as described in claim 1, but fails to teach setting a maximum velocity limit of the vehicle to execute the GAN enabled adaptation to restore visibility to the user. However, Moisel teaches setting a maximum velocity limit of the vehicle to execute the adaptation to restore visibility to the user (Moisel, pg. 9, lines 6-11). Moisel is considered analogous to the claimed invention because both are in the same field of vehicle instruments and improving vehicle visibility. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have implemented a maximum velocity limit of the vehicle for enabling the system, for the purpose of safety and of mitigating overreliance on the system, which may not function as well at high speeds, as described in Moisel.

Product claim 12 is drawn to the product corresponding to the method of claim 5. Therefore, product claim 12 corresponds to method claim 5 and is rejected for the same reasons of obviousness as used above. Apparatus claim 19 is drawn to the apparatus corresponding to the method of claim 5. Therefore, apparatus claim 19 corresponds to method claim 5 and is rejected for the same reasons of obviousness as used above.

Response to Arguments

Applicant's arguments with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Additionally, Applicant's arguments on page 11, with respect to the "specific multi-feature, weighted, normalized visibility-scoring computation" described as a "weighted normalization scoring pipeline, including the use of CNN, Gabor, LBP, and SIFT to generate the visibility score and its normalization to the [-1, +1] scale," are not representative of the claimed subject matter. The amended claims only include limitations describing the use of a CNN and Gabor filter-based features. For this reason, the above rejection and response to arguments do not reference or respond to the LBP, SIFT and weighted normalization features described in the Applicant's response and the previous interview.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Dilek, E., & Dener, M. (2023). Computer Vision Applications in Intelligent Transportation Systems: A Survey. Sensors (Basel, Switzerland), 23(6), 2938. https://doi.org/10.3390/s23062938.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aidan W McCoy, whose telephone number is (571) 272-5935. The examiner can normally be reached 8:00 AM-5:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/AIDAN W MCCOY/
Examiner, Art Unit 2611

/TAMMY PAIGE GODDARD/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

May 18, 2023: Application Filed
Mar 10, 2025: Non-Final Rejection (§103)
Jun 11, 2025: Applicant Interview (Telephonic)
Jun 11, 2025: Examiner Interview Summary
Jun 13, 2025: Response Filed
Jul 02, 2025: Final Rejection (§103)
Aug 28, 2025: Examiner Interview Summary
Aug 28, 2025: Applicant Interview (Telephonic)
Aug 29, 2025: Request for Continued Examination
Sep 03, 2025: Response after Non-Final Action
Sep 15, 2025: Non-Final Rejection (§103)
Dec 08, 2025: Applicant Interview (Telephonic)
Dec 08, 2025: Examiner Interview Summary
Dec 09, 2025: Response Filed
Feb 19, 2026: Final Rejection (§103)
Mar 09, 2026: Applicant Interview (Telephonic)
Mar 09, 2026: Examiner Interview Summary
Mar 10, 2026: Request for Continued Examination
Mar 12, 2026: Response after Non-Final Action
Mar 18, 2026: Non-Final Rejection (§103, current)


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 50% (99% with interview, +100.0% lift)
Median Time to Grant: 2y 9m
PTA Risk: High

Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
