Prosecution Insights
Last updated: April 19, 2026
Application No. 18/675,591

ADAPTIVE TEMPORAL IMAGE FILTERING FOR RENDERING REALISTIC ILLUMINATION

Non-Final OA — §103, §DP
Filed: May 28, 2024
Examiner: WU, MING HAN
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 3 (Non-Final)

Grant Probability: 76% (Favorable)
OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (282 granted / 370 resolved; +14.2% vs TC avg — above average)
Interview Lift: +23.3% for resolved cases with an interview
Typical Timeline: 2y 8m average prosecution; 35 applications currently pending
Career History: 405 total applications across all art units

The arithmetic behind these figures is sketched below.
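The headline figures above reduce to simple arithmetic. A minimal sketch, assuming only the counts shown on this page (the separate with-interview and without-interview allow rates are not published here, so the +23.3% lift is taken as given):

```python
# Back-of-the-envelope arithmetic for the examiner stats above. The counts
# (282 granted / 370 resolved, +14.2% delta) come from the page; "tc_avg"
# is back-computed from the delta and is therefore only an estimate.

granted, resolved = 282, 370
career_rate = 100.0 * granted / resolved      # ~76.2%, displayed as 76%
tc_avg = career_rate - 14.2                   # implied Tech Center average, ~62%

print(f"career allow rate: {career_rate:.1f}% ({career_rate - tc_avg:+.1f} pts vs TC avg)")
# The +23.3% interview lift compares allow rates of resolved cases with and
# without an interview; the two underlying rates are not shown on this page.
```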

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 68.3% (+28.3% vs TC avg)
§102: 2.1% (-37.9% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 370 resolved cases. The relationship between the rates and the deltas is sketched below.
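A short sketch of how the per-statute deltas relate to the rates above. The Tech Center averages are back-computed (rate minus delta) and are estimates, not values published on this page:

```python
# Per-statute rejection rates and deltas read off the page; tc_avg is the
# implied Tech Center average estimate recovered from rate - delta.

rates  = {"101": 7.8, "103": 68.3, "102": 2.1, "112": 12.6}       # examiner, %
deltas = {"101": -32.2, "103": 28.3, "102": -37.9, "112": -27.4}  # vs TC avg, pts

for statute in rates:
    tc_avg = rates[statute] - deltas[statute]
    print(f"§{statute}: {rates[statute]:.1f}% vs TC avg ~{tc_avg:.1f}%")
```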

Office Action

§103, §DP
DETAILED ACTION

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/08/2025 has been entered.

DOUBLE PATENTING

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1 - 7 of the current application are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 - 13 of US Patent 11,651,547 B2.
Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations of the current application claims are essentially covered by the limitations of the patent claims.

Claims 8 - 14 of the current application are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 14 - 27 of US Patent 11,651,547 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations of the current application claims are essentially covered by the limitations of the patent claims.

Claims 15 - 20 of the current application are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 14 - 27 of US Patent 11,651,547 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations of the current application claims are essentially covered by the limitations of the patent claims.

Also, shown below is a mapping between the limitations of independent claim 1 of current application U.S. Patent Application 17/324,438 and independent claim 1 of US Patent 11,651,547 B2.

Current application, claim 1: A computer-implemented method, comprising: performing, based at least on one or more motion vectors of a current frame, a backward projection pass to locate at least one pixel in a current frame having correlated pixel values relative to a previous frame in a sequence of frames; updating, based on the at least one pixel and the correlated pixel values, at least one surface parameter of a geometry buffer (G-buffer) for the current frame; and rendering an image using the G-buffer.

Patent, claim 1: A computer-implemented method, comprising: generating one or more motion vectors for a current frame; executing a backward projection pass for a set of two or more adjacent pixels in the current frame, using at least one previous frame of one or more previous frames; locating, from the backward projection pass, one or more matching surfaces in common between the current frame and the one or more previous frames using the motion vectors; patching a G-buffer of the current frame based, at least in part, upon information corresponding to the one or more matching surfaces; determining, based at least in part on the patched G-buffer, one or more differences in light between the current frame and the at least one previous frame; rendering an image based at least in part on the one or more differences in light; and outputting the rendered image for display on a display device.

Current application, claim 8: A processor, comprising: one or more circuits to: perform, based at least on one or more motion vectors of a current frame, a backward projection pass to locate at least one pixel in a current frame having correlated pixel values relative to a previous frame in a sequence of frames; update, based on the at least one pixel and the correlated pixel values, at least one surface parameter of a geometry buffer (G-buffer) for the current frame; and render an image using the G-buffer.

Patent, claim 14: A computer-implemented system comprising: one or more processors; and one or more memory devices that store instructions that, when executed by the one or more processors, cause the one or more processors to execute operations comprising: generating motion vectors for a current frame; executing a backward projection pass for a set of two or more adjacent pixels in the current frame, using at least one previous frame of one or more previous frames; locating, from the backward projection pass, one or more matching surfaces in common between the current frame and the one or more previous frames using the motion vectors; patching a G-buffer of the current frame based on information corresponding to the one or more matching surfaces; determining, based at least in part on the patched G-buffer, one or more differences in light between the current frame and the at least one previous frame; rendering an image based at least in part on the one or more differences in light; and outputting the rendered image for display on a display device.

Current application, claim 15: A system comprising: at least one processor; and memory including instructions that, when executed by the at least one processor, perform, based at least on one or more motion vectors of a current frame, one or more operations including: performing a backward projection pass to locate at least one pixel in a current frame having correlated pixel values relative to a previous frame in a sequence of frames; updating, based on the at least one pixel and the correlated pixel values, at least one surface parameter of a geometry buffer (G-buffer) for the current frame; and rendering an image using the G-buffer.

Patent, claim 14: (reproduced above).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 - 12 and 14 - 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (CN 112258600 A) in view of Kerzner et al. (US 2015/0279089 A1) and Kim et al. (US 2009/0058991 A1).

Regarding claims 1 - 7, see the rejections of claims 8 - 14, respectively.

Regarding claim 8, Zhang discloses a processor, comprising: one or more circuits to (page 2, paragraph 2 - a computer with SLAM technology; it is known that a computer has a CPU and instructions stored in memory to perform the following): perform a backward projection pass to locate at least one pixel in a current frame having correlated pixel values relative to a previous frame in a sequence of frames (page 4, last paragraph, and page 5, paragraph 7 - Step 2.2.1: extracting and matching the characteristics of the previous frame image and the current frame image; π⁻¹ is the back projection function of the pixel coordinate, and the corresponding depth information is restored to the three-dimensional coordinate point; φ⁻¹ is a back projection function for projecting the pixel point into a three-dimensional ray; Step 3.3: further optimizing the result of step 3.2, extracting the edge point feature and the surface point feature to match, and calculating the error term; the feature point matched with the last key frame is less than Nmax, avoiding redundant storage; Nmax is the maximum matching threshold value; [equation image omitted]; page 5, paragraph 8 - the subscripts k and k-1 represent the current frame and the previous frame; T represents the relative pose between two frames, i.e., "in a sequence of frames"); and update, based on the at least one pixel and the correlated pixel values (page 4, paragraph 1 - matching the visual characteristic points of the previous frame to the current frame in the image data; page 4, paragraph 10, and page 15, paragraph 5 - uses the laser radar to emit laser and receives the reflected light of the object to be measured to measure the distance, obtaining the laser point cloud data; step 2: using the external reference result of laser radar and camera combined calibration; projecting the laser point cloud to the pixel plane; converting into the depth information of the image; taking the depth information as the scale constraint of the visual distance meter; matching the visual characteristic points of the previous frame to the current frame in the image data; and calculating the visual motion information).

Zhang does not disclose updating, based on the at least one pixel, at least one surface parameter of a geometry buffer (G-buffer) for the current frame, or rendering an image using the G-buffer. Kerzner discloses these limitations ([0017] - one or more surfaces may be identified in the pixel at block 36 based on the fragment data, wherein illustrated block 38 provides for storing each identified surface as an entry in a G-buffer corresponding to the pixel if a memory overflow condition (e.g., maximum of two or three surfaces) for the G-buffer is not met; additionally, a weight may be assigned to each surface in the G-buffer at block 40 based on the coverage of the pixel by the surface and the occlusion status (e.g., occluded by another surface, not occluded by another surface) of the surface; in the illustrated example, block 42 resolves a color for the pixel based on the assigned weights; [0061] - output the visual content based on the G-buffer). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Zhang with updating, based on the at least one pixel, at least one surface parameter of a geometry buffer (G-buffer) for the current frame, and rendering an image using the G-buffer, as taught by Kerzner. The motivation for doing so is to improve performance, as taught by Kerzner.

Zhang in view of Kerzner does not disclose performing, based at least on one or more motion vectors of a current frame, a backward projection. Kim discloses this limitation ([0064] - a back-projection scheme is used; [0018] - confirming matching image blocks, and the location of image blocks, between the input frame and the previous frame according to an average value of a corresponding image and a motion vector). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Zhang in view of Kerzner with performing, based at least on one or more motion vectors of a current frame, a backward projection, as taught by Kim. The motivation for doing so is as taught by Kim.
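For readers less familiar with the rendering vocabulary in these claims, the following is a minimal sketch of a backward projection pass that patches a G-buffer from a previous frame. It illustrates the claim language only; it is not code from the application or from Zhang, Kerzner, or Kim, and the array layout, tolerance values, and field names (depth, normal, albedo) are assumptions made for illustration.

```python
import numpy as np

def backward_projection_pass(motion_vectors, prev_gbuffer, cur_gbuffer,
                             depth_tol=0.01, normal_tol=0.9):
    """Patch the current frame's G-buffer using pixels reprojected into
    the previous frame along per-pixel motion vectors (illustrative)."""
    h, w = motion_vectors.shape[:2]          # motion_vectors: (h, w, 2)
    for y in range(h):
        for x in range(w):
            # Follow the motion vector backward to the previous frame.
            px = int(round(float(x - motion_vectors[y, x, 0])))
            py = int(round(float(y - motion_vectors[y, x, 1])))
            if not (0 <= px < w and 0 <= py < h):
                continue  # reprojected sample falls outside the frame
            # Accept the match only if depth and normal are correlated,
            # i.e., the two pixels plausibly see the same surface.
            depth_ok = abs(cur_gbuffer["depth"][y, x]
                           - prev_gbuffer["depth"][py, px]) < depth_tol
            normal_ok = np.dot(cur_gbuffer["normal"][y, x],
                               prev_gbuffer["normal"][py, px]) > normal_tol
            if depth_ok and normal_ok:
                # Patch a surface parameter (here albedo) from history;
                # a renderer would then shade from the patched G-buffer.
                cur_gbuffer["albedo"][y, x] = prev_gbuffer["albedo"][py, px]
    return cur_gbuffer
```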
Regarding claim 9, Zhang in view of Kerzner and Kim discloses all the limitations of claim 8. Zhang discloses wherein the correlated pixel values represent correlations in at least one feature of the current frame and of the previous frame (page 4, paragraph 1 - matching the visual characteristic points of the previous frame to the current frame in the image data).

Regarding claim 10, Zhang in view of Kerzner and Kim discloses all the limitations of claim 8. Zhang discloses wherein the correlated pixel values represent correlations in one or more of reflections, refractions, or shading affecting at least one feature subject to movement between the current frame and the previous frame (page 4, paragraph 10, and page 15, paragraph 5 - uses the laser radar to emit laser and receives the reflected light of the object to be measured to measure the distance, obtaining the laser point cloud data; step 2: using the external reference result of laser radar and camera combined calibration; projecting the laser point cloud to the pixel plane; converting into the depth information of the image; taking the depth information as the scale constraint of the visual distance meter; matching the visual characteristic points of the previous frame to the current frame in the image data; and calculating the visual motion information).

Regarding claim 11, Zhang in view of Kerzner and Kim discloses all the limitations of claim 8. Zhang discloses wherein the correlated pixel values represent correlations in locations of at least one feature of the current frame and of the previous frame (page 4, paragraph 1 - matching the visual characteristic points of the previous frame to the current frame in the image data).

Regarding claim 12, Zhang in view of Kerzner and Kim discloses all the limitations of claim 8. Kerzner discloses generating the correlated pixel values from temporal gradients representing differences between shading of one or more features from the current frame with respect to the previous frame ([0063] - wherein the deferred shader stage further includes a merge module to merge two or more identified surfaces together if the two or more identified surfaces have one or more of mutually exclusive coverage of the pixel, a common alignment in the pixel, or substantially equal depths in the pixel; [0024] - the first blue fragment 24b and the second blue fragment 24c might be merged into the same second surface 28b (e.g., subject to other pixel coverage and/or depth conditions being met) and the second surface 28b may be saved to the G-buffer at block 58; if, on the other hand, the alignment for the green fragment 24a is compared to the alignment for the first blue fragment 24b, it may be determined at block 54 that the green fragment 24a and the first blue fragment 24b do not have a common alignment, thus "current and previous frame" can be read on; in such a case, the first surface 28a may be created from the green fragment 24a). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Zhang in view of Kerzner with generating the correlated pixel values from temporal gradients representing differences between shading of one or more features from the current frame with respect to the previous frame, as taught by Kerzner. The motivation for doing so is to improve performance, as taught by Kerzner. (A sketch of a temporal-gradient computation appears after the cross-references below.)

Regarding claim 14, Zhang in view of Kerzner and Kim discloses all the limitations of claim 8. Zhang discloses wherein the correlated pixel values comprise one or more of depth and normals of a feature that comprises at least one surface in the current frame relative to the previous frame, or visibility buffer data between the current frame and the previous frame (page 3, paragraphs 1 and 2 - extracting the edge point feature and the surface point feature to match, calculating the error term; page 5, paragraph 2 - step 2.2.3: counting the number of the characteristic points with depth information in the previous frame; if less than the threshold value, returning the error information and using the uniform motion hypothesis; otherwise, performing PnP calculation on the feature points with depth information in the previous frame and the feature points without depth information in the current frame).

Regarding claims 15 - 19, see the rejections of claims 1 - 5, respectively. Regarding claim 20, see the rejection of claim 14.
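As flagged in the claim 12 discussion above, here is a hedged sketch of what a "temporal gradient" computation could look like: per-pixel shading differences between the current frame and its backward-projected previous frame. The function shape and names are illustrative assumptions, not code disclosed by the application or by Kerzner.

```python
import numpy as np

def temporal_gradients(cur_shading, prev_shading, motion_vectors):
    """Shading difference for each pixel against its reprojected location
    in the previous frame (out-of-frame samples are clamped to the edge)."""
    h, w = cur_shading.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Backward-project pixel coordinates along the motion vectors.
    px = np.clip(np.round(xs - motion_vectors[..., 0]).astype(int), 0, w - 1)
    py = np.clip(np.round(ys - motion_vectors[..., 1]).astype(int), 0, h - 1)
    # Large gradients flag pixels whose shading changed between frames.
    return cur_shading - prev_shading[py, px]
```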
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang (CN 112258600 A) in view of Kerzner et al. (US 2015/0279089 A1), Kim et al. (US 2009/0058991 A1), and ULUDAG Y (CN 110874858 A).

Regarding claim 13 (the processor of claim 8, wherein the one or more circuits are further to perform the recited operations), Zhang discloses determining the correlated pixel values for the current frame and using further pixel values that are determined (page 4, last paragraph, and page 5, paragraph 7 - Step 2.2.1: extracting and matching the characteristics of the previous frame image and the current frame image; Step 3.3: further optimizing the result of step 3.2, extracting the edge point feature and the surface point feature to match, and calculating the error term; the feature point matched with the last key frame is less than Nmax, avoiding redundant storage; Nmax is the maximum matching threshold value).

ULUDAG discloses determining a set of light rays projected or traced from a light source in a plurality of directions in the current frame and in the previous frames (page 11, paragraph 2 - in FIGS. 14A and 14B, there are a plurality of ray directions in the current and previous frames); determining a first one of pixel values from a location having incidence of at least one light ray from the set of light rays against a surface of a feature in the current frame (page 11, paragraph 2 - FIG. 14B is a block diagram of a 3D scene representation of FIG. 14A, in which some rays from neighbour pixels can be reused; as described above, when attempting to determine reflection information for a given pixel (e.g., pixel 1402B), some reflection information from nearby pixels from the previous frame can be reused; in the example shown, pixels 1402A and 1402C are within a threshold radius of pixel 1402B; the threshold radius can be configured, and in one implementation is about 6-8 pixels from the center pixel under examination; when calculating the reflection information of pixel 1402A (in the previous frame or the current frame), the ray projected from pixel 1402A is identified as intersecting with the object at point 1406A; the method of FIG. 13 can already use ray casting or ray tracing to find the intersection; similarly, when calculating the reflection information of pixel 1402C (in the previous frame or the current frame), the ray projected from pixel 1402C is identified as intersecting with the object at point 1406C); determining a second one of the pixel values from the at least one light ray being traced, or from a second light ray of the set of light rays being projected, after the surface of the feature, wherein the first one of the pixel values is associated with a direct illumination at the surface and the second one of the pixel values is associated with a shadow illumination after the surface (citing the same passage of page 11, paragraph 2); and using the first one of the pixel values and the second one of the pixel values for the current frame and, for the previous frame, in a manner associated with the first one of the pixel values and the second one of the pixel values of the current frame (again citing page 11, paragraph 2).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Zhang in view of Kerzner and Kim with these limitations as taught by ULUDAG. The motivation for doing so is to improve efficiency, as taught by ULUDAG.
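To unpack claim 13's direct/shadow distinction, the sketch below takes a first pixel value where a light ray first strikes a surface (direct illumination) and a second value from the same ray continued past that surface (whatever it reaches next is occluded, i.e., shadowed). The scene and hit helpers (trace_first_hit, shade) are hypothetical stand-ins, not an API from the application or any cited reference.

```python
EPS = 1e-4  # small offset to step past a surface before re-tracing

def direct_and_shadow_values(light_pos, directions, scene):
    """Pair each light ray's direct-illumination sample with a sample
    for the region shadowed behind the first surface it hits (sketch)."""
    values = []
    for d in directions:                      # rays projected from the light
        hit = scene.trace_first_hit(light_pos, d)   # assumed helper
        if hit is None:
            continue                          # ray leaves the scene unoccluded
        first_value = scene.shade(hit)        # direct illumination at the surface
        # Continue the same ray just past the hit point: the next thing it
        # reaches lies behind the first surface and is therefore in shadow.
        behind = scene.trace_first_hit(hit.point + EPS * d, d)
        second_value = scene.shade(behind, shadowed=True) if behind else None
        values.append((first_value, second_value))
    return values
```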
Response to Arguments

Claim Rejections Under 35 U.S.C. 103

Regarding claims 1, 8, and 16, the applicant asserts: "As discussed during the Examiner Interview, the Office alleges that Bao discloses 'perform[ing] [...] a backward projection pass to locate at least one pixel in the current frame having correlated pixel values relative to a previous frame,' citing to Bao's page 4. See Final Office Action, pp. 8-9. However, as we stated in our recent response, Bao describes forwards projecting and backwards projecting pixels between a three-dimensional space and a two-dimensional space (i.e., to a pixel coordinate system). This is not the same as backwards projection from a current frame to a previous frame, e.g., between two frames in a sequence. For at least these reasons, Bao does not disclose the claim elements discussed herein. The Office alleges that the newly added reference, Kim, discloses 'perform[ing], based at least on one or more motion vectors of a current frame, a backward projection pass,' citing to Kim's paragraph [0064]. See Final Office Action, p. 10. Although Kim discloses a 'motion vector' between a current frame and a previous frame, Kim does not describe using this motion vector to perform a 'backward projection pass.' Kim discloses only converting a frame 'so as to be projected to a curved surface having the same focal distance (e.g., a cylindrical surface, a spherical surface, etc.).' See Kim, paragraph [0046]. Therefore, Kim does not disclose performing a 'backward projection pass,' let alone 'based at least on one or more motion vectors.' For at least this reason, Kim does not disclose the claim elements discussed herein."

The examiner disagrees. During patent examination, the pending claims must be given their broadest reasonable interpretation consistent with the specification. See MPEP § 2111. Further, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993); see also MPEP § 2145(VI). Zhang discloses, at page 4, last paragraph, and page 5, paragraph 7: Step 2.2.1: extracting and matching the characteristics of the previous frame image and the current frame image; π⁻¹ is the back projection function of the pixel coordinate, and the corresponding depth information is restored to the three-dimensional coordinate point; φ⁻¹ is a back projection function for projecting the pixel point into a three-dimensional ray; Step 3.3: further optimizing the result of step 3.2, extracting the edge point feature and the surface point feature to match, and calculating the error term; the feature point matched with the last key frame is less than Nmax, avoiding redundant storage; Nmax is the maximum matching threshold value; [equation image omitted]. At page 5, paragraph 8, the subscripts k and k-1 represent the current frame and the previous frame, and T represents the relative pose between two frames, i.e., "in a sequence of frames."

Regarding dependent claims 2 - 7, 9 - 14, and 16 - 20, the Applicant asserts that they are not obvious based on their dependency from independent claims 1, 8, and 15, respectively. The examiner respectfully cannot concur with the Applicant, for the same reasons noted in the examiner's response to the arguments asserted for claims 1, 8, and 15, respectively.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ming Wu, whose telephone number is (571) 270-0724. The examiner can normally be reached Monday - Friday, 9:30am - 6:00pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MING WU/
Primary Examiner, Art Unit 2618

Prosecution Timeline

May 28, 2024: Application Filed
Jan 09, 2025: Non-Final Rejection — §103, §DP
Apr 08, 2025: Applicant Interview (Telephonic)
Apr 08, 2025: Examiner Interview Summary
Apr 14, 2025: Response Filed
May 02, 2025: Final Rejection — §103, §DP
May 20, 2025: Examiner Interview (Telephonic)
May 20, 2025: Examiner Interview Summary
Jun 24, 2025: Examiner Interview Summary
Jun 24, 2025: Applicant Interview (Telephonic)
Sep 08, 2025: Request for Continued Examination
Sep 10, 2025: Response after Non-Final Action
Nov 01, 2025: Non-Final Rejection — §103, §DP
Feb 24, 2026: Examiner Interview Summary
Feb 24, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597109 — SYSTEMS AND METHODS FOR GENERATING THREE-DIMENSIONAL MODELS USING CAPTURED VIDEO (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579702 — METHOD AND SYSTEM FOR ADAPTING A DIFFUSION MODEL (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579623 — IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567185 — Method and system of creating and displaying a visually distinct rendering of an ultrasound image (granted Mar 03, 2026; 2y 5m to grant)
Patent 12548202 — TEXTURE COORDINATE COMPRESSION USING CHART PARTITION (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 99% (+23.3%)
Median Time to Grant: 2y 8m
PTA Risk: High

Based on 370 resolved cases by this examiner. Grant probability derived from career allow rate. How the 99% figure appears to be derived is sketched below.
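The 99% "With Interview" figure appears to be the base grant probability plus the interview lift, capped at 100%. This derivation is an inference from the numbers shown on this page, not a formula the page documents:

```python
# Inferred derivation of the "With Interview" projection; the page does
# not publish the actual formula, so treat this as an assumption.

base_probability = 76.0   # displayed grant probability, %
interview_lift = 23.3     # displayed interview lift, percentage points

with_interview = min(base_probability + interview_lift, 100.0)
print(f"with interview: {with_interview:.0f}%")   # -> 99%, matching the page
```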
