Prosecution Insights
Last updated: April 19, 2026
Application No. 18/241,436

FOUR-DIMENSIONAL TESSERACT IMAGE PROCESSING SYSTEM AND METHOD

Non-Final OA: §103, §112
Filed: Sep 01, 2023
Examiner: WAMBST, DAVID ALEXANDER
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: KMB Telematics Inc.
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (18 granted / 27 resolved) — above average, +4.7% vs TC avg
Interview Lift: strong, +47.4% among resolved cases with interview
Typical Timeline: 2y 11m avg prosecution; 25 currently pending
Career History: 52 total applications across all art units

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 56.6% (+16.6% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 27 resolved cases

Office Action

Grounds: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 15-17 and 19 are objected to because of the following informalities: “The image processing system of clam” in line 1 should read “The image processing system of claim”. Claim 51 is objected to because of the following informalities: “method of” in line 1 should read “The method of”.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 19 and 51 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claims 19 and 51, "the first range" is recited in line 4. There is insufficient antecedent basis for this limitation in the claim. It is unclear what first range is being referred to in this instance, as no specific first range has been established in any claim from which it depends. “The first range” should be changed to read “a first range”, or the claims from which claim 19 depends should be amended to include “a first range”.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-19, 23-51, and 55-58, as best understood, are rejected under 35 U.S.C. 103 as being unpatentable over Jakowatz et al. (NPL, “Refocus of constant velocity moving targets in synthetic aperture radar imagery”, published 1998, pdf attached) in view of Cohen et al. (NPL, “SUMMeR: Sub-Nyquist MIMO Radar”, published 2016, pdf attached).

Regarding claim 1, Jakowatz teaches an image processing system comprising (Pg. 1, “The refocus is accomplished by application of a two-dimensional phase function to the phase history data obtained via Fourier transformation of an image chip that contains the blurred moving target data.”, image processing is performed): a memory storing computer-program instructions; one or more processors coupled to the memory that execute the computer-program instructions to (performing image processing indicates the usage of a processor coupled with memory): gather data samples associated with an antenna element (Pg. 1, “We consider the general problem of imaging a target (typically a vehicle) moving at constant speed in a straight line, with a SAR system that forms imagery collected in the spotlight mode.”), wherein the data samples correspond to data at a specific range (Pg. 3, “The formed SAR image is described by g(x,y), where x and y denote the ground plane cross-range and range dimensions, respectively.”); transform the data samples into a range representation for a field of view (Pg. 1, “This is sometimes referred to as a Doppler shift, and its classic demonstration is the image of a railroad train traveling in the range direction appearing displaced in cross-range off of its tracks; 2) the target signature is defocused (blurred) in the cross-range dimension by an amount that depends upon the cross-range component of the target velocity; and 3) the target is imparted a two-dimensional defocus (i.e., in both range and cross-range dimensions), the magnitude of which is determined by the range component of the target velocity.”; Pg. 5, “In this section we propose an algorithm for automatically refocusing a section (chip) of image domain data that is thought to contain the blurred signature of a moving target undergoing constant velocity.”, the image chip containing the blurred moving target data is the range representation for a field of view); sample the range representation at each range of one or more ranges within the field of view to obtain a spatial frequency representation of the field of view (Pg. 3, “As shown in Figure 1(b) the phase history data are denoted as G(X, Y), where X and Y are the ground-plane cross-range and range spatial frequencies, respectively. The formed SAR image is described by g(x,y), where x and y denote the ground plane cross-range and range dimensions, respectively.”, the phase history data is the spatial frequency representation); transform the spatial frequency representation to obtain spatial-velocity values for the field of view (Pg. 1, “By considering separately the phase effects of the range and cross-range components of the target velocity vector, we show how the appropriate phase correction term can be derived as a two-parameter function.”; Pg. 3, “The net effect will be a 2-D function that completely accounts for all of the effects of constant velocity motion of the target, and that is parametrized by the two components of the target velocity.”); perform motion correction on one or more targets in an arrangement of spatial-velocity values to generate a motion corrected representation of the field of view (Pg. 1, “The refocus is accomplished by application of a two-dimensional phase function to the phase history data obtained via Fourier transformation of an image chip that contains the blurred moving target data. By considering separately the phase effects of the range and cross-range components of the target velocity vector, we show how the appropriate phase correction term can be derived as a two-parameter function.”); and generate from the motion corrected representation of the field of view, a representation of the field of view at the specific range (Fig. 4, shows the motion corrected representation of the field of view at the specific range; Pg. 5, “When the optimization procedure is complete, we end up with a well-focused image of the moving target, as well as estimates for the two velocity components.”).

Jakowatz does not explicitly disclose the usage of a plurality of antenna elements. Cohen teaches to gather data samples associated with a plurality of antenna elements (Pg. 1, Col. 1, “Multiple input multiple output (MIMO) [1] radar, which presents significant potential for advancing state-of-the-art modern radar in terms of flexibility and performance, poses new theoretical and practical challenges. This radar architecture combines multiple antenna elements both at the transmitter and receiver where each transmitter radiates a different waveform.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jakowatz to incorporate the teachings of Cohen to include a plurality of antenna elements.
Jakowatz teaches a motion correction algorithm using SAR imagery, but does not explicitly disclose that multiple antennas are used for the imagery. One of ordinary skill in the art would recognize that combining the motion correction algorithm of Jakowatz with newer MIMO-based radar systems, such as the one disclosed in Cohen, is a well-known, routine design choice to improve resolution and performance.

Regarding claim 2, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as wherein the plurality of antenna elements are arranged in a multi-input multi-output (MIMO) configuration (Cohen; Pg. 1, Col. 1, “Multiple input multiple output (MIMO) [1] radar, which presents significant potential for advancing state-of-the-art modern radar in terms of flexibility and performance, poses new theoretical and practical challenges. This radar architecture combines multiple antenna elements both at the transmitter and receiver where each transmitter radiates a different waveform.”).

Regarding claim 3, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as wherein the plurality of antenna elements includes a plurality of reception antennas (Cohen; Fig. 1, Pg. 2, Col. 1, “The traditional approach to collocated MIMO adopts a virtual ULA structure [26], where R receivers”, multiple receiver antennas are disclosed).

Regarding claim 4, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as wherein the one or more ranges correspond to reflections from respective objects located at the one or more ranges (Figs. 2-4).

Regarding claim 5, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as wherein the one or more processors are configured to render the field of view at the one or more ranges based on data of the motion corrected representation (Fig. 4, shows the field of view being rendered based on data of the motion corrected representation).
Regarding claim 6, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as wherein the one or more processors execute the computer-program instructions to: perform a summation on data of the motion corrected representation of the field of view to generate one or more images of the field of view at the specific range (Fig. 4, Pg. 5, “For each guess at the two components of moving target velocity, the conjugates of the hyperbolic and quadratic phase terms are applied to the Fourier transform of the mover's image chip. Upon inverse Fourier transformation, an estimate of the refocused image chip is produced and a value for the contrast is produced.”, an inverse Fourier transformation is performed, which sums the frequency components to produce the final focused image).

Regarding claim 7, Jakowatz as modified above teaches all of the elements of claim 6, as stated above, as well as wherein the summation comprises a coherent summation on the motion corrected representation of the field of view (Figs. 3-4, Pg. 5, “For each guess at the two components of moving target velocity, the conjugates of the hyperbolic and quadratic phase terms are applied to the Fourier transform of the mover's image chip. Upon inverse Fourier transformation, an estimate of the refocused image chip is produced and a value for the contrast is produced.”, coherent summation is necessarily performed in order to generate a high-resolution image that has been refocused as shown in the above figures).

Regarding claim 8, Jakowatz as modified above teaches all of the elements of claim 6, as stated above, as well as wherein the one or more images comprise one or more two-dimensional (2D) images of the field of view (Fig. 4).

Regarding claim 9, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as wherein the one or more ranges comprise a plurality of ranges (Pg. 4, “The effect of the hyperbolic term will be to blur the image domain data in both range and cross-range dimensions, while the effect of the linear term will be to translate the formed moving target signature in the image cross-range dimension.”, multiple ranges are disclosed as being used in the processing).

Regarding claim 10, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as wherein the data samples comprise samples of a modulated radar signal (Pg. 3, “Each pulse is transmitted, received, and demodulated at a particular position & along the synthetic aperture, yielding one column of Fourier transform (phase history) data for the associated value of spatial frequency X.”, the radar signal is demodulated, necessarily meaning that the signal was a modulated radar signal).

Regarding claim 11, Jakowatz as modified above teaches all of the elements of claim 10, as stated above, as well as wherein the modulated radar signal comprises a time modulated radar signal or a frequency modulated radar signal (Cohen; Pg. 3, Col. 1, “Each transmitting antenna sends P pulses, such that the mth transmitted signal is given by (Eq 1) where hm (t), 0 ≤ m ≤ T −1 are narrowband and orthogonal pulses with bandwidth Bh, modulated with carrier frequency fc.”. One of ordinary skill in the art would also understand that the high range resolution necessary for the SAR images in Jakowatz is well known to be achieved by using frequency modulation signals).

Regarding claim 12, Jakowatz as modified above teaches all of the elements of claim 10, as stated above, as well as wherein the modulated radar signal comprises a time domain multiplexed (TDM) modulated radar signal or a frequency division multiplexing (FDM) modulated radar signal (Cohen; Pg. 3, Col. 1, “This property implies that the orthogonal signals cannot overlap in frequency [19], leading to FDMA. Alternatively, time invariant orthogonality can be approximately achieved using CDMA.”, when there are a plurality of antennas, the signals need to be separated/orthogonalized).

Regarding claim 13, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as further comprising an analog to digital converter that samples the data samples associated with the plurality of antenna elements when a first set of data samples are generated (Cohen; Pgs. 2-3, Cols. 2-1, “Low-rate data acquisition is based on the ideas of Xampling [15], [16], which consist of an ADC performing analog prefiltering of the signal before taking point-wise samples.”, digital processing performed on samples of the received signal necessitates the usage of an analog to digital converter. One of ordinary skill in the art would recognize that this applies to the system of Jakowatz as well).

Regarding claim 14, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as wherein signals associated with the plurality of antenna elements (Cohen; Pg. 1, Col. 1, “Multiple input multiple output (MIMO) [1] radar, which presents significant potential for advancing state-of-the-art modern radar in terms of flexibility and performance, poses new theoretical and practical challenges. This radar architecture combines multiple antenna elements both at the transmitter and receiver where each transmitter radiates a different waveform.”) are sampled to generate a plurality of data samples (Pg. 1, “That is, we will assume that the image resolution and patch size are such that the processing step of polar-to-Cartesian interpolation (also known as polar reformatting) of the phase history samples prior to Fourier transformation is not required.”, multiple samples are disclosed) at a plurality of ranges (Pg. 4, “The effect of the hyperbolic term will be to blur the image domain data in both range and cross-range dimensions, while the effect of the linear term will be to translate the formed moving target signature in the image cross-range dimension.”).

Regarding claim 15, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as wherein the one or more processors execute the computer-program instructions to: arrange data samples for each range of the one or more ranges along the one or more spatial dimensions to generate one or more first representations of the field of view at the one or more ranges (Pg. 4, “The formed SAR image is described by g(x,y), where x and y denote the ground plane cross-range and range dimensions, respectively.”).

Regarding claim 16, Jakowatz as modified above teaches all of the elements of claim 15, as stated above, as well as wherein the one or more first representations comprise one or more first data cubes of the field of view at the one or more ranges (Cohen; Pg. 2, Col. 2, “we separate the three dimensions (range, azimuth and Doppler) by adapting OMP to matrix form, with several matrix system equations.”, the raw data is initially organized into a three-dimensional structure, or “data cube”).

Regarding claim 17, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as wherein the one or more processors execute the computer-program instructions to: arrange the velocity values for each range of the one or more ranges along the one or more spatial dimensions to generate one or more second representations of the field of view at the one or more ranges (Pg. 4, “In addition, the range component gives rise to a linear phase in the X dimension that accounts for the Doppler shift of the mover. The cross-range component of target velocity will be shown to impose a one-dimensional quadratic phase term in X onto the target phase history. The net effect will be a 2-D function that completely accounts for all of the effects of constant velocity motion of the target, and that is parametrized by the two components of the target velocity.”).

Regarding claim 18, Jakowatz as modified above teaches all of the elements of claim 17, as stated above, as well as wherein the one or more second representations comprise one or more second data cubes of the field of view at the one or more ranges (Cohen; Pg. 9, Col. 1, “To recover jointly the range, azimuth and Doppler frequency of the targets, we apply the concept of Doppler focusing from [12] to our setting.”, Jakowatz discloses a representation across a 2D image plane; however, for a high-resolution MIMO system, this analysis must be performed on data organized in a structure that separates all three spatial-velocity dimensions. The joint recovery operation of Cohen discloses that the data is structured as the three-dimensional data cube. One of ordinary skill in the art would recognize that when performing an analysis to represent the radar data in a MIMO system, a three-dimensional data structure is necessary, as disclosed by Cohen).

Regarding claim 19, Jakowatz as modified above teaches all of the elements of claim 17, as stated above, as well as wherein the one or more processors execute the computer-program instructions to: identify one or more targets (Pg. 1, “In this paper we address the problem of refocusing a blurred signature that has by some means been identified as a moving target.”) in the one or more second representations of the field of view at the first range (Pg. 4, “In addition, the range component gives rise to a linear phase in the X dimension that accounts for the Doppler shift of the mover. The cross-range component of target velocity will be shown to impose a one-dimensional quadratic phase term in X onto the target phase history. The net effect will be a 2-D function that completely accounts for all of the effects of constant velocity motion of the target, and that is parametrized by the two components of the target velocity.”).

Regarding claim 23, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as wherein the transform comprises a fast Fourier transform (FFT) (Cohen; Pg. 9, Col. 1, “Note that step 1 can be performed using fast Fourier transform (FFT)”).

Regarding claim 24, Jakowatz as modified above teaches all of the elements of claim 23, as stated above, as well as wherein the FFT is a three-dimensional (3D) FFT (Cohen; Pg. 9, Col. 1, “Note that step 1 can be performed using fast Fourier transform (FFT)”, step 1 is part of an algorithm for sparse 3D recovery, indicating the usage of a 3D FFT).

Regarding claim 25, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as wherein the motion correction is performed by decoupling spatial data from Doppler data (Pg. 1, “By considering separately the phase effects of the range and cross-range components of the target velocity vector, we show how the appropriate phase correction term can be derived as a two-parameter function.”).

Regarding claim 26, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as wherein the motion correction is performed by decoupling spatial data from Doppler data by decoupling the velocity values from the one or more spatial dimensions orthogonal to a first range (Pg. 4, “In addition, the range component gives rise to a linear phase in the X dimension that accounts for the Doppler shift of the mover. The cross-range component of target velocity will be shown to impose a one-dimensional quadratic phase term in X onto the target phase history.”).

Regarding claim 27, the recited system performs virtually the same function as the image processing system of claim 1. It is rejected under the same analysis.
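The range-azimuth-Doppler "data cube" and the 3D FFT discussed for claims 16-18 and 23-24 can be illustrated with a toy simulation. All dimensions and bin values below are hypothetical, chosen only to show how a single point target separates cleanly into its (angle, Doppler, range) bin; this is not code from Cohen.

```python
import numpy as np

# Hypothetical cube sizes: 8 RX elements x 32 pulses x 64 fast-time samples.
n_rx, n_pulse, n_samp = 8, 32, 64
rx = np.arange(n_rx)[:, None, None]
pulse = np.arange(n_pulse)[None, :, None]
samp = np.arange(n_samp)[None, None, :]

# One point target: angle, Doppler, and range each contribute a
# complex-exponential phase ramp along one axis of the raw data cube.
angle_bin, doppler_bin, range_bin = 3, 10, 40
cube = np.exp(2j * np.pi * (angle_bin * rx / n_rx
                            + doppler_bin * pulse / n_pulse
                            + range_bin * samp / n_samp))

# A 3-D FFT separates the three dimensions in one step; the target
# appears as a single peak at its (angle, Doppler, range) bin.
spectrum = np.abs(np.fft.fftn(cube))
peak = np.unravel_index(spectrum.argmax(), spectrum.shape)
# peak == (3, 10, 40)
```

The same separability is why per-axis FFT processing (angle, then Doppler, then range) yields the joint recovery the rejection describes.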
Regarding claim 28, Jakowatz as modified above teaches all of the elements of claim 1, as stated above, as well as one or more pre-processing units configured to generate the data samples from responses to the representation of the modulated signal (Pg. 1, “We consider the general problem of imaging a target (typically a vehicle) moving at constant speed in a straight line, with a SAR system that forms imagery collected in the spotlight mode.”, necessarily pre-processing units are configured to generate data samples of the modulated signal in a SAR system).

Regarding claims 29-31 and 33, the recited methods and systems perform virtually the same function as the image processing system of claim 1. They are rejected under the same analysis.

Regarding claim 32, the recited image processing system performs virtually the same function as the image processing system of claims 1 and 5. It is rejected under the same analysis.

Regarding claims 34-37, the recited elements perform virtually the same functions as those of claims 2-5, respectively. They are rejected under the same analysis.
Regarding claims 38-51, the recited elements perform virtually the same functions as those of claims 6-19, respectively. Each is rejected under the same analysis.
Regarding claims 55-58, the recited elements perform virtually the same functions as those of claims 23-26, respectively. They are rejected under the same analysis.

Claim(s) 20-22 and 52-54 are rejected under 35 U.S.C. 103 as being unpatentable over Jakowatz as modified by Cohen above, and further in view of Rohling (NPL, “Radar CFAR Thresholding in Clutter and Multiple Target Situations”, published 1983, pdf attached).

Regarding claim 20, Jakowatz as modified in view of Cohen teaches all of the elements of claim 19, as stated above. They do not explicitly disclose wherein the one or more objects identified in the one or more second representations of the field of view at the first range are identified based on an adaptive threshold. Rohling teaches wherein the one or more objects identified in the one or more second representations of the field of view at the first range are identified based on an adaptive threshold (Pg. 1, Abstract, “Radar detection procedures involve the comparison of the received signal amplitude to a threshold. In order to obtain a constant false-alarm rate (CFAR), an adaptive threshold must be applied reflecting the local clutter situation. The cell averaging approach, for example, is an adaptive procedure.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jakowatz and Cohen to incorporate the teachings of Rohling to include the one or more objects identified based on an adaptive threshold. CFAR is the standard adaptive thresholding technique applied after radar target recovery. One of ordinary skill in the art would recognize the advantage of utilizing this well-known technique to improve the accuracy of object identification.

Regarding claim 21, Jakowatz as modified in view of Cohen teaches all of the elements of claim 19, as stated above, and when modified further in view of Rohling teaches wherein the one or more objects identified in the one or more second representations of the field of view at the first range are identified based on a CFAR (Rohling; Fig. 3, Pgs. 608-609, Cols. 2-1, “The idea is to modify the common CFAR techniques by replacing the usual clutter power estimation based on arithmetic averaging by a new procedure which has proven useful for similar tasks in general image processing applications.”).

Regarding claim 22, Jakowatz as modified in view of Cohen teaches all of the elements of claim 19, as stated above, and when modified further in view of Rohling teaches wherein the CFAR (Rohling; Fig. 3) is a three-dimensional (3D) CFAR (Cohen; Pg. 9, Col. 1, “To recover jointly the range, azimuth and Doppler frequency of the targets, we apply the concept of Doppler focusing from [12] to our setting.”, in view of the 3D Range-Azimuth-Doppler data cube generated by Cohen, one of ordinary skill in the art would realize that implementing a 3D CFAR algorithm to accurately estimate the background noise in all three dimensions would be a routine adaptation of the known CFAR methods).

Regarding claims 52-54, the recited elements perform virtually the same functions as those of claims 20-22, respectively. They are rejected under the same analysis.
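The cell-averaging CFAR idea quoted from Rohling, comparing each cell against a threshold scaled from the locally estimated clutter power, can be sketched in one dimension. This is a generic CA-CFAR sketch with made-up parameter values, not Rohling's modified estimator.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR: compare each cell under test against a
    threshold scaled from the mean of nearby training cells, skipping
    guard cells immediately around the cell under test."""
    detections = []
    for i in range(train + guard, len(power) - train - guard):
        # Training cells on both sides, guard cells excluded.
        window = np.r_[power[i - guard - train:i - guard],
                       power[i + guard + 1:i + guard + train + 1]]
        if power[i] > scale * window.mean():  # adaptive, locally set threshold
            detections.append(i)
    return detections
```

For example, a flat noise floor with one strong return yields a detection only at the spike, because the threshold adapts to the local clutter estimate everywhere else. A 3D variant would draw the training cells from a neighborhood in the range-azimuth-Doppler cube instead of a 1D window.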
Conclusion

Pertinent Prior Art: US 2023/0059523 A1, “Object sensing from a potentially moving frame of reference with virtual apertures formed from sparse antenna arrays”, Cattle et al. Very similar architecture to the claimed invention; same assignee but different inventors.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID A WAMBST, whose telephone number is (703) 756-1750. The examiner can normally be reached M-F 9-6:30 EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID ALEXANDER WAMBST/
Examiner, Art Unit 2663

/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698

Prosecution Timeline

Sep 01, 2023
Application Filed
Nov 14, 2025
Non-Final Rejection — §103, §112
Mar 26, 2026
Examiner Interview Summary
Mar 26, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597278: IMAGE AUTHENTICITY DETECTION METHOD AND DEVICE, COMPUTER DEVICE, AND STORAGE MEDIUM (2y 5m to grant; granted Apr 07, 2026)
Patent 12524892: SYSTEMS AND METHODS FOR IMAGE REGISTRATION (2y 5m to grant; granted Jan 13, 2026)
Patent 12437437: DIFFUSION MODELS HAVING CONTINUOUS SCALING THROUGH PATCH-WISE IMAGE GENERATION (2y 5m to grant; granted Oct 07, 2025)
Patent 12423783: DIFFERENTLY CORRECTING IMAGES FOR DIFFERENT EYES (2y 5m to grant; granted Sep 23, 2025)
Patent 12380566: METHOD OF SEPARATING TERRAIN MODEL AND OBJECT MODEL FROM THREE-DIMENSIONAL INTEGRATED MODEL AND APPARATUS FOR PERFORMING THE SAME (2y 5m to grant; granted Aug 05, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 99% (+47.4%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 27 resolved cases by this examiner. Grant probability derived from career allow rate.
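The headline allow rate follows directly from the figures the page itself reports. A purely illustrative check, using only the stated inputs (the exact formula behind the 99% with-interview figure is not stated and is not reconstructed here):

```python
# Reconstructing the career allow rate from the card's own stated inputs.
granted, resolved = 18, 27
career_allow_rate = granted / resolved  # 18/27 = 0.666..., shown as "67%"
# The "+47.4% interview lift" compares outcomes among resolved cases with
# vs. without an interview; how it combines with the base rate to produce
# the 99% "with interview" projection is not specified on this page.
```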
