Prosecution Insights
Last updated: April 19, 2026
Application No. 18/398,360

SYSTEM AND METHOD FOR ENHANCING RADAR DATA

Final Rejection — §102, §103, §112
Filed: Dec 28, 2023
Examiner: HENSON, BRANDON JAMES
Art Unit: 3648
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: BITSENSING INC.
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 96%

Examiner Intelligence

Grants 69% — above average
Career Allow Rate: 69% (38 granted / 55 resolved; +17.1% vs TC avg)
Interview Lift: +27.2% (strong; based on resolved cases with interview)
Typical timeline: 3y 3m avg prosecution (61 currently pending)
Career history: 116 total applications across all art units
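The headline numbers above can be reproduced with simple arithmetic. A minimal sketch, assuming (as the projections footnote on this page suggests) that grant probability equals the career allow rate and that the with-interview figure simply adds the reported lift; the formulas are my reconstruction, not the vendor's:

```python
# Reconstructing the dashboard's headline examiner metrics from the
# resolved-case counts shown above. Assumption (hypothetical): grant
# probability = career allow rate, and the with-interview figure adds
# the reported +27.2% lift on top of it.
granted, resolved = 38, 55

allow_rate = granted / resolved          # career allow rate
with_interview = allow_rate + 0.272      # reported interview lift
implied_tc_avg = allow_rate - 0.171      # page reports +17.1% vs TC avg

print(f"Career allow rate:  {allow_rate:.0%}")      # 69%
print(f"With interview:     {with_interview:.0%}")  # 96%
print(f"Implied TC average: {implied_tc_avg:.1%}")  # 52.0%
```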

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§103: 53.1% (+13.1% vs TC avg)
§102: 21.6% (-18.4% vs TC avg)
§112: 21.1% (-18.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 55 resolved cases.
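Subtracting each delta from the examiner's rate recovers the implied Tech Center average per statute. The sketch below is my own reconstruction from the figures above, not the tool's actual methodology; notably, all four statutes imply a TC average near 40%:

```python
# Sanity-check (hypothetical reconstruction) of the statute-specific
# deltas shown above: examiner rate minus the delta vs the Tech Center
# average recovers the implied TC average for each statute.
rates = {            # statute: (examiner rate, delta vs TC average)
    "§101": (0.034, -0.366),
    "§103": (0.531, +0.131),
    "§102": (0.216, -0.184),
    "§112": (0.211, -0.189),
}
for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")
```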

Office Action

§102 §103 §112
DETAILED ACTION

Status of Claims

Claims 1, 9, 11 are amended. Claims 1-11 are pending.

Priority

Applicant’s claim for the benefit of prior-filed application KR 1020230176526, filed 12/07/2023, under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claims 1 and 11, the phrase "so as" renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d). It is unclear if the claim includes restoring pixels. It is also unclear how pixels can be restored (i.e., increasing a pixel count) without changing the pixel range. For examination purposes, the examiner is interpreting the limitation to recite, “wherein the pretrained artificial intelligence restoration model has been trained…in order to enhance pixels of the first image without changing a pixel range of the first image.”

Claims 2-10 are rejected under 35 U.S.C. 112(b) due to their dependency on Claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Tyagi (US 20230140890) in view of Chen (US 20230046274).

Regarding Claims 1, 11, Tyagi discloses the following limitations:

A system for enhancing radar data, comprising: (Tyagi – [0004] This document describes techniques and systems for machine-learning-based super resolution of radar data.)

(Claim 11) A method for enhancing radar data, comprising: (Tyagi – [0004], [0005] a method includes obtaining, from an electromagnetic sensor, sensor data. The method further includes generating, based on the sensor data, a first sensor image representing the sensor data in multiple dimensions. The method further includes generating, based on the sensor data, a second sensor image having a higher resolution in the multiple dimensions than the first sensor image. The method further includes training, by machine learning and based on the first sensor image being used as input data and the second sensor image being used as ground truth data, a model to generate a high-resolution sensor image similar to the second sensor image, the high-resolution sensor image to be used for at least one of detecting objects, tracking objects, classification, or segmentation.)
a radar device; (Tyagi – [0004])

at least one processor; and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the system to: (Tyagi – [0006] These and other described techniques may be performed by hardware or a combination of hardware and software executing thereon. For example, a computer-readable storage media (CRM) may have instructions stored thereon and that when executed configure a processor to perform the described techniques. A system may include means for performing the described techniques. A processor or processor unit may be part of a system that is configured to execute the methods and techniques described herein.)

acquire first radar data corresponding to a target area through the radar device, convert the acquired first radar data into a first image, (Tyagi – [0005])

infer a second image corresponding to the first image based on a pretrained artificial intelligence restoration model, and generate second radar data corresponding to the target area based on the inferred second image, the second radar data having a higher resolution than the first radar data, and (Tyagi – [Fig. 1], [0005], [0029] Likewise, a high-resolution image generator 116 can generate a high-resolution sensor image 118 that has been processed such that the resolution of the azimuth dimension and/or the range dimension has been increased to a desired level. To accomplish this, the high-resolution image generator 116 may use a traditional method (e.g., IAA) that includes time consuming and computing resource consuming calculations but provides the desired level of resolution for the high-resolution sensor image 118. Using the traditional method may be acceptable for this task since the training is executed as a training application and not a real time application. In other aspects, continuous training and regular updates to an already trained super resolution model may take place in parallel to using the already trained super resolution model in a real time application.)

wherein the pretrained artificial intelligence restoration model has been trained, during a training phase, based on first training radar data and second training radar data with a higher resolution than the first training radar data in order to enhance pixels of the first image without changing a pixel range of the first image. (Tyagi – [0005], [0029] Tyagi does not explicitly teach “so as to restore pixels of the first image without changing a pixel range of the first image”.)

Tyagi does not explicitly teach the following limitations; however, Chen, in the same field of endeavor, teaches:

pixels (Chen – [0054] the output of radar NN 510 can include an array 524 of n×m radar superpixels, each radar superpixel associated with a respective radar feature vector 522. The size of the array can be smaller than the size N×M of the array of pixels of radar data 502. Each of the radar superpixels of the array 524 can correspond to multiple pixels of radar data 502.)

so as to restore pixels of the first image without changing a pixel range of the first image. (Chen – [0054], [0056] The output of radar NN 510 and camera NN 520 can be joined (e.g., concatenated) into a combined feature vector 540. For example, object identification module 220 of FIG. 2 can identify, for a specific radar blob (or return point), a radar superpixel of the array 524 that corresponds to a region containing the blob. The object identification module 220 can select a radar feature vector 522 associated with the identified superpixel.)
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the radar data of Tyagi with the superpixel arrays of Chen in order to determine weights and biases during training (Chen – [0054]).

Regarding Claim 2, Tyagi further discloses:

wherein the first training radar data and the second training radar data are acquired corresponding to a same environment. (Tyagi – [0026] The training environment 100 can use the vehicle 102 equipped with the radar system 104 to collect radar data 110 related to an object 106 to be input into a super resolution model training system 108.)

Regarding Claim 3, Tyagi further teaches:

wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the system to: (Tyagi – [0006])

Tyagi does not explicitly teach the following limitations; however, Chen, in the same field of endeavor, teaches:

convert the first radar data into a location data in a form of Cartesian coordinates. (Chen – [0031] The low-level data can include the radar intensity map I(x.sub.1,x.sub.2,x.sub.3), where {x.sub.j} is a set of coordinates, e.g., spherical coordinates R, θ, ϕ or Cartesian coordinates x, y, z, or any other suitable coordinates (e.g., elliptic coordinates, parabolic coordinates, etc.).)

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the radar data of Tyagi with the Cartesian coordinates of Chen in order to validate objects detected with radar data (Chen – [0031]).
Regarding Claim 4, Tyagi further teaches:

wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the system to: (Tyagi – [0006])

as at least a part of the inferring the second image, generate new data corresponding to pixels not containing data among a plurality of pixels included in the first image while maintaining a number of pixels in the first image. (Tyagi – [0005], [0029] Tyagi does not explicitly teach “pixels”.)

Tyagi does not explicitly teach the following limitations; however, Chen, in the same field of endeavor, teaches:

pixels (Chen – [0054])

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the radar data of Tyagi with the pixel arrays of Chen in order to determine weights and biases during training (Chen – [0054]).

Regarding Claim 5, Tyagi further teaches:

wherein the first image and the second image are in a form of an image composed of a plurality of channels, and the plurality of channels correspond to density, elevation and power. (Tyagi – [0045] the input layer 402 can include two or more input channels. One input channel can receive the magnitudes of the low-level radar data [0046] model 400 receives the radar data from the input layer 402 and performs feature extraction functions on the inputted radar data. The first convolution layer 404-1 (and the other convolutions layers 404) uses a rectangular filter kernel to compensate for a lower resolution in at least one of the dimensions (e.g., azimuth angle, elevation angle) included in the radar data. Tyagi does not explicitly teach “density”.)
Tyagi does not explicitly teach the following limitations; however, Chen, in the same field of endeavor, teaches:

density (Chen – [0054])

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the radar data of Tyagi with the pixel arrays of Chen in order to determine weights and biases during training (Chen – [0054]).

Regarding Claim 6, Tyagi further teaches:

wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the system to: (Tyagi – [0006])

as at least a part of the converting the acquired first radar data into the first image, convert a location data of the first radar data into an image of a first channel among the plurality of channels and convert an elevation value and a power value of the first radar data corresponding to the location data into an image of a second channel among the plurality of channels and an image of a third channel among the plurality of channels. (Tyagi – [0005], [0029], [0045], [0046])

Regarding Claim 9, Tyagi further teaches:

wherein the pretrained artificial model has been trained to infer the second image from the first image by minimizing a pixel value error between a first training image corresponding to the first training radar data and a second training image corresponding to the second training radar data. (Tyagi – [0005], [0029], [0051] During the training stage, the super-resolved radar approximations 608 can be compared to the desired high-resolution sensor images, and an error can be calculated between the desired high-resolution sensor images and the super-resolved approximations 608. The error can be used to further train the model.
During the inference stage, the model can output the super-resolved approximations 608 of the low-level radar data input that predict the desired high-resolution sensor images, circumventing the need for the time-consuming iterative methods and resulting in detecting targets with minimum sidelobe interference.)

Tyagi does not explicitly teach the following limitations; however, Chen, in the same field of endeavor, teaches:

pixel (Chen – [0054])

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the radar data of Tyagi with the pixel arrays of Chen in order to determine weights and biases during training (Chen – [0054]).

Regarding Claim 10, Tyagi further discloses:

wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the system to: (Tyagi – [0006])

detect, track or classify an object based on the second radar data. (Tyagi – [0031] the super resolution model may be ideal for certain real time applications such as detecting and tracking objects in automotive systems, classification, segmentation, and various perception tasks.)

Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Tyagi (US 20230140890) in view of Chen (US 20230046274), as applied to Claim 5 above, and further in view of Feit (US 12227209).

Regarding Claim 7, Tyagi further teaches:

wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the system to: (Tyagi – [0006])

as at least a part of the generating the second radar data, normalize pixel values of an image of a first channel among the plurality of channels corresponding to the density with normalized data and sample a location data of pixels from the normalized data. (Tyagi – [0045], [0046] Tyagi does not explicitly teach “density” or “normalize pixel values”.)
Tyagi does not explicitly teach the following limitations; however, Chen, in the same field of endeavor, teaches:

density (Chen – [0054])

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the radar data of Tyagi with the pixel arrays of Chen in order to determine weights and biases during training (Chen – [0054]).

Tyagi does not explicitly teach the following limitations; however, Feit, in the same field of endeavor, teaches:

normalize pixel values (Feit – [col. 2 ln. 40-46] the model can determine a normalized occupancy prediction based at least in part on comparing the occupancy prediction and the blurred occupancy prediction. In some examples, the normalized occupancy prediction can be determined by dividing a value associated with the occupancy prediction (e.g., a value of one or more pixels in an occupancy map).)

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the calculations of Tyagi with the standard deviation calculations of Feit in order to determine a normalized occupancy prediction (Feit – [col. 2 ln. 40-46]).

Regarding Claim 8, Tyagi further teaches:

wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the system to: (Tyagi – [0006])

as at least a part of the generating the second radar data, extract an elevation value and a power value of the pixel from an image of a second channel among the plurality of channels and an image of a third channel among the plurality of channels respectively corresponding to the elevation value and the power value of the second image based on the sampled location data. (Tyagi – [0005], [0029], [0045], [0046] Tyagi does not explicitly teach “pixel”.)
Tyagi does not explicitly teach the following limitations; however, Chen, in the same field of endeavor, teaches:

pixel (Chen – [0054])

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the radar data of Tyagi with the pixel arrays of Chen in order to determine weights and biases during training (Chen – [0054]).

Response to Arguments

Applicant’s arguments, see Pages 5-8, filed 02/05/2026, with respect to the rejection under 35 U.S.C. § 102(a)(2) regarding Claims 1-2, 10-11 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Claims 1-11 are now rejected under 35 U.S.C. § 103.

Applicant’s arguments, see Pages 5-8, filed 02/05/2026, with respect to the rejection under 35 U.S.C. § 103 have been fully considered and are not persuasive. Applicant argues, on pages 7-8, that the combination of Tyagi and Chen does not teach “an artificial intelligence model pretrained to restore pixels of a first image without changing a pixel range of the first image”. The examiner has rejected the use of restoring pixels under 35 U.S.C. § 112(b) since no loss of pixels is occurring as claimed. Chen clearly teaches the use of superpixels to correspond to the pixels according to the radar data while still being associated with a respective radar vector.

Applicant’s arguments, see Page 8, filed 02/05/2026, with respect to the rejections under 35 U.S.C. §§ 102(a)(2) and 103 have been fully considered and are not persuasive. Applicant argues that the dependent claims are allowable due to their dependency on the independent claims. The examiner disagrees due to the above-mentioned rejections.
Applicant's remaining arguments amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims is understandable and distinguishable from other inventions.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

The prior art made of record and not relied upon that is considered pertinent to applicant's disclosure or directed to the state of the art is listed on the PTO-892 mailed on 10 November 2025. The following is a brief description of relevant prior art that was cited but not applied: Chen (US 20230350051) describes techniques for three dimensional (3D) object detection and localization.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON JAMES HENSON whose telephone number is (703)756-1841. The examiner can normally be reached Monday-Friday 9:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Resha H. Desai, can be reached at (571) 270-7792. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRANDON JAMES HENSON/
Examiner, Art Unit 3648

/RESHA DESAI/
Supervisory Patent Examiner, Art Unit 3648
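For readers less familiar with the imaging terminology in the §112(b) rejection, the distinction the examiner draws (pixel count vs. pixel range) can be shown with a toy upsampling example. This is purely illustrative, not the applicant's claimed model or either reference's method:

```python
# Toy, pure-Python illustration (hypothetical, not the claimed invention):
# "restoring" pixels can increase an image's pixel COUNT while leaving its
# pixel value RANGE unchanged, which is the distinction at issue above.
low_res = [[0.1, 0.9],
           [0.4, 0.6]]                      # 2x2 "first image"

# Nearest-neighbor upsample to 4x4: duplicate each row and each column.
high_res = [[v for v in row for _ in (0, 1)] for row in low_res for _ in (0, 1)]

flat_lo = [v for row in low_res for v in row]
flat_hi = [v for row in high_res for v in row]

print(len(flat_lo), len(flat_hi))           # 4 16 -> pixel count quadruples
print(min(flat_lo), max(flat_lo))           # 0.1 0.9
print(min(flat_hi), max(flat_hi))           # 0.1 0.9 -> value range unchanged
```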

Prosecution Timeline

Dec 28, 2023
Application Filed
Nov 01, 2025
Non-Final Rejection — §102, §103, §112
Feb 05, 2026
Response Filed
Feb 20, 2026
Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601830
METHOD AND APPARATUS FOR OBTAINING LOCATION INFORMATION USING RANGING BLOCK AND RANGING ROUNDS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12584996
HARDWARE GENERATION OF 3D DMA CONFIGURATIONS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12566242
RADIO FREQUENCY APPARATUS AND METHOD FOR ASSEMBLING RADIO FREQUENCY APPARATUS
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12566258
SYSTEM AND METHOD OF FULLY POLARIMETRIC PULSED RADAR
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12560700
METHOD AND DEVICE FOR DETERMINING AT LEAST ONE ARTICULATION ANGLE OF A VEHICLE COMBINATION
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview (+27.2%): 96%
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 55 resolved cases by this examiner. Grant probability derived from career allow rate.
