Prosecution Insights
Last updated: April 19, 2026
Application No. 17/726,279

HUMAN-PERCEPTIBLE AND MACHINE-READABLE SHAPE GENERATION AND CLASSIFICATION OF HIDDEN OBJECTS

Non-Final OA (§103)
Filed: Apr 21, 2022
Examiner: BROUGHTON, KATHLEEN M
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: UNIVERSITY OF SOUTH CAROLINA
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 83%, above average (219 granted / 263 resolved; +21.3% vs TC avg)
Interview Lift: +8.3%, a moderate lift, among resolved cases with an interview
Avg Prosecution: 2y 7m typical timeline; 34 applications currently pending
Total Applications: 297 across all art units (career history)

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 51.2% (+11.2% vs TC avg)
§102: 24.1% (-15.9% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Baseline is a Tech Center average estimate. Based on career data from 263 resolved cases.
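A note on the arithmetic behind the "vs TC avg" deltas: every value/delta pair shown above is consistent with subtracting a single Tech Center baseline of 40.0% (for example, 51.2% - 11.2% = 40.0%). The minimal Python sketch below back-computes that implied baseline from the displayed numbers only; the dashboard's actual methodology is not stated, so treat it purely as an illustration.

```python
# Hedged sketch: reproducing the "vs TC avg" deltas shown on this card.
# The 40.0% baseline is back-computed from the displayed value/delta pairs
# (e.g. 51.2% - 11.2% = 40.0%); the dashboard's real method is not disclosed.
examiner_rates = {"101": 10.9, "103": 51.2, "102": 24.1, "112": 11.4}  # percent
tc_average_estimate = 40.0  # percent, implied baseline

for statute, rate in examiner_rates.items():
    delta = rate - tc_average_estimate
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```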

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment
A Preliminary Amendment was made 05/31/2022 to amend Figure 1.

Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/23/ is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is considered by the examiner.

Drawings
Color photographs and color drawings are not accepted in utility applications unless a petition filed under 37 CFR 1.84(a)(2) is granted. Any such petition must be accompanied by the appropriate fee set forth in 37 CFR 1.17(h), one set of color drawings or color photographs, as appropriate, if submitted via the USPTO patent electronic filing system or three sets of color drawings or color photographs, as appropriate, if not submitted via the USPTO patent electronic filing system, and, unless already present, an amendment to include the following language as the first paragraph of the brief description of the drawings section of the specification: The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee. Color photographs will be accepted if the conditions for accepting color drawings and black and white photographs have been satisfied. See 37 CFR 1.84(b)(2). Color was identified in Figures 1-8.

Specification
The disclosure is objected to because it contains an embedded hyperlink and/or other form of browser-executable code, as identified in at least specification pages 18-19. Applicant is required to delete the embedded hyperlink and/or other form of browser-executable code; references to websites should be limited to the top-level domain name without any prefix such as http:// or other browser-executable code. See MPEP § 608.01.

Claim Objections
Claims 1 and 11 are objected to because of the following informalities: Claim 1. Appropriate correction is required.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-4, 6-10 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al (Towards a convolutional neural network coupled millimetre-wave coded aperture image classifier system) in view of Guan et al (High Resolution mmWave Imaging for Self-Driving Cars).
Regarding Claim 1, Sharma et al teach Method for approximating Synthetic Aperture Radar (SAR) imaging on (computational imaging (CI) radar aperture synthesized method using CNN; Fig 1 and 2. CNN enabled Coded Aperture Image Classifier ¶ 1-3), to enable human-perceptible and machine-readable shape generation and classification of hidden objects on (the computation mmW imaging is processed with machine learning techniques (CNN) to image a target object and classify the object on imaging devices used for security; Fig 1-3 and 1. Introduction ¶ 1, 2. CNN enabled Coded Aperture Image Classifier ¶ 1-3, 2.2 Building the CNN ¶ 1-2), comprising: 1 obtaining from a (mmWave data is obtained in bi-static model; Fig 1 and CNN enabled Coded Aperture Image Classifier ¶ 1-3); and using a machine-learning model to recover high-spatial frequencies in the object and reconstruct a 2D shape of the target object (the CNN algorithm is used to compress the SAR pattern to identify a shape of the object; Fig 1-3 and 2. CNN enabled Coded Aperture Image Classifier ¶ 1-3, 2.2 Building the CNN ¶ 1-2). Sharma et al does not explicitly teach to obtain 3D mmWave data from a mobile device. Guan et al is analogous art pertinent to the technological problem addressed in this application and teach obtaining from a mobile device 3D mmWave shape data for a target object (3D mmWave heat-maps are generated from sensors on an autonomous vehicle (mobile device) used to identify objects; Fig 1-4, 10 and 1. Introduction ¶ 6-7, 5. Hawkeye’s GAN Architecture ¶ 2). It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Sharma et al with Guan et al including to obtain 3D mmWave data from a mobile device. By using a conditional GAN to generate high-resolution 2D depth-map data from a 3D mmWave heat-map, each pixel represents depth and can perform the 3D-2D conversion focused on the object independent of the background while removing artifacts while generating the object data, thereby improving the mapping and accuracy of generating the object, as recognized by Guan et al (1. Introduction – Contribution point 1, 3. Primer on GANs ¶ 2) and by using the model on a mobile device, the model can be deployed on an autonomous vehicle, thereby improving the safety and driving function of self-driving cars in low visibility conditions, as recognized by Guan et al (1. Introduction – Contribution point 3). Regarding Claim 3, Sharma et al in view of Guan et al teach the method of claim 1 (as described above), further comprising predicting 3D features (Guan et al, the 3D heat map can be used to determine quantitative features, such as size, orientation and shape of object (car); Fig 10 and 9.2 Quantitative Metrics – Size, Orientation, Shape of Car) and category of the target object (Sharma et al, the CNN algorithm is used to include classification of the object; Fig 2-6 and 2.2 Building the CNN, 3.2 Confusion Matrix and Classification Report). It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Sharma et al with Guan et al including predicting 3D features. By using 3D mmWave data, pixels can be represented by depth, thereby improving the propagation and reflection characteristics to determine accurate distance of objects in low visibility conditions, as recognized by Guan et al (1. Introduction – Contribution point 3). 
Regarding Claim 4, Sharma et al in view of Guan et al teach the method of claim 1 (as described above), wherein the target object comprises one of a set of target objects to screen and remove for security applications without requiring physical searches (Sharma et al, objects are identified during security screen (interpreted as a means to eliminate physical searches); 3.2 Confusion Matrix and Classification Report ¶ 2). Regarding Claim 6, Sharma et al in view of Guan et al teach the method of claim 1 (as described above), wherein the machine-learning model comprises a conditional Generative Adversarial Network (cGAN) trained, based on inputs of examples of mmWave shapes from traditional reconstruction and based on the corresponding ground truth shapes, to learn the association between the 3D mmWave shape data and the 2D ground truth shape (Guan et al, a conditional GAN is trained and used to analyze mmWave imaging based on ground truth depth map and mmWave radar heatmaps to predict the object; Fig 1-3 and 4. System Overview, 3. Hawkeye’s cGAN ¶ 1-2). Regarding Claim 7, Sharma et al in view of Guan et al teach the method of claim 6 (as described above), further comprising generating a full 2D image of the target object based on the 3D mmWave shape data for a target object (Guan et al, the 3D mmWave Input data is transposed to a 2D output in the Generator; 5. Hawkeye’s cGAN ¶ 2-4). Regarding Claim 8, Sharma et al in view of Guan et al teach the method of claim 1 (as described above), further comprising predicting the shape of the target object (Sharma et al, the object is predicted based on its shape, regardless of data augmentation, for classification; Fig 2, 3, 5 and 2.1 Image Dataset), and the mean depth (Guan et al, the m channels of 2D depth maps are concatenated to a feature map (thereby mean depth); 5.A. GAN Generator Architecture ¶ 6) and orientation of the shape in a 3D plane (Guan et al, the depths and orientation of the object can be detected from the shape; Fig 10 and 9.1 Qualitative Results ¶ 1, 9.3 Quantitative Results – Orientation). It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Sharma et al with Guan et al including predicting the mean depth and orientation of the shape in a 3D plane. By using 3D mmWave data, pixels can be represented by depth, thereby improving the propagation and reflection characteristics to determine accurate qualitative and quantitative data of objects in low visibility conditions, such as location, shape, size and orientation of objects, as recognized by Guan et al (1. Introduction – Implementation, Contribution). Regarding Claim 9, Sharma et al in view of Guan et al teach the method of claim 8 (as described above), further comprising automatically classifying the objects into different categories (Sharma et al, the object is classified into one of the four categories; Fig 2, 3, 5 and 2.1 Image Dataset). Regarding Claim 10, Sharma et al in view of Guan et al teach the method of claim 1 (as described above), further comprising providing one or more processors programmed to provide a machine-learning model to perform the method (Sharma et al, the CNN model is enabled on a CPU and FPGA platform; 2. CNN Enabled Coded Aperture Image Classifier ¶ 7). Claims 2, 5, 11-20 are rejected under 35 U.S.C. 
103 as being unpatentable over Sharma et al (Towards a convolutional neural network coupled millimetre-wave coded aperture image classifier system) in view of Guan et al (High Resolution mmWave Imaging for Self-Driving Cars) and Lu et al (See Through Smoke: Robust Indoor Mapping with Low-cost mmWave Radar). Regarding Claim 2, Sharma et al in view of Guan et al teach the method of claim 1 (as described above). Sharma et al in view of Guan et al do not explicitly teach displaying the reconstructed 2D target object shape. Lu et al is analogous art pertinent to the technological problem addressed in this application and teach displaying the reconstructed 2D target object shape (a 2D prediction image is generated and displayed in the hand-held device; Fig 11 and 7.4 Hand-held Devices). It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Sharma et al in view of Guan et al with Lu et al including displaying the reconstructed 2D target object shape. By displaying the predicted object images on a handheld device, first responders may quickly and effectively view an environment in an emergency or rescue environment quickly and effectively, thereby improving rescue operations, as recognized by Lu et al (7.3 Testing in Smoke-filled Environments, 7.4 Hand-held Devices). Regarding Claim 5, Sharma et al in view of Guan et al teach the method of claim 1 (as described above). Sharma et al in view of Guan et al do not teach wherein the mobile device is handheld. Lu et al is analogous art pertinent to the technological problem addressed in this application and teach wherein the mobile device is handheld (the mmWave radar indoor mapping system may be incorporated with a handheld device; Fig 1, 10, 11 and 7.4 Extending to Hand-held Devices). It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Sharma et al in view of Guan et al with Lu et al including wherein the mobile device is handheld. By using the map construction of mmWave images on a handheld device, first responders may quickly and effectively view an environment in an emergency or rescue environment quickly and effectively, thereby improving rescue operations, as recognized by Lu et al (7.3 Testing in Smoke-filled Environments, 7.4 Hand-held Devices). Regarding Claim 11, Sharma et al teach Method for imaging and screening in (computational imaging (CI) radar aperture synthesized method using CNN; Fig 1 and 2. CNN enabled Coded Aperture Image Classifier ¶ 1-3), to achieve hidden shape perception by humans or classification by machines, to enable in situ security check without physical search of persons or baggage (the computation mmW imaging is processed with machine learning techniques (CNN) to image a target object and classify the object on imaging devices used for security checks, such as airport or train station; Fig 1-3 and 1. Introduction ¶ 1, 2. 
CNN enabled Coded Aperture Image Classifier ¶ 1-3, 2.2 Building the CNN ¶ 1-2), comprising: training a machine-learning model, based on inputs of examples of (the CNN algorithm is trained to identify a shape pattern of the object, based on original CI image data (interpreted as ground truth prior to augmentation), and classify the object; Fig 1-3 and 2.1 Image Dataset, 2.3 Model Training); providing input to the trained machine-learning model, such input comprising (mmWave data is obtained in bi-static model; Fig 1 and CNN enabled Coded Aperture Image Classifier ¶ 1-3); and operating the trained machine-learning model to process such input data to determine and output the corresponding ground truth 2D shape (the CNN algorithm is used to compress the SAR pattern to identify a shape of the object; Fig 1-3 and 2. CNN enabled Coded Aperture Image Classifier ¶ 1-3, 2.2 Building the CNN ¶ 1-2). Sharma et al does not explicitly teach to obtain 3D mmWave data from a mobile device or in handheld device settings. Guan et al is analogous art pertinent to the technological problem addressed in this application and teach to obtain 3D mmWave shape data for a target object from a mobile device (3D mmWave heat-maps are generated from sensors on an autonomous vehicle (mobile device) used to identify objects; Fig 1-4, 10 and 1. Introduction ¶ 6-7, 5. Hawkeye's GAN Architecture ¶ 2). It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Sharma et al with Guan et al including to obtain 3D mmWave data from a mobile device. By using a conditional GAN to generate high-resolution 2D depth-map data from a 3D mmWave heat-map, each pixel represents depth and can perform the 3D-2D conversion focused on the object independent of the background while removing artifacts during generation of the object data, thereby improving the mapping and accuracy of generating the object, as recognized by Guan et al (3. Primer on GANs ¶ 2). Lu et al is analogous art pertinent to the technological problem addressed in this application and teach wherein the mobile device is handheld (examiner notes the "handheld device" is part of the preamble and not the body of the claim; see MPEP § 2111.02) (the mmWave radar indoor mapping system may be incorporated with a handheld device; Fig 1, 10, 11 and 7.4 Extending to Hand-held Devices). It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Sharma et al in view of Guan et al with Lu et al including wherein the mobile device is handheld. By using the map construction of mmWave images on a handheld device, first responders may quickly and effectively view an environment in an emergency or rescue situation, thereby improving rescue operations, as recognized by Lu et al (7.3 Testing in Smoke-filled Environments, 7.4 Hand-held Devices).

Regarding Claim 12, Sharma et al in view of Guan et al and Lu et al teach the method according to claim 11 (as described above), further comprising: displaying the determined corresponding ground truth 2D shape (Lu et al, a 2D prediction image is generated and displayed in the hand-held device with corresponding ground truth via lidar data (3. Millimap Overview, Mobile Robot Sensing, and Fig 5, 8); Fig 10, 11 and 7.4 Hand-held Devices); and predicting 3D features (Guan et al, the 3D heat map can be used to determine quantitative features, such as size, orientation and shape of object (car); Fig 10 and 9.2 Quantitative Metrics – Size, Orientation, Shape of Car) and classification category of the determined corresponding ground truth 2D shape (Sharma et al, the object is predicted based on its shape, compared to the original CI image data for classification; Fig 2, 3, 5 and 2.1 Image Dataset). It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Sharma et al in view of Guan et al and Lu et al including displaying the determined corresponding ground truth 2D shape by Lu et al and predicting 3D features by Guan et al. By displaying the ground truth 2D (lidar), the effectiveness of the map prior loss is shown, thereby allowing for the reconstruction loss functions to be visually shown, as recognized by Lu et al (Fig 5 and 4.4 Reconstruction Loss Function – Map Prior). By using 3D mmWave data, pixels can be represented by depth, thereby improving the propagation and reflection characteristics to determine accurate qualitative and quantitative data of objects in low visibility conditions, such as location, shape, size and orientation of objects, as recognized by Guan et al (1. Introduction – Implementation, Contribution).

Regarding Claim 13, Sharma et al in view of Guan et al and Lu et al teach the method according to claim 12 (as described above), wherein the classification category includes at least one of guns, knives, scissors, hammers, boxcutters, cell phones, explosives, screwdrivers, and other (Sharma et al, classification includes categories of grenades, guns, knives and scissors; Fig 2 and 2.1 Image Dataset).

Regarding Claim 14, Sharma et al in view of Guan et al and Lu et al teach the method according to claim 13 (as described above), further comprising indicating that the predicted classification category falls into a binary classification of whether the shape is suspicious or not (Sharma et al, classification includes whether the object belongs to one of the categories indicating a weapon based on the confusion matrix; Fig 2 and 3.2 Confusion Matrix and Classification Report).

Regarding Claim 15, Sharma et al in view of Guan et al and Lu et al teach the method according to claim 11 (as described above), wherein the machine-learning model comprises a conditional Generative Adversarial Network (cGAN)-based trained system (Guan et al, a conditional GAN is trained and used to analyze mmWave imaging based on ground truth depth map and mmWave radar heatmaps to predict the object; Fig 1-3 and 4. System Overview, 3. Hawkeye's cGAN ¶ 1-2).

Regarding Claim 16, Sharma et al in view of Guan et al and Lu et al teach the method according to claim 12 (as described above), further comprising determining the mean depth, the azimuth angle, the elevation angle, and the rotation angle of the corresponding ground truth 2D shape in a 3D plane (Lu et al, the target depth (down-range vs. cross-range vs. height, average interpreted as the center), the azimuth angle, the elevation integration angle, and the rotation of azimuth angle, can be determined; Fig 3, 4, 12 and 3. Resolution Analysis, 4.2 3D SAR Imagery of Realistic Scene Using EM Simulation Data).
Regarding Claim 17, Sharma et al in view of Guan et al and Lu et al teach the method according to claim 15 (as described above), further comprising providing one or more programmed processors for implementing the conditional Generative Adversarial Network (cGAN)-based trained system (Guan et al, the algorithms for the cGAN are trained and executed on a Nvidia Titan RTX GPU; 8.A. GAN Implementation & Training).

Regarding Claim 18, Sharma et al in view of Guan et al and Lu et al teach the method according to claim 17 (as described above), wherein the one or more processors are further programmed to provide respective generator and discriminator network blocks of the machine-learning model, which collectively generate a full 2D image of the shape based on the 3D mmWave shape data (Guan et al, the cGAN algorithm includes a generator and discriminator, which are used to pair the 3D mmWave heat-maps with 2D depth maps to generate the 3D scene; Fig 4-7 and 5. GAN Architecture). It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Sharma et al in view of Guan et al and Lu et al including to provide respective generator and discriminator network blocks of the machine-learning model, which collectively generate a full 2D image of the shape based on the 3D mmWave shape data. By using a conditional GAN, with a generator and discriminator, to generate high-resolution 2D depth-map data from a 3D mmWave heat-map, each pixel represents depth and can perform the 3D-2D conversion focused on the object independent of the background while removing artifacts, thereby improving the mapping and accuracy of generating the object, as recognized by Guan et al (1. Introduction – Implementation, Contribution, 3. Primer on GANs ¶ 2).

Regarding Claim 19, Sharma et al in view of Guan et al and Lu et al teach the method according to claim 18 (as described above), wherein the one or more processors are further programmed to implement within the generator network block an encoder-decoder architecture (Guan et al, the GAN, implemented on the Nvidia Titan RTX GPU (8.A. GAN Implementation & Training), uses an encoder-decoder in the Generator; Fig 4 and 5.A. GAN Generator Architecture).

Regarding Claim 20, Sharma et al in view of Guan et al and Lu et al teach the method according to claim 19 (as described above), wherein the one or more processors are further programmed: to provide respective quantifier and classifier network blocks of the cGAN-based machine-learning system (Guan et al, the cGAN includes a Generator G and Discriminator to perform quantitative output via the loss function in identification of the features, and accuracy of shape (interpreted broadly for classifying) of car is quantified; Fig 13 and 5.C. GAN Loss Function, 9.2 Quantitative Metrics); and to enable the generator network block to use feedback from the discriminator network block to adjust weights of the generator network block encoder-decoder architecture encoder-decoder layers to learn and predict accurate 2D shapes (Guan et al, the discriminator and generator are used to calculate the loss function, where a relative weight is determined for the perceptual loss and a difference loss (distance between ground truth and output) and used to update the generator and discriminator to optimize each function; 5.C. GAN Loss Function).

Claims 24-30 are rejected under 35 U.S.C.
103 as being unpatentable over Guan et al (High Resolution mmWave Imaging for Self-Driving Cars) in view of Sharma et al (Towards a convolutional neural network coupled millimetre-wave coded aperture image classifier system) and Lu et al (See Through Smoke: Robust Indoor Mapping with Low-cost mmWave Radar). Regarding Claim 24, Guan et al teach a system that approximates, on mobile mmWave devices, SAR imaging of full- sized systems, to enable human-perceptible and machine-readable shape generation and classification of hidden objects on mobile mmWave devices (self-driving car (mobile mmWave device) with cGAN system (Hawkeye) to generate shapes of objects, which may be hidden (such as in fog/rain) and may extend to multiple identified object classes; Fig 4, 10, 12 and 5. Hawkeye GAN Architecture, 9. Evaluation – Results in Fog/Rain, 10. Discussion – Extending Beyond Cars), comprising: a conditional generative adversarial network (cGAN)-based machine-learning system (interpreted as the hardware system with cGAN) (the algorithms for the cGAN are trained and executed on a Nvidia Titan RTX GPU; 8.A. GAN Implementation & Training), trained based on inputs of examples of 3D mmWave shapes and based on the corresponding ground truth shapes, to learn the association between 3D mmWave shapes and the corresponding 2D ground truth shapes (a conditional GAN is trained and used to analyze 3D mmWave imaging based on ground truth depth map and 3D mmWave radar heatmaps to predict the object; Fig 1-3 and 4. System Overview, 3. Hawkeye’s cGAN ¶ 1-2); an input to the cGAN-based machine-learning system from a mobile device of 3D mmWave shape data of target objects (mmWave signals are input to the cGAN for analysis of the 3D mmWave data and identification of the object; Fig 4-6 and 6. mmWave Imaging Module, 9.1 Results). Guan et al does not teach to enable human perceptible classification of hidden objects and a display for producing corresponding human perceptible 2D shapes output from the cGAN-based machine-learning system based on the input thereto. Sharma et al is analogous art pertinent to the technological problem addressed in this application and teach to enable human perceptible classification of hidden objects (the computation mmW imaging is processed with machine learning techniques (CNN) to image a target object and classify the object on imaging devices used for security checks, such as airport or train station; Fig 1-3 and 1. Introduction ¶ 1, 2. CNN enabled Coded Aperture Image Classifier ¶ 1-3, 2.2 Building the CNN ¶ 1-2). It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Guan et al with Sharma et al including to enable human perceptible classification of hidden objects. By classifying hidden objects detected in mmW imaging using machine learning, real-time data can be generated and analyzed for use in security screening applications, thereby improving safety and security in public spaces, as recognized by Sharma et al (1. Introduction ¶ 1). Lu et al is analogous art pertinent to the technological problem addressed in this application and teach a display for producing corresponding human perceptible 2D shapes output from the cGAN-based machine-learning system based on the input thereto (“input thereto” is interpreted as the input to the cGAN; a 2D prediction image is generated and displayed in the hand-held device based on the cGAN based Generator; Fig 1, 11 and 7.4 Hand-held Devices). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Guan et al with Lu et al including a display for producing corresponding human perceptible 2D shapes output from the cGAN-based machine-learning system based on the input thereto. By displaying the predicted object images on a handheld device, first responders may quickly and effectively view an environment in an emergency or rescue situation, thereby improving rescue operations, as recognized by Lu et al (7.3 Testing in Smoke-filled Environments, 7.4 Hand-held Devices).

Regarding Claim 25, Guan et al in view of Sharma et al and Lu et al teach the system according to claim 24 (as described above), wherein the cGAN-based machine-learning system further includes respective generator and discriminator network blocks, which collectively generate a full 2D image of the target object based on the 3D mmWave shape data for a target object (Guan et al, the cGAN algorithm includes a generator and discriminator, which are used to pair the 3D mmWave heat-maps with 2D depth maps to generate the 3D scene; Fig 4-7 and 5. GAN Architecture).

Regarding Claim 26, Guan et al in view of Sharma et al and Lu et al teach the system according to claim 25 (as described above), wherein the cGAN-based machine-learning system further includes respective quantifier and classifier network blocks (Guan et al, the cGAN includes a Generator G and Discriminator to perform quantitative output via the loss function in identification of the features, and accuracy of shape (interpreted broadly for classifying) of car is quantified; Fig 13 and 5.C. GAN Loss Function, 9.2 Quantitative Metrics).

Regarding Claim 27, Guan et al in view of Sharma et al and Lu et al teach the system according to claim 26 (as described above), wherein the quantifier network block is operative, based on cGAN outputs of the generator network block and ground truth image features of a set of target ground truth shapes, to learn and predict the mean depth (Guan et al, the m channels of 2D depth maps are concatenated to a feature map (thereby mean depth); 5.A. GAN Generator Architecture ¶ 6) and orientation of the shape in a 3D plane (Guan et al, the depths and orientation of the object can be detected from the shape; Fig 10 and 9.1 Qualitative Results ¶ 1, 9.3 Quantitative Results – Orientation).

Regarding Claim 28, Guan et al in view of Sharma et al and Lu et al teach the system according to claim 27 (as described above), wherein the classifier network block is operative, based on cGAN outputs of the generator network block and supervised classification labels of a set of target ground truth shapes, to learn and automatically classify the target objects into different categories (Lu et al, supervised learning is applied to the cGAN system with labels applied to the lidar patches data for use during training to identify and classify the objects; Fig 1 and 4.2 Cross-modal Supervision by Collocation; 7.6 Semantic Mapping Performance Metrics and Baseline).
Regarding Claim 29, Guan et al in view of Sharma et al and Lu et al teach the system according to claim 28 (as described above), wherein the generator network block includes an encoder-decoder architecture having encoder-decoder layers (Guan et al, the cGAN includes a Generator G and Discriminator to perform quantitative output via the loss function in identification of the features, and accuracy of shape (interpreted broadly for classifying) of car is quantified; Fig 13 and 5.C. GAN Loss Function, 9.2 Quantitative Metrics), and the generator network block is operative to use feedback from the discriminator network block to adjust weights of the generator network block encoder-decoder architecture encoder-decoder layers to learn and predict accurate 2D shapes (Guan et al, the discriminator and generator are used to calculate the loss function, where a relative weight is determined for the perceptual loss and a difference loss (distance between ground truth and output) and used to update the generator and discriminator to optimize each function; 5.C. GAN Loss Function).

Regarding Claim 30, Guan et al in view of Sharma et al and Lu et al teach the system according to claim 25 (as described above), wherein the cGAN-based machine-learning system comprises one or more programmed processors (Guan et al, the algorithms for the cGAN are trained and executed on a Nvidia Titan RTX GPU; 8.A. GAN Implementation & Training).

Claims 31-42 are rejected under 35 U.S.C. 103 as being unpatentable over Guan et al (High Resolution mmWave Imaging for Self-Driving Cars) in view of Lu et al (See Through Smoke: Robust Indoor Mapping with Low-cost mmWave Radar).

Regarding Claim 31, Guan et al teach a conditional generative adversarial network (cGAN)-based machine-learning system (cGAN system (Hawkeye) to generate shapes of objects, which may be hidden (such as in fog/rain) and may extend to multiple identified object classes; Fig 4, 10, 12 and 5. Hawkeye GAN Architecture, 9. Evaluation – Results in Fog/Rain, 10. Discussion – Extending Beyond Cars), comprising one or more processors programmed to use a machine-learning model (the cGAN algorithms are trained and executed on a Nvidia Titan RTX GPU; 8.A. GAN Implementation & Training) to recover the high-spatial frequencies in imperceptible 3D mmWave shape data for a target object (3D mmWave heat-maps are used to identify objects, which may be in poor visibility conditions; Fig 1, 10 and 1. Introduction ¶ 6-7, 5. Hawkeye's GAN Architecture ¶ 2, 9.1 Qualitative Results in Fog/Rain), and to reconstruct (data is reconstructed as a 2D image depth map; Fig 1, 10 and 1. Introduction ¶ 6-7, 5. Hawkeye's GAN Architecture ¶ 2, 9.1 Qualitative Results in Fog/Rain). Guan et al does not explicitly teach to display an accurate human-perceivable 2D target object shape. Lu et al is analogous art pertinent to the technological problem addressed in this application and teach to display an accurate human-perceivable 2D target object shape (a 2D prediction image is generated and displayed in the hand-held device; Fig 11 and 7.4 Hand-held Devices). It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Guan et al in view of Lu et al including to display an accurate human-perceivable 2D target object shape.
By displaying the predicted object images on a handheld device, first responders may quickly and effectively view an environment in an emergency or rescue situation, thereby improving rescue operations, as recognized by Lu et al (7.3 Testing in Smoke-filled Environments, 7.4 Hand-held Devices).

Regarding Claim 32, Guan et al in view of Lu et al teach the conditional generative adversarial network (cGAN)-based machine-learning system as in claim 31 (as described above), wherein the one or more processors are further programmed to predict the 3D image features of the 2D shape of the target object (Guan et al, the 2D depth map or 3D heat map can be used to determine quantitative metrics, such as size, orientation and shape of object (car); Fig 10 and 9.2 Quantitative Metrics – Size, Orientation, Shape of Car).

Regarding Claim 33, Guan et al in view of Lu et al teach the conditional generative adversarial network (cGAN)-based machine-learning system as in claim 32 (as described above), wherein the 3D image features comprise at least one of the object categories (Guan et al, shape of the object is determined from categories of surface missed and artifacts; 9.2 Quantitative Metrics – Shape), the mean depth (Guan et al, the m channels of 2D depth maps are concatenated to a feature map (thereby mean depth); 5.A. GAN Generator Architecture ¶ 6), the azimuth angle (Guan et al, spherical coordinates (azimuth angle, elevation angle, range) are generated from the 3D mmWave input; 5.A Generator Architecture ¶ 2), the elevation angle (Guan et al, spherical coordinates (azimuth angle, elevation angle, range) are generated from the 3D mmWave input; 5.A Generator Architecture ¶ 2), and the rotation angle of the image (Guan et al, the orientation angle (rotation angle) of the object is generated from the 3D mmWave input; 9.2 Quantitative Metrics – Orientation).

Regarding Claim 34, Guan et al in view of Lu et al teach the conditional generative adversarial network (cGAN)-based machine-learning system as in claim 31 (as described above), wherein the one or more processors are further programmed to provide respective generator and discriminator network blocks of the machine-learning model (Guan et al, the cGAN algorithm includes a generator and discriminator, which are used to pair the 3D mmWave heat-maps with 2D depth maps to generate the 3D scene; Fig 4-7 and 5. GAN Architecture), which collectively generate a full 2D image of the target object based on the 3D mmWave shape data for a target object (Guan et al, the 3D mmWave Input data is transposed to a 2D output in the Generator; 5. Hawkeye's cGAN ¶ 2-4).

Regarding Claim 35, Guan et al in view of Lu et al teach the conditional generative adversarial network (cGAN)-based machine-learning system as in claim 34 (as described above), wherein the one or more processors are further programmed to implement within the generator network block an encoder-decoder architecture (Guan et al, the cGAN, implemented on the Nvidia Titan RTX GPU (8.A. GAN Implementation & Training), uses an encoder-decoder in the Generator; Fig 4 and 5.A. GAN Generator Architecture).
Regarding Claim 36, Guan et al in view of Lu et al teach the conditional generative adversarial network (cGAN)-based machine-learning system as in claim 35 (as described above), wherein the one or more processors are further programmed for the encoder to convert the 3D mmWave shape data into a 1D feature vector using multiple 3D convolution layers and an end flatten layer (Guan et al, the input data to the encoder of the Generator creates a vector (1D vector z) representation based on multiple layers in the encoder; Fig 4 and 5.A. Generator ¶ 2-3), so that the 1D representation compresses the 3D shape so that the deeper layers of the generator learn high-level abstract features (Guan et al, vector representation represents a common feature space between input and output by compressing the data; Fig 4 and 5.A. Generator ¶ 2-3).

Regarding Claim 37, Guan et al in view of Lu et al teach the conditional generative adversarial network (cGAN)-based machine-learning system as in claim 36 (as described above), wherein the one or more processors are further programmed to implement a skip connection between the generator network block and the discriminator network block (Guan et al, skip connections are implemented for direct transfer of data from the generator input to the output directed to the discriminator; Fig 4 and 5.A. Generator ¶ 5).

Regarding Claim 38, Guan et al in view of Lu et al teach the conditional generative adversarial network (cGAN)-based machine-learning system as in claim 37 (as described above), wherein the one or more processors are further programmed to implement a skip connection which extracts a highest energy 2D slice from the 3D shape and concatenate it to a 2D deconvolution layer of the generator network block (Guan et al, data from the skip connections are implemented for direct transfer of 3D mmWave Input data from the generator input to the output as 2D Output with a Heat2Depth map directed to the discriminator; Fig 4 and 5.A. Generator ¶ 5).

Regarding Claim 39, Guan et al in view of Lu et al teach the conditional generative adversarial network (cGAN)-based machine-learning system as in claim 34 (as described above), wherein the one or more processors are further programmed to provide respective quantifier and classifier network blocks of the cGAN-based machine-learning system (Guan et al, the cGAN includes a Generator G and Discriminator to perform quantitative output via the loss function in identification of the features, and accuracy of shape (interpreted broadly for classifying) of car is quantified; Fig 13 and 5.C. GAN Loss Function, 9.2 Quantitative Metrics).

Regarding Claim 40, Guan et al in view of Lu et al teach the conditional generative adversarial network (cGAN)-based machine-learning system as in claim 39 (as described above), wherein the one or more processors are further programmed for the quantifier network block, based on cGAN outputs of the generator network block and ground truth image features of a set of target ground truth shapes, to learn and predict the mean depth (Guan et al, the m channels of 2D depth maps are concatenated to a feature map (thereby mean depth); 5.A. GAN Generator Architecture ¶ 6) and orientation of the shape in a 3D plane (the depths and orientation of the object can be detected from the shape; Fig 10 and 9.1 Qualitative Results ¶ 1, 9.3 Quantitative Results – Orientation).
Regarding Claim 41, Guan et al in view of Lu et al teach the conditional generative adversarial network (cGAN)-based machine-learning system as in claim 39 (as described above), wherein the one or more processors are further programmed for the classifier network block, based on cGAN outputs of the generator network block and supervised classification labels of a set of target ground truth shapes, to learn and automatically classify the target objects into different categories (Lu et al, supervised learning is applied to the cGAN system with labels applied to the lidar patches data for use during training to identify and classify the objects; Fig 1 and 4.2 Cross-modal Supervision by Collocation; 7.6 Semantic Mapping Performance Metrics and Baseline).

Regarding Claim 42, Guan et al in view of Lu et al teach the conditional generative adversarial network (cGAN)-based machine-learning system as in claim 37 (as described above), wherein the one or more processors are further programmed to implement a skip connection (Guan et al, skip connections are implemented for direct transfer of data from the generator input to the output directed to the discriminator; Fig 4 and 5.A. Generator ¶ 5) which enables the generator network block to use feedback from the discriminator network block to adjust weights of the generator network block encoder-decoder architecture encoder-decoder layers to learn and predict accurate 2D shapes (Guan et al, the discriminator and generator are used to calculate the loss function, where a relative weight is determined for the perceptual loss and a difference loss (distance between ground truth and output) and used to update the generator and discriminator to optimize each function; 5.C. GAN Loss Function).

Allowable Subject Matter
Claims 21-23, 43-49 are each objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. In each claim, the prior art was not readily identified to teach the entirety of the claim limitation in combination with the claim on which it depends. Each claim is recited as follows:

Claim 21. The method according to claim 20, wherein the one or more processors are further programmed for the generator and discriminator network blocks to use the L1-norm loss L1(G) and traditional GAN loss L(G) to train the cGAN-based system comprising the generator and discriminator network blocks, with combined cGAN-based system loss determined by: [equation image omitted]

Claim 22. The method according to claim 20, wherein the one or more processors are further programmed for the quantifier network block to determine its loss function: [equation image omitted]

Claim 23. The method according to claim 20, wherein the one or more processors are further programmed for the classifier network block to determine its loss function calculated as: [equation image omitted] where ci and ti are the predicted and actual probabilities of the ith class (categorical output), po and to are the predicted and actual probabilities of a suspicious object (binary output), and the hyper-parameters (λL, λF, λC, λB) represent the networks' focus on shape reconstruction, features prediction, and classification.

Claim 43. The conditional generative adversarial network (cGAN)-based machine-learning system as in claim 34, wherein the network parameters of the generator network block comprise: [image omitted] with 3DC: 3D Convolution (with batch normalization); 2DDC: 2D Deconvolution (with batch normalization); Act. Fcn: Activation Function; LRelu: LeakyRelu; and output layer using linear activation.

Claim 44. The conditional generative adversarial network (cGAN)-based machine-learning system as in claim 34, wherein the network parameters of the discriminator network block comprise: [image omitted] with 3DC: 3D Convolution (with batch normalization); FC: Fully Connected; 2DC: 2D Convolution (with batch norm.); Act. Fcn: Activation Function; LRelu: LeakyRelu; and output layer using sigmoid activation.

Claim 45. The conditional generative adversarial network (cGAN)-based machine-learning system as in claim 39, wherein the network parameters of the quantifier network block comprise: [image omitted] with 2DC: 2D Convolution (with batch normalization); FC: Fully Connected; Act. Fcn: Activation Function; LRelu: LeakyRelu; and output layer using linear activation.

Claim 46. The conditional generative adversarial network (cGAN)-based machine-learning system as in claim 39, wherein the network parameters of the classifier network block comprise: [image omitted] with 2DC: 2D Convolution (with batch normalization); FC: Fully Connected; Categorical class output layer uses softmax and Binary output layer using sigmoid activation functions.

Claim 47. The conditional generative adversarial network (cGAN)-based machine-learning system as in claim 39, wherein the one or more processors are further programmed for the generator and discriminator network blocks to use the L1-norm loss L1(G) and traditional GAN loss L(G) to train the cGAN-based system comprising the generator and discriminator network blocks, with combined cGAN-based system loss determined by: [equation image omitted]

Claim 48. The conditional generative adversarial network (cGAN)-based machine-learning system as in claim 39, wherein the one or more processors are further programmed for the quantifier network block to determine its loss function: [equation image omitted]

Claim 49. The conditional generative adversarial network (cGAN)-based machine-learning system as in claim 39, wherein the one or more processors are further programmed for the classifier network block to determine its loss function calculated as: [equation image omitted] where ci and ti are the predicted and actual probabilities of the ith class (categorical output), po and to are the predicted and actual probabilities of a suspicious object (binary output), and the hyper-parameters (λL, λF, λC, λB) represent the networks' focus on shape reconstruction, features prediction, and classification.

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Guan et al (US 2021/0192762) teach a system and method for a neural network based mmWave imaging technique to transpose the 3D radar data of an object to a 2D representative image including the probability that the detected object is a given object.
Valdes Garcia (US 2022/0026561) teach a system and method for detection of a concealed object based on infrared domain camera imaging and millimeter wave radar imaging to determine emissivity and detect a concealed object. Hamidi et al (3D Near-Field Millimeter Wave Synthetic Aperture Radar Imaging) teach a 3D mmWave analysis for large SAR imaging and mathematical manipulations to transpose the data from 3D to create the 2D synthetic aperture data. Sheen et al (Three-Dimensional Millimeter-Wave Imaging for Concealed Weapon Detection) teach 3D mmWave imaging techniques for detection of concealed weapons and contraband at security locations including transformation of the 3D data to reconstruct a 2D focused image. Nguyen et al (3D imaging for millimeter-wave forward-looking synthetic aperture radar (SAR)) teach a system and method utilizing 3D mmWave SAR technology to provide navigational feedback to avoid obstacles and hazardous terrain in degraded visual environments.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN M BROUGHTON whose telephone number is (571) 270-7380. The examiner can normally be reached Monday-Friday 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATHLEEN M BROUGHTON/
Primary Examiner, Art Unit 2661
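For readers less familiar with the conditional-GAN objective the rejection repeatedly cites (Guan et al, 5.C GAN Loss Function: a generator trained with an adversarial term plus a weighted L1 reconstruction term, and a discriminator scoring real versus generated pairs), the minimal Python sketch below illustrates that generic objective. It is offered only as background under that assumption; it is not the combined loss recited in claims 21-23 and 47-49, whose equations appear in the record only as images, and every name in it is hypothetical.

```python
"""Illustrative background only: a generic adversarial-plus-L1 conditional-GAN
objective of the kind cited from Guan et al (5.C GAN Loss Function). Not the
claimed loss of claims 21-23 / 47-49; all names below are hypothetical."""
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy between discriminator scores in (0, 1) and 0/1 labels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred)))

def generator_loss(d_scores_fake, generated_2d, ground_truth_2d, lam=100.0):
    """Generator objective: fool the discriminator plus a lambda-weighted L1 term."""
    adversarial = bce(d_scores_fake, np.ones_like(d_scores_fake))  # want D(fake) -> 1
    l1 = float(np.mean(np.abs(generated_2d - ground_truth_2d)))    # reconstruction term
    return adversarial + lam * l1

def discriminator_loss(d_scores_real, d_scores_fake):
    """Discriminator objective: push real pairs toward 1 and generated pairs toward 0."""
    return bce(d_scores_real, np.ones_like(d_scores_real)) + \
           bce(d_scores_fake, np.zeros_like(d_scores_fake))
```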

Prosecution Timeline

Apr 21, 2022
Application Filed
Jan 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602915
FEATURE FUSION FOR NEAR FIELD AND FAR FIELD IMAGES FOR VEHICLE APPLICATIONS
2y 5m to grant; granted Apr 14, 2026
Patent 12597233
SYSTEM AND METHOD FOR TRAINING A MACHINE LEARNING MODEL
2y 5m to grant; granted Apr 07, 2026
Patent 12586203
IMAGE CUTTING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
2y 5m to grant; granted Mar 24, 2026
Patent 12567227
METHOD AND SYSTEM FOR UNSUPERVISED DEEP REPRESENTATION LEARNING BASED ON IMAGE TRANSLATION
2y 5m to grant; granted Mar 03, 2026
Patent 12565240
METHOD AND SYSTEM FOR GRAPH NEURAL NETWORK BASED PEDESTRIAN ACTION PREDICTION IN AUTONOMOUS DRIVING SYSTEMS
2y 5m to grant; granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 92% (+8.3%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 263 resolved cases by this examiner. Grant probability derived from career allow rate.
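As a quick check on the arithmetic, the headline projections follow from the career counts shown above: 219 granted of 263 resolved is about 83%, and adding the +8.3-point interview lift lands at roughly 92%. The short sketch below reproduces that rounding; it assumes the dashboard simply adds the lift to the career allow rate, which is not stated, so treat it as illustrative only.

```python
# Minimal sketch reproducing the headline projections from the career counts on
# this page. Assumption (not stated by the dashboard): "With Interview" is the
# career allow rate plus the +8.3 percentage-point interview lift.
granted, resolved = 219, 263
career_allow_rate = granted / resolved               # 0.8327 -> displayed as 83%
interview_lift = 0.083                               # +8.3 percentage points
with_interview = career_allow_rate + interview_lift  # 0.9157 -> displayed as 92%

print(f"Grant probability: {career_allow_rate:.0%}")
print(f"With interview:    {with_interview:.0%}")
```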
