Prosecution Insights
Last updated: April 19, 2026
Application No. 18/521,483

METHOD FOR DETECTING INFRARED SHIP TARGET BASED ON IMPROVED YOLOV7

Non-Final OA: §103, §112
Filed: Nov 28, 2023
Examiner: RUSH, ERIC
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Yangtze University
OA Round: 1 (Non-Final)
Grant Probability: 61% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
Grant Probability With Interview: 97%

Examiner Intelligence

Career Allow Rate: 61% (383 granted / 628 resolved; -1.0% vs Tech Center average)
Interview Lift: +36.2% on resolved cases with an interview
Avg Prosecution: 3y 5m; 32 applications currently pending
Total Applications: 660 across all art units

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 27.7% (-12.3% vs TC avg)
Based on career data from 628 resolved cases; comparisons are against a Tech Center average estimate.
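The headline figures above are internally consistent, which can be checked with a few lines of arithmetic (all numbers are taken directly from this page; only the rounding convention is assumed):

```python
# Cross-check of the examiner statistics quoted on this page.

granted = 383    # applications granted
resolved = 628   # applications resolved (granted plus abandoned)
pending = 32     # applications currently pending
total = 660      # total applications before this examiner

# Career allow rate: 383 / 628 rounds to the 61% shown in the tile.
allow_rate = granted / resolved
print(f"allow rate: {allow_rate:.1%}")  # prints "allow rate: 61.0%"

# Resolved plus pending should account for every application filed.
print(resolved + pending == total)  # prints True
```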

Office Action

Rejections under §103 and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Objections

Claim 1 is objected to because of the following informalities: Line 2 of claim 1 recites, in part, "version 7), comprising following steps:" which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --version 7), comprising the following steps:-- in order to improve the clarity and precision of the claim. Appropriate correction is required.

Claim 2 is objected to because of the following informalities: Lines 4-5 of claim 2 recite, in part, "and then dividing the infrared maritime ship data set processed into a training" which appears to contain grammatical errors and/or minor informalities. The Examiner suggests amending the claim to --and then dividing the processed infrared maritime ship data set [[processed]] into a training-- in order to improve the clarity and precision of the claim. Appropriate correction is required.

Claim 5 is objected to because of the following informalities: Line 4 of claim 5 recites, in part, "and adds a intermediate feature" which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --and adds [[a]] an intermediate feature-- in order to improve the clarity and precision of the claim. Appropriate correction is required.

Claim 8 is objected to because of the following informalities: Line 4 of claim 8 recites, in part, "and the SENet structure is used to extract importance degree of each" which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --and the SENet structure is used to extract an importance degree of each-- in order to improve the clarity and precision of the claim. Appropriate correction is required.

Claim 9 is objected to because of the following informalities: Line 3 of claim 9 recites, in part, "loss function is shown in following formulas:" which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --loss function is shown in the following formulas:-- in order to improve the clarity and precision of the claim. Appropriate correction is required.

Claim 10 is objected to because of the following informalities: Line 5 of claim 10 recites, in part, "a scaling of the training set, verification set and test set" which appears to contain grammatical errors and/or minor informalities. The Examiner suggests amending the claim to --a scaling of the training set, the verification set and the test set-- in order to improve the clarity and precision of the claim. Appropriate correction is required.

Claim 10 is objected to because of the following informalities: Line 7 of claim 10 recites, in part, "and the verification set after an adjusting, and adjusting" which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --and the verification set after [[an adjusting]] an adjustment, and adjusting-- in order to improve the clarity and precision of the claim. Appropriate correction is required.

Claim 10 is objected to because of the following informalities: Line 8 of claim 10 recites, in part, "until the average accuracy change and loss change" which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --until the average accuracy change and the loss change-- in order to improve the clarity and precision of the claim. Appropriate correction is required.

Claim 10 is objected to because of the following informalities: Line 10 of claim 10 recites, in part, "detection model trained; finally, testing the infrared ship" which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --detection model trained; and finally, testing the infrared ship-- in order to improve the clarity and precision of the claim. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitation "the infrared ship target detection model trained;" (emphasis added) in line 8. There is insufficient antecedent basis for this limitation in the claim. The Examiner suggests amending the aforementioned limitation to --[[the]] an infrared ship target detection model trained;-- and subsequent recitations of "the infrared ship target detection model trained" to --the trained infrared ship target detection model [[trained]]--.

Claim 6 recites the limitation "the input feature layer with different resolutions and corresponding weight parameters" (emphasis added) in lines 5-6.
There is insufficient antecedent basis for this limitation in the claim.

Claim 6 recites the limitation "the input feature layer" in line 7. There is insufficient antecedent basis for this limitation in the claim.

Claim 7 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because the variables "p_i^in", "p_(i+1)^in" and "p_(i-1)^out" recited on line 4 and line 5 are undefined and it is therefore unclear as to how "p_i^td" and "p_i^out" are calculated. Therefore, the claim is found to be indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because the variable "L_IoU*" recited on line 5 is undefined and it is therefore unclear as to how "L_WIoUv1" and "L_WIoUv3" are calculated. Therefore, the claim is found to be indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to what "and * represents separating (W_g, H_g) from a current calculation diagram", recited on line 12, means. The Examiner asserts that the instant disclosure merely repeats the aforementioned limitation and provides no clarification regarding what is meant by "separating (W_g, H_g) from a current calculation diagram" nor how such an operation is performed. Thus, the Examiner asserts that it is therefore unclear as to how "R_WIoU", "L_WIoUv1" and "L_WIoUv3" are calculated. Clarification and appropriate correction are required. Therefore, the claim is found to be indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which adjusting "the adjusting" recited on line 11 is referencing. Is it referring to the "adjusting" recited on line 5 of claim 10, the "adjusting" recited on line 7 of claim 10 or the "adjusting" recited on line 7 of claim 10? Clarification and appropriate correction are required. For purposes of examination, the Examiner will treat the claim as referencing one or more of the "adjusting" recited on line 5 of claim 10, the "adjusting" recited on line 7 of claim 10 and the "adjusting" recited on line 7 of claim 10.

Claims 2-5 and 8 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, due to being dependent upon a rejected base claim(s) but would be withdrawn from the rejection if their base claim(s) overcome the rejection.
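For context on the §112 rejections above: the variables the Examiner flags as undefined ("p_i^in", "p_(i+1)^in" and "p_(i-1)^out" in claim 7; "L_IoU*" in claim 9) carry standard definitions in the references cited in the §103 rejections below. The following sketch reproduces those definitions as they appear in Tan et al. (EfficientDet) and Tong et al. (Wise-IoU); whether the application intends exactly these formulas is an assumption, since the claims' formula images are not reproduced in this record.

```latex
% BiFPN weighted feature fusion (Tan et al., EfficientDet):
% P_i^{in} is the input feature at level i, P_i^{td} the intermediate
% top-down feature, and P_i^{out} the final output feature.
\begin{aligned}
P_i^{td}  &= \mathrm{Conv}\!\left(\frac{w_1 \cdot P_i^{in} + w_2 \cdot \mathrm{Resize}\!\left(P_{i+1}^{in}\right)}{w_1 + w_2 + \epsilon}\right),\\
P_i^{out} &= \mathrm{Conv}\!\left(\frac{w_1' \cdot P_i^{in} + w_2' \cdot P_i^{td} + w_3' \cdot \mathrm{Resize}\!\left(P_{i-1}^{out}\right)}{w_1' + w_2' + w_3' + \epsilon}\right),
\qquad \epsilon = 0.0001.
\end{aligned}

% Wise-IoU loss (Tong et al.): the superscript * denotes detaching a
% quantity from the computation graph, which appears to be what the claim
% renders as "separating (W_g, H_g) from a current calculation diagram".
\begin{aligned}
\mathcal{R}_{WIoU}   &= \exp\!\left(\frac{(x - x_{gt})^2 + (y - y_{gt})^2}{\left(W_g^2 + H_g^2\right)^{*}}\right),\\
\mathcal{L}_{WIoUv1} &= \mathcal{R}_{WIoU}\,\mathcal{L}_{IoU},\qquad
\beta = \frac{\mathcal{L}_{IoU}^{*}}{\overline{\mathcal{L}_{IoU}}},\\
\mathcal{L}_{WIoUv3} &= r\,\mathcal{L}_{WIoUv1},\qquad
r = \frac{\beta}{\delta\,\alpha^{\beta - \delta}}.
\end{aligned}
```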
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., "An improved YOLOv7 method for vehicle detection in traffic scenes", IEEE, 35th Chinese Control and Decision Conference (CCDC), May 2023, pages 766-771, in view of Jing Ye, Zhaoyu Yuan, Cheng Qian, and Xiaoqiong Li, "CAA-YOLO: Combined-Attention-Augmented YOLO for Infrared Ocean Ships Detection", Sensors, Vol. 22, Issue 10, May 2022, pages 1-23, herein referred to as "Ye et al.".

With regards to claim 1, Wang et al. disclose a method for detecting a target based on an improved YOLOv7 (You Only Look Once version 7), (Wang et al., Pg. 766 Abstract, Pg. 766 § I ¶ 3, Pg. 767 § II ¶ 1, Pg. 768 Subsection C ¶ 1, Pg. 769 § III - Subsection B, Pg. 770 Subsection D - Section "Discussion and Analysis", Pg. 770 Fig. 7) comprising following steps: obtaining a data set; (Wang et al., Pg. 766 § I ¶ 3, Pg. 769 § III - Pg. 770 Section "Discussion and Analysis") reforming a YOLOv7 network structure based on an MobileNetv3 network and a bidirectional weighted feature pyramid network, (Wang et al., Pg. 766 Abstract, Pg. 766 § I ¶ 3, Pg. 767 § II - Pg. 768 Subsection B, Pg. 770 Section "Discussion and Analysis") and obtaining a target detection model by introducing an attention mechanism and an optimized loss function; (Wang et al., Pg. 766 Abstract, Pg. 766 § I ¶ 3, Pg. 767 § II - Pg. 768 Subsection C, Pg. 770 Section "Discussion and Analysis") training and verifying the target detection model based on the data set to obtain the target detection model trained; (Wang et al., Pg. 766 Abstract, Pg. 766 § I ¶ 3, Pg. 769 § III - Pg. 770 Section "Discussion and Analysis") and detecting a target based on the target detection model trained. (Wang et al., Pg. 766 Abstract, Pg. 766 § I ¶ 3, Pg. 767 § II ¶ 1, Pg. 769 § III - Pg. 770 Section "Discussion and Analysis", Pg. 770 Fig. 7)

Wang et al. fail to disclose explicitly detecting an infrared ship target; an infrared maritime ship data set; an infrared ship target detection model; and detecting a maritime ship.

Pertaining to analogous art, Ye et al. disclose a method for detecting an infrared ship target based on an improved YOLO (You Only Look Once), (Ye et al., Pg. 1 Abstract, Pg. 2 First-Full Paragraph - Second-Full Paragraph, Pg. 5 § 3 - Pg. 7 § 3.3 ¶ 1, Pg. 11 § 4 - Pg. 13 § 4.2.2, Pg. 13 Fig. 9, Pg. 15 Figs. 10 & 11, Pg. 16 Fig. 12, Pg. 20 § 5, Pg. 20 Fig. 16) comprising following steps: obtaining an infrared maritime ship data set; (Ye et al., Pg. 1 Abstract, Pg. 5 § 3 - § 3.1, Pg. 11 § 4 - Pg. 13 § 4.2.2, Pg. 13 Fig. 9) reforming a YOLO network structure, and obtaining an infrared ship target detection model by introducing an attention mechanism; (Ye et al., Pg. 1 Abstract, Pg. 2 First-Full Paragraph - Second-Full Paragraph, Pg. 5 § 3 - Pg. 6 § 3.2 ¶ 1, Pg. 7 § 3.3 - Pg. 9 § 3.4 ¶ 1, Pg. 9 Figs. 4 & 5, Pg. 20 § 5) training and verifying the infrared ship target detection model based on the infrared maritime ship data set to obtain the infrared ship target detection model trained; (Ye et al., Pg. 1 Abstract, Pg. 2 First-Full Paragraph - Second-Full Paragraph, Pg. 11 § 4 - Pg. 13 § 4.2.2, Pg. 15 Figs. 10 & 11, Pg. 16 Fig. 12, Pg. 20 § 5, Pg. 20 Fig. 16) and detecting a maritime ship based on the infrared ship target detection model trained. (Ye et al., Pg. 1 Abstract, Pg. 2 First-Full Paragraph - Second-Full Paragraph, Pg. 11 § 4 - Pg. 13 § 4.2.2, Pg. 15 Figs. 10 & 11, Pg. 16 Fig. 12, Pg. 20 § 5, Pg. 20 Fig. 16)

Wang et al. and Ye et al. are combinable because they are both directed towards image processing systems and methods that utilize machine learning models to detect objects in images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Wang et al. with the teachings of Ye et al. This modification would have been prompted in order to enhance the base device of Wang et al. with the well-known and applicable technique Ye et al. applied to a comparable device. Training a target detection model to detect maritime ships based on an infrared maritime ship data set, as taught by Ye et al., would enhance the base device of Wang et al. by allowing for it to be employed in a wider variety of situations and/or utilized in an increased number and variety of related and applicable applications and/or environments, such as for the detection of target objects in infrared images, thereby improving its overall appeal, usefulness and marketability to potential end-users. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the target detection model of the base device of Wang et al. would be trained on an infrared maritime ship data set and subsequently utilized to detect maritime ships in order to increase the number and variety of applications, environments and/or situations in which it may be employed so as to improve its overall appeal, usefulness and marketability to potential end-users. Therefore, it would have been obvious to combine Wang et al. with Ye et al. to obtain the invention as specified in claim 1.

With regards to claim 2, Wang et al. in view of Ye et al. disclose the method for detecting an infrared ship target based on an improved YOLOv7 according to claim 1, wherein after obtaining the data set, further comprising: carrying out a data enhancement processing on the data set, (Wang et al., Pg. 766 Abstract, Pg. 769 § III - Subsection B ["the mosaic data enhancement is used,"]) and then dividing the data set processed into a training set, a verification set and a test set based on a preset ratio. (Wang et al., Pg. 769 § III - Pg. 770 Section "Discussion and Analysis" ["the data set is divided into training set, validation set and test set according to the ration of 7:2:1"])

Wang et al. fail to disclose explicitly the infrared maritime ship data set. Pertaining to analogous art, Ye et al. disclose wherein after obtaining the infrared maritime ship data set, further comprising: carrying out a data enhancement processing on the data set, (Ye et al., Pg. 5 § 3.1 ¶ 1, Pg. 11 § 4 - Pg. 13 § 4.2.2, Pg. 13 Fig. 9) and then dividing the infrared maritime ship data set processed into a training set and a verification set based on a preset ratio. (Ye et al., Pg. 11 § 4 - Pg. 13 § 4.2.2, Pg. 19 First-Full Paragraph)

With regards to claim 3, Wang et al. in view of Ye et al. disclose the method for detecting an infrared ship target based on an improved YOLOv7 according to claim 1, wherein a process of reforming the YOLOv7 network structure based on the MobileNetv3 network and the bidirectional weighted feature pyramid network comprises: replacing a backbone feature extraction network in the YOLOv7 network structure with the MobileNetv3 network, (Wang et al., Pg. 766 Abstract, Pg. 766 § I ¶ 3, Pg. 767 § II - Pg. 768 Subsection B, Pg. 769 Subsection B - Pg. 770 Section "Discussion and Analysis") and replacing a feature fusion network in the YOLOv7 network structure with the bidirectional weighted feature pyramid network. (Wang et al., Pg. 766 Abstract, Pg. 766 § I ¶ 3, Pg. 767 § II - Pg. 768 Subsection B, Pg. 769 Subsection B - Pg. 770 Section "Discussion and Analysis")

With regards to claim 4, Wang et al. in view of Ye et al. disclose the method for detecting an infrared ship target based on an improved YOLOv7 according to claim 3, wherein the MobileNetv3 network combines a depthwise separable convolution structure and an inverted residual structure, (Wang et al., Pg. 766 Abstract, Pgs. 767-768 Subsection A, Pg. 767 Figs. 1 & 2) and is integrated into a channel attention mechanism network; (Wang et al., Pg. 766 Abstract, Pgs. 767-768 Subsection A, Pg. 767 Figs. 1 & 2) wherein, the depthwise separable convolution structure comprises a depthwise convolution and a pointwise convolution. (Wang et al., Pg. 766 Abstract, Pgs. 767-768 Subsection A, Pg. 767 Figs. 1 & 2)

With regards to claim 5, Wang et al. in view of Ye et al. disclose the method for detecting an infrared ship target based on an improved YOLOv7 according to claim 3, wherein the bidirectional weighted feature pyramid network increases a feature image weight, (Wang et al., Pg. 766 Abstract, Pg. 768 Subsection B, Pg. 768 Fig. 3) introduces a residual strategy, (Wang et al., Pg. 766 Abstract, Pg. 768 Subsection B, Pg. 768 Fig. 3) deletes nodes with low contribution, (Wang et al., Pg. 766 Abstract, Pg. 768 Subsection B, Pg. 768 Fig. 3) and adds a intermediate feature channel. (Wang et al., Pg. 766 Abstract, Pg. 768 Subsection B, Pg. 768 Fig. 3)

With regards to claim 6, Wang et al. in view of Ye et al. disclose the method for detecting an infrared ship target based on an improved YOLOv7 according to claim 5, wherein a process of increasing the feature image weight comprises: the bidirectional weighted feature pyramid network automatically learns weight parameters of each input feature layer, (Wang et al., Pg. 766 Abstract, Pg. 768 Subsection B, Pg. 768 Fig. 3) and then performs a weighted feature fusion on the input feature layer with different resolutions and corresponding weight parameters and performs an output; (Wang et al., Pg. 766 Abstract, Pg. 768 Subsection B, Pg. 768 Fig. 3) wherein the bidirectional weighted feature pyramid network adds a jump connection between the input feature layer and an output feature layer in a same layer. (Wang et al., Pg. 766 Abstract, Pg. 768 Subsection B, Pg. 768 Fig. 3)

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., "An improved YOLOv7 method for vehicle detection in traffic scenes", IEEE, 35th Chinese Control and Decision Conference (CCDC), May 2023, pages 766-771, in view of Jing Ye, Zhaoyu Yuan, Cheng Qian, and Xiaoqiong Li, "CAA-YOLO: Combined-Attention-Augmented YOLO for Infrared Ocean Ships Detection", Sensors, Vol. 22, Issue 10, May 2022, pages 1-23, herein referred to as "Ye et al.", as applied to claim 6 above, and further in view of Mingxing Tan, Ruoming Pang, and Quoc Le, "EfficientDet: Scalable and Efficient Object Detection", IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pages 10778-10787, herein referred to as "Tan et al.".

With regards to claim 7, Wang et al. in view of Ye et al. disclose the method for detecting an infrared ship target based on an improved YOLOv7 according to claim 6. Wang et al. fail to disclose explicitly wherein calculation formulas of the weighted feature fusion is as follows: wherein p_i^td and p_i^out represent intermediate transition features of an i-layer on a top-down path and final output features of an i-layer on a down-top path; w_1 and w_2 respectively represent the weight parameters for calculating an input of a current layer and an input of a next layer of the intermediate transition features; w_1', w_2' and w_3' respectively represent a weight of the input of the current layer, a weight of an output of a transition unit of the current layer and a weight of an output of a previous layer, and ε value is 0.0001, and Conv stands for a convolution operation on a whole calculation result.

Pertaining to analogous art, Tan et al. disclose wherein calculation formulas of the weighted feature fusion is as follows: wherein p_i^td and p_i^out represent intermediate transition features of an i-layer on a top-down path and final output features of an i-layer on a down-top path; w_1 and w_2 respectively represent the weight parameters for calculating an input of a current layer and an input of a next layer of the intermediate transition features; w_1', w_2' and w_3' respectively represent a weight of the input of the current layer, a weight of an output of a transition unit of the current layer and a weight of an output of a previous layer, and ε value is 0.0001, and Conv stands for a convolution operation on a whole calculation result. (Tan et al., Pgs. 10779-10781 § 3.3)

Wang et al. in view of Ye et al. and Tan et al. are combinable because they are all directed towards image processing systems and methods that utilize machine learning models to detect objects in images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Wang et al. in view of Ye et al. with the teachings of Tan et al. This modification would have been prompted in order to substitute the undisclosed weighted feature fusion formulas of Wang et al. for the weighted feature fusion formulas of Tan et al. The weighted feature fusion formulas of Tan et al. could be substituted in place of the undisclosed weighted feature fusion formulas of Wang et al. utilizing well-known techniques in the art and would likely yield predictable results, in that, in the combination, the weighted feature fusion formulas of Tan et al. would be utilized to fuse the feature information of the different feature layers of the combined base device. Therefore, it would have been obvious to combine Wang et al. in view of Ye et al. with Tan et al. to obtain the invention as specified in claim 7.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., "An improved YOLOv7 method for vehicle detection in traffic scenes", IEEE, 35th Chinese Control and Decision Conference (CCDC), May 2023, pages 766-771, in view of Jing Ye, Zhaoyu Yuan, Cheng Qian, and Xiaoqiong Li, "CAA-YOLO: Combined-Attention-Augmented YOLO for Infrared Ocean Ships Detection", Sensors, Vol. 22, Issue 10, May 2022, pages 1-23, herein referred to as "Ye et al.", as applied to claim 1 above, and further in view of Li Xiangrong and Sun Lihui, "Multiscale Infrared Target Detection Based on Attention Mechanism", Infrared Technology, Vol. 45, Issue 7, July 2023, pages 746-754, herein referred to as "Xiangrong et al.". The Examiner notes that the citations to Xiangrong et al. correspond to the provided machine translation.

With regards to claim 8, Wang et al. in view of Ye et al. disclose the method for detecting an infrared ship target based on an improved YOLOv7 according to claim 1, wherein the attention mechanism is an SENet (Squeeze-and-Excitation Networks) structure. (Wang et al., Pg. 767 § II - Pg. 768 Subsection B)

Wang et al. fail to disclose expressly an SENet structure with a soft attention mechanism, and the SENet structure is used to extract importance degree of each feature channel by an active learning method, then give the each feature channel different weights, and finally perform a filtration processing for features in a detection task based on a weight of the each feature channel.

Pertaining to analogous art, Xiangrong et al. disclose wherein the attention mechanism is an SENet (Squeeze-and-Excitation Networks) structure with a soft attention mechanism, (Xiangrong et al., Pg. 747 Left-Hand Column Second-Full Paragraph - § 1.1, Pg. 747 Fig. 1, Pg. 749 § 1.5, Pg. 749 Fig. 4) and the SENet structure is used to extract importance degree of each feature channel by an active learning method, (Xiangrong et al., Pg. 747 Left-Hand Column Second-Full Paragraph - § 1.1, Pg. 747 Fig. 1, Pg. 749 § 1.5, Pg. 749 Fig. 4) then give the each feature channel different weights, (Xiangrong et al., Pg. 747 Left-Hand Column Second-Full Paragraph - § 1.1, Pg. 747 Fig. 1, Pg. 749 § 1.5, Pg. 749 Fig. 4) and finally perform a filtration processing for features in a detection task based on a weight of the each feature channel. (Xiangrong et al., Pg. 747 Left-Hand Column Second-Full Paragraph - § 1.1, Pg. 747 Fig. 1, Pg. 749 § 1.5, Pg. 749 Fig. 4)

Wang et al. in view of Ye et al. and Xiangrong et al. are combinable because they are all directed towards image processing systems and methods that utilize machine learning models to detect objects in images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Wang et al. in view of Ye et al. with the teachings of Xiangrong et al. This modification would have been prompted in order to substitute the SE attention mechanism of Wang et al. for the Squeeze-and-Excitation Networks (SENet) channel attention mechanism of Xiangrong et al. The SENet channel attention mechanism of Xiangrong et al. could be substituted in place of the SE attention mechanism of Wang et al. utilizing well-known techniques in the art and would likely yield predictable results, in that, in the combination, the SENet channel attention mechanism of Xiangrong et al. would be utilized to enable the combined base device to focus on and strengthen the representational power of useful information and features.
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the SENet channel attention mechanism of Xiangrong et al. would be utilized to enable the combined base device to focus on and strengthen the representational power of useful information and features . Therefore, it would have been obvious to combine Wang et al. in view of Ye et al. with Xiangrong et al. to obtain the invention as specified in claim 8. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., “An improved YOLOv7 method for vehicle detection in traffic scenes”, IEEE, 35th Chinese Control and Decision Conference (CCDC), May 2023, pages 766 - 771 in view of Jing Ye, Zhaoyu Yuan, Cheng Qian, and Xiaoqiong Li, “CAA-YOLO: Combined-Attention-Augmented YOLO for Infrared Ocean Ships Detection”, Sensors, Vol. 22, Issue 10, May 2022, pages 1 - 23, herein referred to as “Ye et al.”, as applied to claim 1 above, and further in view of Zanjia Tong, Yuhang Chen, Zewei Xu, and Rong Yu, “Wise-IoU: Bounding Box Regression Loss with Dynamic Focusing Mechanism”, “arXiv, arXiv:2301.10051v3, Apr. 2023, pages 1 - 8, herein referred to as “Tong et al.” . - With regards to claim 9, Wang et al. in view of Ye et al. disclose the method for detecting an infrared ship target based on an improved YOLOv7 according to claim 1. Wang et al. 
fail to disclose explicitly wherein the optimized loss function is shown in following formulas: wherein β is an outlier degree to describe a quality of an anchor frame; r is a nonmonotonic focusing coefficient, α and δ are hyperparameters; R WIoU is a penalty term of a loss function; L IoU is an overlap loss between a prediction frame and the anchor frame; (x,y) are center coordinates of the prediction frame, and (x gt ,y gt ) are center coordinates of a real frame; (W g ,H g ) are a width and a height of a minimum bounding rectangle of the real frame and the prediction frame; and * represents separating (W g ,H g ) from a current calculation diagram. Pertaining to analogous art, Tong et al. disclose wherein the optimized loss function is shown in following formulas: wherein β is an outlier degree to describe a quality of an anchor frame; r is a nonmonotonic focusing coefficient, α and δ are hyperparameters; R WIoU is a penalty term of a loss function; L IoU is an overlap loss between a prediction frame and the anchor frame; (x,y) are center coordinates of the prediction frame, and (x gt ,y gt ) are center coordinates of a real frame; (W g ,H g ) are a width and a height of a minimum bounding rectangle of the real frame and the prediction frame; and * represents separating (W g ,H g ) from a current calculation diagram. (Tong et al., Pg. 1 § I - Pg. 2 Subsection C, Pg. 1 Fig. 1, Pgs. 4 - 6 Subsection C) Wang et al. in view of Ye et al. and Tong et al. are combinable because they are all directed towards image processing systems and methods that utilize machine learning models to detect objects in images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Wang et al. in view of Ye et al. with the teachings of Tong et al. This modification would have been prompted in order to substitute the loss function of Wang et al. 
for the Wise-IoU loss function of Tong et al. The Wise-IoU loss function of Tong et al. could be substituted in place of the loss function of Wang et al. utilizing well-known techniques in the art and would likely yield predictable results, in that, in the combination, the Wise-IoU loss function of Tong et al. would be utilized as the optimized loss function of the combined base device. Therefore, it would have been obvious to combine Wang et al. in view of Ye et al. with Tong et al. to obtain the invention as specified in claim 9. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., “An improved YOLOv7 method for vehicle detection in traffic scenes”, IEEE, 35th Chinese Control and Decision Conference (CCDC), May 2023, pages 766 - 771 in view of Jing Ye, Zhaoyu Yuan, Cheng Qian, and Xiaoqiong Li, “CAA-YOLO: Combined-Attention-Augmented YOLO for Infrared Ocean Ships Detection”, Sensors, Vol. 22, Issue 10, May 2022, pages 1 - 23, herein referred to as “Ye et al.”, as applied to claim 2 above, and further in view of Zheng et al., “A lightweight ship target detection model based on improved YOLOv5 algorithm”, PLoS ONE, 18(4), Apr. 2023, pages 1 - 23. With regards to claim 10, Wang et al. in view of Ye et al. disclose the method for detecting an infrared ship target based on an improved YOLOv7 according to claim 2, wherein a process of training and verifying the infrared ship target detection model comprises: setting an initial learning rate and initial iterations of the target detection model, (Wang et al., Pg. 769 § III - Subsection B) and adaptively adjusting a scaling of the training set, verification set and test set based on a preset input image size; (Wang et al., Pg. 767 Fig.
1, Pg. 768 Subsection B, Pg. 769 § III - Subsection B [“the resolution of the input image is set to 618x618px,”]) and adjusting the initial learning rate and the initial iterations, so as to obtain a target learning rate and target iterations (Wang et al., Pg. 769 § III - Pg. 770 Section “Discussion and Analysis”) and further obtain the target detection model trained; (Wang et al., Pg. 769 § III - Pg. 770 Section “Discussion and Analysis”, Pg. 770 Fig. 7) finally, testing the target detection model trained based on the test set after the adjusting. (Wang et al., Pg. 769 § III - Pg. 770 Section “Discussion and Analysis”, Pg. 770 Fig. 7) Wang et al. fail to disclose explicitly the infrared ship target detection model; cross-verifying an average accuracy change and loss change trend of the infrared ship target detection model based on the training set and the verification set after an adjusting, and adjusting the initial learning rate and the initial iterations until the average accuracy change and loss change tend to be stable . Pertaining to analogous art, Ye et al. disclose wherein a process of training and verifying the infrared ship target detection model comprises: setting an initial learning rate and initial iterations of the infrared ship target detection model, (Ye et al., Pg. 11 § 4 - Pg. 13 § 4.2.2, Pg. 19 First-Full Paragraph - Third-Full Paragraph) and adaptively adjusting a scaling of the training set, verification set and test set based on a preset input image size; (Ye et al., Pg. 11 § 4 - Pg. 13 § 4.2.2) and adjusting the initial learning rate and the initial iterations, so as to obtain a target learning rate and target iterations, (Ye et al., Pg. 11 § 4 - Pg. 13 § 4.2.2, Pg. 19 First-Full Paragraph - Third-Full Paragraph) and further obtain the infrared ship target detection model trained; (Ye et al., Pg. 1 Abstract, Pg. 2 First-Full Paragraph - Second-Full Paragraph, Pg. 11 § 4 - Pg. 13 § 4.2.2, Pg. 15 Figs. 10 & 11, Pg. 16 Fig. 12, Pg. 
19 First-Full Paragraph - Third-Full Paragraph, Pg. 20 § 5, Pg. 20 Fig. 16) finally, testing the infrared ship target detection model trained based on the test set after the adjusting. (Ye et al., Pg. 1 Abstract, Pg. 2 First-Full Paragraph - Second-Full Paragraph, Pg. 11 § 4 - Pg. 13 § 4.2.2, Pg. 15 Figs. 10 & 11, Pg. 16 Fig. 12, Pg. 19 First-Full Paragraph - Third-Full Paragraph, Pg. 20 § 5, Pg. 20 Fig. 16) Ye et al. fail to disclose explicitly cross-verifying an average accuracy change and loss change trend of the infrared ship target detection model based on the training set and the verification set after an adjusting, and adjusting the initial learning rate and the initial iterations until the average accuracy change and loss change tend to be stable . Pertaining to analogous art, Zheng et al. disclose wherein a process of training and verifying the infrared ship target detection model comprises: setting an initial learning rate and initial iterations of the infrared ship target detection model; (Zheng et al., Pg. 1 Abstract, Pg. 12 § 4.2 - Pg. 15 First-Full Paragraph, Pg. 15 Fig. 11) cross-verifying an average accuracy change and loss change trend of the infrared ship target detection model based on the training set and the verification set after an adjusting, (Zheng et al., Pg. 13 § 4.3 - Pg. 15 First-Full Paragraph, Pg. 15 Fig. 11) and adjusting the initial learning rate and the initial iterations until the average accuracy change and loss change tend to be stable, so as to obtain a target learning rate and target iterations, (Zheng et al., Pg. 12 § 4.2 - Pg. 15 First-Full Paragraph, Pg. 15 Fig. 11) and further obtain the infrared ship target detection model trained; (Zheng et al., Pg. 1 Abstract, Pg. 13 § 4.3 - Pg. 15 First-Full Paragraph, Pg. 15 Fig. 1, Pgs. 20 - 21 § 6) finally, testing the infrared ship target detection model trained based on the test set after the adjusting. (Zheng et al., Pg. 1 Abstract, Pg. 13 § 4.3 - Pg. 19 § 5.2, Pg. 18 Fig. 
16, Pgs. 20 - 21 § 6) Wang et al. in view of Ye et al. and Zheng et al. are combinable because they are all directed towards image processing systems and methods that utilize machine learning models to detect objects in images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Wang et al. in view of Ye et al. with the teachings of Zheng et al. This modification would have been prompted in order to enhance the combined base device of Wang et al. in view of Ye et al. with the well-known and applicable technique Zheng et al. applied to a comparable device. Cross-verifying an average accuracy change and loss change trend based on the training set and the verification set after an adjusting and adjusting the initial learning rate and the initial iterations until the average accuracy change and loss change tend to be stable, as taught by Zheng et al., would enhance the combined base device by improving its ability to effectively and reliably obtain a sufficiently trained target detection model exhibiting a high level of performance since the target detection model would be trained until a point where subsequent training does not yield any appreciable amount of improvement in performance. Furthermore, this modification would have been prompted by the teachings and suggestions of Wang et al. that their dataset is divided into training, validation and test sets and that a stochastic gradient descent optimization strategy is used for training, see at least page 769 section III - subsection B of Wang et al. Moreover, this modification would have been prompted by the teachings and suggestions of Ye et al. that their data set is split into training and validation sets and that a stochastic gradient descent optimization and cosine learning rate decay strategy is used for training, see at least pages 12 - 13 section 4.2.1 of Ye et al.
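The tuning procedure recited in claim 10 (adjust the learning rate and iteration budget, cross-verify the average-accuracy and loss trends on the verification set, and stop once both tend to be stable) can be sketched as follows. This is a minimal illustrative sketch with hypothetical names, not code from Wang, Ye, or Zheng; `train_eval` stands in for any train-then-validate step, and the halving/growth adjustment rule and tolerance are assumptions.

```python
# Illustrative sketch of the claimed tune-until-stable procedure.
# All names and the adjustment rule are hypothetical, not from any cited reference.

def changes_stable(history, window=3, tol=1e-3):
    """True once the last `window` metric deltas are all below `tol`."""
    if len(history) <= window:
        return False
    deltas = [abs(history[i] - history[i - 1]) for i in range(-window, 0)]
    return all(d < tol for d in deltas)

def tune_until_stable(train_eval, lr=0.01, iters=100, max_rounds=20):
    """Adjust learning rate / iterations until mAP and loss stabilize.

    `train_eval(lr, iters)` is assumed to train on the training set and
    return (mAP, loss) measured on the verification set.
    """
    map_hist, loss_hist = [], []
    for _ in range(max_rounds):
        m, l = train_eval(lr, iters)
        map_hist.append(m)
        loss_hist.append(l)
        # cross-verify both trends; stop when neither metric is still moving
        if changes_stable(map_hist) and changes_stable(loss_hist):
            break
        lr *= 0.5      # simple decay-style learning-rate adjustment
        iters += 50    # grow the iteration budget
    return lr, iters, map_hist, loss_hist
```

The stopping rule mirrors the claim language "until the average accuracy change and loss change tend to be stable": tuning halts once recent metric deltas fall below a tolerance, after which the resulting target learning rate and target iterations would be used for the final model tested on the test set.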
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that an average accuracy change and loss change trend would be cross-verified based on the training set and the verification set after an adjusting and that the initial learning rate and the initial iterations would be adjusted until the average accuracy change and loss change tend to be stable so as to enhance the ability of the combined base device to effectively and reliably obtain a sufficiently trained target detection model exhibiting a high level of performance. Therefore, it would have been obvious to combine Wang et al. in view of Ye et al. with Zheng et al. to obtain the invention as specified in claim 10. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Tang et al. U.S. Publication No. 2024/0365004 A1; which is directed towards an image processing system and method for detecting optical codes, wherein a machine learning model based on a You Only Look Once version 5 (YOLOv5) network and a weighted bi-directional feature pyramid network (BiFPN) are utilized to detect optical codes in images. Zhang et al. U.S. Publication No. 2024/0005759 A1; which is directed towards an image processing system and method for detecting smoke, wherein a You Only Look Once version 5 (YOLOv5) network model is reformed by replacing its backbone network with a backbone network of a Mobilenetv3 network and a Squeeze-and-Excitation Network (SENet) and the reformed YOLOv5 network model is utilized to detect smoke in images. 
Ronglu Jin, Yidong Xu, Wei Xue, Beiming Li, Yingwei Yang, and Wenjian Chen, “An Improved Mobilenetv3-Yolov5 Infrared Target Detection Algorithm Based on Attention Distillation”, International Conference on Advanced Hybrid Information Processing, 2022, pages 266 - 279; which is directed towards reforming a You Only Look Once version 5 (YOLOv5) network model and training the reformed network model to detect targets in infrared images. Zhifei Wei, Jiangtao Zhao, Xiaochen Chen, Aihua Wang, Fang Li, and Yifan Gu, “Infrared Target Detection Based on the Fusion of Attention Mechanism and YOLOv5”, IEEE, 2nd International Conference on Electrical Engineering, Big Data and Algorithms (EEBDA), Feb. 2023, pages 933 - 936; which is directed towards integrating an attention mechanism into a You Only Look Once version 5 (YOLOv5) network model and utilizing the integrated network model to detect targets in infrared images. Huanlong Zhang, Qifan Du, Qiye Qi, Jie Zhang, Fengxian Wang, and Miao Gao, “A recursive attention-enhanced bidirectional feature pyramid network for small object detection”, Multimedia Tools and Applications, Vol. 82, Apr. 2023, pages 13999 - 14018; which is directed towards improving the detection accuracy of the Single Shot MultiBox Detector (SSD) method by incorporating an Attention-Enhanced Bidirectional Feature Pyramid Network (A-BiFPN). Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC RUSH whose telephone number is (571) 270-3017. The examiner can normally be reached 9am - 5pm Monday - Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ERIC RUSH/ Primary Examiner, Art Unit 2677
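For reference, the Wise-IoU v3 loss at issue in the claim 9 rejection can be sketched in Python. This is an illustrative reimplementation of the formulas published in Tong et al.; the function name is invented, the code operates on plain floats rather than detached tensors, and the α/δ defaults are commonly quoted values that should be treated as assumptions rather than claim limitations.

```python
import math

def wise_iou_v3(iou, xy, xy_gt, wg, hg, mean_liou, alpha=1.9, delta=3.0):
    """Illustrative single-box Wise-IoU v3 loss (after Tong et al.).

    iou        : IoU between the prediction frame and the anchor/real frame
    xy, xy_gt  : center coordinates of the prediction and real frames
    wg, hg     : width/height of their minimum bounding rectangle
    mean_liou  : running mean of L_IoU over training (for the outlier degree)

    In a full implementation (W_g, H_g) and the outlier-degree numerator are
    detached from the computation graph (the "*" in the claim); plain floats here.
    """
    l_iou = 1.0 - iou                                   # overlap loss L_IoU
    (x, y), (xgt, ygt) = xy, xy_gt
    # distance-based penalty term R_WIoU over the enclosing-box diagonal
    r_wiou = math.exp(((x - xgt) ** 2 + (y - ygt) ** 2) / (wg ** 2 + hg ** 2))
    beta = l_iou / mean_liou                            # outlier degree
    r = beta / (delta * alpha ** (beta - delta))        # nonmonotonic focusing
    return r * r_wiou * l_iou
```

A perfectly aligned, fully overlapping box (IoU = 1) yields zero loss, while low-quality anchors (large β) receive a down-weighted gradient through the nonmonotonic focusing coefficient r.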

Prosecution Timeline

Nov 28, 2023
Application Filed
Mar 25, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586229
COMPUTER IMPLEMENTED METHODS AND DEVICES FOR DETERMINING DIMENSIONS AND DISTANCES OF HEAD FEATURES
2y 5m to grant Granted Mar 24, 2026
Patent 12548292
METHOD AND SYSTEM FOR IDENTIFYING REFLECTIONS IN THERMAL IMAGES
2y 5m to grant Granted Feb 10, 2026
Patent 12548395
SYSTEMS, METHODS AND DEVICES FOR MONITORING BETTING ACTIVITIES
2y 5m to grant Granted Feb 10, 2026
Patent 12541856
MASKING OF OBJECTS IN AN IMAGE STREAM
2y 5m to grant Granted Feb 03, 2026
Patent 12518504
METHOD FOR CALIBRATING AN OBJECT RE-IDENTIFICATION SOLUTION IMPLEMENTING AN ARRAY OF A PLURALITY OF CAMERAS
2y 5m to grant Granted Jan 06, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
61%
Grant Probability
97%
With Interview (+36.2%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 628 resolved cases by this examiner. Grant probability derived from career allow rate.
