DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Applicant’s amendments filed on 12/16/2025 have been entered and made of record:
Currently pending Claim(s)
1–5 and 8–20
Independent Claim(s)
1 and 10
Amended Claim(s)
1–5, 8, and 10–13
Canceled Claim(s)
6 and 7
Withdrawn Claim(s)
14–20
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 12/16/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Response to Arguments
This office action is responsive to Applicant’s Arguments/Remarks Made in an Amendment received on 12/16/2025.
In view of the drawing amendments [Remarks] filed on 12/16/2025, the drawing objections have been withdrawn.
The claim amendments [Remarks] filed on 12/16/2025 with respect to the 35 U.S.C. 112(b) rejections of claims 1–13 have been carefully considered, and the claim rejections under 35 U.S.C. 112(b) are withdrawn.
Regarding the rejections made under 35 U.S.C. 103, Applicant’s Arguments/Remarks with respect to independent claims 1 and 10, on the bottom of page 8 to the top of page 11, have been fully considered and are persuasive. Therefore, the rejections have been withdrawn. However, upon further consideration, a new ground(s) of rejection is made with newly cited art Huang et al. (“Multiple Instance Learning Convolutional Neural Networks for Fine-Grained Aircraft Recognition”) and Wu et al. (“Aircraft Recognition in High-Resolution Optical Satellite Remote Sensing Images”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, and 9–12 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (Wu, Qichang, et al. "Aircraft recognition in high-resolution optical satellite remote sensing images." IEEE Geoscience and Remote Sensing Letters 12.1 (2014): 112-116) (hereafter, “Wu”) in view of Huang et al. (Huang, Xiaolan, et al. "Multiple instance learning convolutional neural networks for fine-grained aircraft recognition." Remote Sensing 13.24 (2021): 5132) (hereafter, “Huang”) and further in view of Deng et al. (US 10,290,219 B2) (hereafter, “Deng”).
Regarding claim 1, Wu discloses a method for aircraft detection [we propose a new aircraft recognition approach that can recognize aircraft, pg. 112, Abstract], comprising: capturing camera image data of a new aircraft [we evaluate the proposed method on an image set collected from panchromatic 0.6-m resolution Quickbird imagery, pg. 115, left column, III. Experiment, first paragraph]; generating a segmented aircraft mask [Table I; Table I shows the seven types of aircraft for recognition. In this table, each row includes four testing samples of a type and the template corresponding to the type, pg. 115, left column, III. Experiment, second paragraph ... for each type, we use a binary image with a target of this type in the center upright as the template of this type, pg. 113, B. Reconstruction-Based Similarity Measure, right column, second paragraph]; segmenting the captured camera image data of the new aircraft into body part segmentation data [Figure 2; we generate segments with four-scale segmentation, and the segmentation in each scale is based on the number of segments in an image as a parameter, i.e., 30, 60, 90, and 120, pg. 115, left column, III. Experiment, fourth paragraph ... multiscale segmentation is used to obtain a collection of segments with different scales, pg. 113, right column, C. Target Representation, first paragraph ... in the tree-cut algorithm, the segmentation begins at the first level, and then each segment is split into α subjects in the following levels iteratively by using the normalized-cut algorithm, pg. 113, right column, C. Target Representation, fourth paragraph].
[Wu reference: example of aircraft template images from Table I (top) and the segmentation tree of Figure 2 (bottom)]
Wu fails to explicitly disclose classifying the body part segmentation data into a plurality of classes by classifying images of those body parts against a reference dataset of aircraft body parts; analyzing each class of body part segmentation data to predict an aircraft type for the new aircraft using a plurality of prediction sub-engines, wherein the prediction sub-engines each specialize in predicting an aircraft type based on one classification factor and each prediction sub-engine makes a prediction based on the aircraft body part it is analyzing; determining the aircraft type of the new aircraft based on the prediction analysis; and generating aircraft specific docking guidance for the new aircraft based on the determined aircraft type.
However, Huang teaches classifying the body part segmentation data into a plurality of classes [Figure 6c; we introduce the generalized MIL paradigm, which supposes a bag label is inferred by multiple instance concepts C = {c1, c2, ..., c4}(ci: x→Ω), to optimize the standard MIL method in aircraft recognition. N(X, ci) signifies the number of instances corresponding to ci in the bag X ... which indicates that an image marked as positive contains a positive instance referring to the instance concept ci at least. The instance concept represents the sub-semantics of an aircraft, such as head, tail, and wing, pg. 6, 2.1. Problem Statement, second paragraph] by classifying images of those body parts against a reference dataset of aircraft body parts [we attempt to introduce semi-supervised learning. Semi-supervised learning is dedicated to extracting explicit semantics by marked labels and mining implicit information with unlabeled samples ... the input data are composed of labeled images and part of unlabeled images, pg. 2, 1. Introduction, second paragraph]; analyzing each class of body part segmentation data to predict an aircraft type for the new aircraft using a plurality of prediction sub-engines, wherein the prediction sub-engines each specialize in predicting an aircraft type based on one classification factor and each prediction sub-engine makes a prediction based on the aircraft body part it is analyzing [Figure 9 & 17; after obtaining instance-level features driven by instance loss, we handle the MIL pooling part to anticipate labels, which contain the MIL classifier ... in the MIL pooling part, scoring instance blocks are employed by several 1x1 convolutions as an instance classifier ... N represents the number of feature channels, and C denotes the number of types. Si,j,c indicates the instance score ... and yc signifies the label score of cth channel position, pg. 10, 2.4. MIL Pooling Part, first paragraph]; determining the aircraft type of the new aircraft based on the prediction analysis [Table II; Figure 9 & 11c; the MIL pooling function aims to aggregate the instance scores into object probabilities, pg. 10, 2.4. MIL Pooling Part, first paragraph ... after obtaining the scores of multiple instances, this junction jointly determines the aircraft label placed on instance sub-semantics, pg. 17, 3.4. Comparative Experiment of the Standard MIL Networks and Generalized MIL Networks, third paragraph].
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu’s reference by incorporating the teachings of Huang with classification and prediction to reduce the prediction error in aircraft recognition, as recognized by Huang [pg. 17, 3.4. Comparative Experiment of the Standard MIL Networks and Generalized MIL Networks, second paragraph].
Neither Wu nor Huang appears to explicitly disclose generating aircraft specific docking guidance for the new aircraft based on the determined aircraft type.
However, Deng teaches generating aircraft specific docking guidance for the new aircraft based on the determined aircraft type [Figure 11; the berth guidance information, such as the specific position of the aircraft determined by the aircraft positioning step S5, including deviation to the left or right 7001 and the distance away from the stop line 7003, are displayed on the display device in real time ... the aircraft type information 7004 verified by the aircraft identification and identity verification step S6 is also displayed on the display device in real time, for pilots to observe the aircraft’s route, Col 25, line 15-19; line 20-23].
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu in view of Huang by incorporating the teachings of Deng with specific docking guidance to improve the safety of aircraft docking, as recognized by Deng [Col 25, line 23-24].
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Deng with Wu and Huang to obtain the invention as specified in claim 1.
Regarding claim 5, which incorporates claim 1, Wu fails to explicitly disclose wherein segmenting the captured camera image data of the new aircraft includes predicting to which body part of the aircraft each pixel belongs.
However, Huang teaches wherein segmenting the captured camera image data of the new aircraft includes predicting to which body part of the aircraft each pixel belongs [Figure 6c; Figure 6c shows the attention distribution derived from a generalized MIL network, which can observe the response distribution of typical components and generate the instance-level attention distribution mode, pg. 6, 2.1. Problem Statement, third paragraph ... N(X, ci) signifies the number of instances corresponding to ci in the bag X ... which indicates that an image marked as positive contains a positive instance referring to the instance concept ci at least. The instance concept represents the sub-semantics of an aircraft, such as head, tail, and wing, pg. 6, 2.1. Problem Statement, second paragraph].
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu’s reference and incorporate the teachings of Huang for more accurate visualization results, as recognized by Huang.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Huang with Wu to obtain the invention as specified in claim 5.
Regarding claim 9, which incorporates claim 1, Wu fails to explicitly disclose wherein the classification factor includes a weighting factor that values a particular classification factor over other classification factors.
However, Huang teaches wherein the classification factor includes a weighting factor that values a particular classification factor over other classification factors [Figure 8; the index of each group selects different channel combinations to generate instance masks. The weight of the selected ones is adjusted to 1, and the remainder are adjusted to 0. In this way, instance masks of m groups with specific channel suppression can be received, pg. 8, 2.3. Instance Conversion Part (Instance Loss), second paragraph].
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu in view of Huang and incorporate the teachings of Huang to reduce the sensitivity of abnormal instances and accurately predict labels, as recognized by Huang.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Huang with Wu to obtain the invention as specified in claim 9.
Regarding claim 10, Wu discloses a method for aircraft detection [we propose a new aircraft recognition approach that can recognize aircraft, pg. 112, Abstract], comprising: capturing camera image data of a new aircraft [we evaluate the proposed method on an image set collected from panchromatic 0.6-m resolution Quickbird imagery, pg. 115, left column, III. Experiment, first paragraph]; generating a segmented aircraft mask [Table I; Table I shows the seven types of aircraft for recognition. In this table, each row includes four testing samples of a type and the template corresponding to the type, pg. 115, left column, III. Experiment, second paragraph ... for each type, we use a binary image with a target of this type in the center upright as the template of this type, pg. 113, B. Reconstruction-Based Similarity Measure, right column, second paragraph]; segmenting the captured camera image data of the new aircraft into body part segmentation data [Figure 2; we generate segments with four-scale segmentation, and the segmentation in each scale is based on the number of segments in an image as a parameter, i.e., 30, 60, 90, and 120, pg. 115, left column, III. Experiment, fourth paragraph ... multiscale segmentation is used to obtain a collection of segments with different scales, pg. 113, right column, C. Target Representation, first paragraph ... in the tree-cut algorithm, the segmentation begins at the first level, and then each segment is split into α subjects in the following levels iteratively by using the normalized-cut algorithm, pg. 113, right column, C. Target Representation, fourth paragraph].
Wu fails to explicitly disclose receiving camera image data of an aircraft and a scene having a number of non-aircraft elements within a field of view of a camera while the aircraft is approaching or in a bridge area of an airport; removing the aircraft from the scene; classifying the body part segmentation data into a plurality of classes by classifying images of those body parts against a reference dataset of aircraft body parts; analyzing each class of body part segmentation data to predict an aircraft type for the new aircraft using a plurality of prediction sub-engines, wherein the prediction sub-engines each specialize in predicting an aircraft type based on one classification factor and each prediction sub-engine makes a prediction based on the aircraft body part it is analyzing; determining the aircraft type of the new aircraft based on the prediction analysis; and generating aircraft specific docking guidance for the new aircraft based on the determined aircraft type.
However, Huang teaches classifying the body part segmentation data into a plurality of classes [Figure 6c; we introduce the generalized MIL paradigm, which supposes a bag label is inferred by multiple instance concepts C = {c1, c2, ..., c4}(ci: x→Ω), to optimize the standard MIL method in aircraft recognition. N(X, ci) signifies the number of instances corresponding to ci in the bag X ... which indicates that an image marked as positive contains a positive instance referring to the instance concept ci at least. The instance concept represents the sub-semantics of an aircraft, such as head, tail, and wing, pg. 6, 2.1. Problem Statement, second paragraph] by classifying images of those body parts against a reference dataset of aircraft body parts [we attempt to introduce semi-supervised learning. Semi-supervised learning is dedicated to extracting explicit semantics by marked labels and mining implicit information with unlabeled samples ... the input data are composed of labeled images and part of unlabeled images, pg. 2, 1. Introduction, second paragraph]; analyzing each class of body part segmentation data to predict an aircraft type for the new aircraft using a plurality of prediction sub-engines, wherein the prediction sub-engines each specialize in predicting an aircraft type based on one classification factor and each prediction sub-engine makes a prediction based on the aircraft body part it is analyzing [Figure 9 & 17; after obtaining instance-level features driven by instance loss, we handle the MIL pooling part to anticipate labels, which contain the MIL classifier ... in the MIL pooling part, scoring instance blocks are employed by several 1x1 convolutions as an instance classifier ... N represents the number of feature channels, and C denotes the number of types. Si,j,c indicates the instance score ... and yc signifies the label score of cth channel position, pg. 10, 2.4. MIL Pooling Part, first paragraph]; determining the aircraft type of the new aircraft based on the prediction analysis [Table II; Figure 9 & 11c; the MIL pooling function aims to aggregate the instance scores into object probabilities, pg. 10, 2.4. MIL Pooling Part, first paragraph ... after obtaining the scores of multiple instances, this junction jointly determines the aircraft label placed on instance sub-semantics, pg. 17, 3.4. Comparative Experiment of the Standard MIL Networks and Generalized MIL Networks, third paragraph].
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu’s reference by incorporating the teachings of Huang with classification and prediction to reduce the prediction error in aircraft recognition, as recognized by Huang [pg. 17, 3.4. Comparative Experiment of the Standard MIL Networks and Generalized MIL Networks, second paragraph].
Neither Wu nor Huang appears to explicitly disclose receiving camera image data of an aircraft and a scene having a number of non-aircraft elements within a field of view of a camera while the aircraft is approaching or in a bridge area of an airport; removing the aircraft from the scene; generating aircraft specific docking guidance for the new aircraft based on the determined aircraft type.
However, Deng teaches receiving camera image data of an aircraft and a scene having a number of non-aircraft elements within a field of view of a camera [Figure 1; the photographing device is installed behind a stop line 42 of an aircraft berth ground 4, preferably aiming at a guide line 41, which a height of the installation place higher than the body of an aircraft 5 ... the central processing device 2 may be a calculating device which is capable of receiving data, processing data, storing data, generating image data, Col 9, line 1-4; line 7-10] while the aircraft is approaching or in a bridge area of an airport [step S3 is an aircraft capturing step. In order to capture a docking aircraft for subsequent guiding operation, the images after pre-processing step S2 require further analysis, to accurately recognize whether an aircraft appears in these images, Col 13, line 34-39]; removing the aircraft from the scene [since the aircraft exists in the foreground of the images, in order to accurately capture the aircraft from the images, the background of the image should be eliminated firstly to erase noise in the images, Col 13, line 44-47]; and generating aircraft specific docking guidance for the new aircraft based on the determined aircraft type [Figure 11; the berth guidance information, such as the specific position of the aircraft determined by the aircraft positioning step S5, including deviation to the left or right 7001 and the distance away from the stop line 7003, are displayed on the display device in real time ... the aircraft type information 7004 verified by the aircraft identification and identity verification step S6 is also displayed on the display device in real time, for pilots to observe the aircraft’s route, Col 25, line 15-19; line 20-23].
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu in view of Huang by incorporating the teachings of Deng with specific docking guidance to improve the safety of aircraft docking, as recognized by Deng [Col 25, line 23-24].
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Deng with Wu and Huang to obtain the invention as specified in claim 10.
Regarding claim 11, which incorporates claim 10, neither Wu nor Huang appears to explicitly disclose wherein generating aircraft specific docking guidance for the new aircraft includes a direction to a particular lead-in line in the bridge area.
However, Deng teaches wherein generating aircraft specific docking guidance for the new aircraft includes a direction to a particular lead-in line in the bridge area [the berth guidance information, such as the specific position of the aircraft determined by the aircraft positioning step S5, including deviation to left or right 7001 and the distance away from the stop line 7003, are displayed on the display device in real time, Col 25, line 15-19 ... the aircraft front wheel deviation degree calculation step S52 is to determine whether the front wheel of the aircraft is on the guide line, or left or right with respect to the guide line, Col 20, line 28-31].
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu in view of Huang by incorporating the teachings of Deng with specific docking guidance to improve the safety of aircraft docking, as recognized by Deng [Col 25, line 25-33].
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Deng with Wu and Huang to obtain the invention as specified in claim 11.
Regarding claim 12, which incorporates claim 10, neither Wu nor Huang appears to explicitly disclose wherein generating aircraft specific docking guidance for the new aircraft includes a direction to a particular stop line in the bridge area.
However, Deng teaches wherein generating aircraft specific docking guidance for the new aircraft includes a direction to a particular stop line in the bridge area [the berth guidance information, such as the specific position of the aircraft determined by the aircraft positioning step S5, including deviation to left or right 7001 and the distance away from the stop line 7003, are displayed on the display device in real time, Col 25, line 15-19 ... the aircraft front wheel actual distance calculating step S53 is for real-time calculating the true distance of the aircraft from the stop line, Col 21, line 46-48].
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu in view of Huang by incorporating the teachings of Deng with specific docking guidance to improve the safety of aircraft docking, as recognized by Deng [Col 25, line 25-33].
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Deng with Wu and Huang to obtain the invention as specified in claim 12.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (“Aircraft recognition in high-resolution optical satellite remote sensing images”) in view of Huang (“Multiple instance learning convolutional neural networks for fine-grained aircraft recognition”) and further in view of Deng (US 10,290,219 B2), as applied above, and Wang et al. (Wang, Wensheng, et al. “A novel method of aircraft detection based on high-resolution panchromatic optical remote sensing images.” Sensors 17.5 (2017): 1047) (hereafter, “Wang”).
Regarding claim 2, which incorporates claim 1, neither Wu, Huang, nor Deng appears to explicitly disclose wherein segmenting the captured camera image data of the new aircraft includes retrieving the total number of pixels present in the mask.
However, Wang teaches wherein segmenting the captured camera image data of the new aircraft includes retrieving the total number of pixels present in the mask [Figure 5 & Equation 10; the other feature FHR describes the ratio of area of each triangle to the whole hull ... Sconvex is the number of pixels in the whole fragment, pg. 7, 2.3.2 New Features for Target Confirmation, second paragraph].
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu in view of Huang and further in view of Deng and incorporate the teachings of Wang to improve the precision of aircraft ROI extraction, as recognized by Wang.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wang with Wu, Huang, and Deng to obtain the invention as specified in claim 2.
Regarding claim 3, which incorporates claim 2, Wu discloses wherein segmenting the captured camera image data of the new aircraft includes counting a total number of pixels associated with entire aircraft shape [the value of pixels in the target is set to be 1, and the value of pixels outside the target is set to be 0 ... where α is the coefficient vector, the entries of which are either 0 or 1, pg. 114, left column, D. Target Reconstruction, second paragraph ... subject to a[i] = 1 or 0, i = 1,2, ..., p where ||∙||0 denotes the l0-norm, which simply counts the number of nonzero entries in a vector, pg. 114, left column, D. Target Reconstruction, fourth paragraph].
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Wu (“Aircraft recognition in high-resolution optical satellite remote sensing images”) in view of Huang (“Multiple instance learning convolutional neural networks for fine-grained aircraft recognition”) and further in view of Deng (US 10,290,219 B2) and Wang (“A novel method of aircraft detection based on high-resolution panchromatic optical remote sensing images”), as applied above, and Liu et al. (US 2019/0147279 A1) (hereafter, “Liu”).
Regarding claim 4, which incorporates claim 2, neither Wu, Huang, Deng, nor Wang appears to explicitly disclose wherein if the total number of pixels are less than a threshold, then the subsequent method steps are skipped and the method begins again with a next video frame to determine when the total pixels are over the threshold indicating a clearly visible aircraft can be processed.
However, Liu teaches wherein if the total number of pixels are less than a threshold, then the subsequent method steps are skipped and the method begins again with a next video frame to determine when the total pixels are over the threshold indicating a clearly visible aircraft can be processed [pixel difference with respect to the background frame is calculated and squared for each frame ... an appropriate threshold value is selected after evaluating both arrays ... the differences are stored and summed and if the sum is lower than a threshold ... the frame is skipped, para 0059, 0070].
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu in view of Huang and further in view of Deng and Wang by incorporating the teachings of Liu to analyze the whole object in the current frame, as recognized by Liu.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Liu with Wu, Huang, Deng, and Wang to obtain the invention as specified in claim 4.
Claims 8 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (“Aircraft recognition in high-resolution optical satellite remote sensing images”) in view of Huang (“Multiple instance learning convolutional neural networks for fine-grained aircraft recognition”) and further in view of Deng (US 10,290,219 B2), as applied above, and Ali et al. (Ali, Syed Faisal, Jafreezal Jaafar, and Aamir Saeed Malik. “Proposed technique for aircraft recognition in intelligent video automatic target recognition system (ivatrs).” 2010 International Conference on Computer Applications and Industrial Electronics. IEEE, 2010) (hereafter, “Ali”).
Regarding claim 8, which incorporates claim 1, neither Wu, Huang, nor Deng appears to explicitly disclose wherein the classification factor is selected from a group of factors including: engine shape, nose shape, wing shape, and tail shape. However, Ali teaches wherein the classification factor is selected from a group of factors including: engine shape [Figure 3; aircraft engines can be identified using the same method as we have used in recognition of wings. The number of engines installed in the aircraft, their position, dimensions, their size, length ... will lead us to the recognition of aircraft type, pg. 177, left column, B. Engine, first paragraph], nose shape [fuselage is the lower area of aircraft which is distribute in three major areas; the nose, the mid, and the rear ... the shape of nose can be pointed straight, pointed down, curve or blunt nose, pg. 177, left column, C. Fuselage, first paragraph, second paragraph], wing shape [wings can be identified by the location where they are installed with the body of the aircraft and the shape they bear, pg. 176, right column, A. Wings/Rotars, second paragraph], and tail shape [number of tails, position, tail fin shape, and slant or angled are the features from tail that can be observed from distance and based on these we can identify the aircraft, pg. 177, right column, D. Tail, first paragraph].
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu in view of Huang and further in view of Deng by incorporating the teachings of Ali with different factors to minimize the time of target detection, as recognized by Ali [pg. 175, right column, III. Automating Traditional (WEFT) Technique, second paragraph].
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ali with Wu, Huang, and Deng to obtain the invention as specified in claim 8.
Regarding claim 13, which incorporates claim 10, Ali discloses wherein segmenting the captured camera image data of the new aircraft into body part segmentation data includes one or more body part segments selected from the group [Figure 3; will be extracted and segmented using multiple segmentation techniques, pg. 176, left column, A. Method of Extracting Features, first paragraph] including: engine shape [Figure 3; aircraft engines can be identified using the same method as we have used in recognition of wings. The number of engines installed in the aircraft, their position, dimensions, their size, length ... will lead us to the recognition of aircraft type, pg. 177, left column, B. Engine, first paragraph], nose shape [fuselage is the lower area of aircraft which is distribute in three major areas; the nose, the mid, and the rear ... the shape of nose can be pointed straight, pointed down, curve or blunt nose, pg. 177, left column, C. Fuselage, first paragraph, second paragraph], wing shape [wings can be identified by the location where they are installed with the body of the aircraft and the shape they bear, pg. 176, right column, A. Wings/Rotars, second paragraph], and tail shape [number of tails, position, tail fin shape, and slant or angled are the features from tail that can be observed from distance and based on these we can identify the aircraft, pg. 177, right column, D. Tail, first paragraph].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu in view of Huang and further in view of Deng by incorporating the teachings of Ali in order to achieve higher accuracy, as recognized by Ali [pg. 176, left column, A. Method of Extracting Features, first paragraph].
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ali with Wu, Huang, and Deng to obtain the invention as specified in claim 13.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Explicable Fine-Grained Aircraft Recognition Via Deep Part Parsing Prior Framework for High-Resolution Remote Sensing Imagery to Chen et al. discloses a deep learning explicable aircraft recognition framework based on a part parsing prior (APPEAR) that models the aircraft as a pixel-level part parsing prior and divides it into five parts (nose, left wing, right wing, fuselage, and tail) to determine the aircraft type.
Automatic Detection of Geospatial Objects Using Taxonomic Semantics to Sun et al. discloses a method for detecting geospatial objects by representing an image as a segmentation tree and applying a multiscale segmentation algorithm, matching the trees to common subcategories, organizing the subcategories to learn taxonomic semantics of the objects categories, and performing detection and segmentation in different images.
Aircraft Recognition Based on Landmark Detection in Remote Sensing Images to Zhao et al. discloses a method for aircraft type recognition with a convolutional neural network called a vanilla network.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TOLUWANI MARY-JANE IJASEUN whose telephone number is (571)270-1877. The examiner can normally be reached Monday - Friday 7:30AM-4PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TOLUWANI MARY-JANE IJASEUN/Examiner, Art Unit 2676
/Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676