DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 05/06/2024 and 10/02/2025 have been considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process (a concept performed in the human mind, including observation, evaluation, judgment, and opinion), certain methods of organizing human activity, and mathematical concepts and calculations. Independent claims 1, 11, and 20 recite a system and a method. This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations showing that the exception is applied to a particular technological problem to be solved. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be performed mentally, and no additional features in the claims would preclude them from being performed as such, except for generic computer elements recited at a high level of generality (i.e., processor, memory).
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that the independent claims 1, 11, and 20 are directed to an abstract idea as shown below:
STEP 1: Do the claims fall within one of the statutory categories? YES. Independent claims 1, 11, and 20 are directed to a machine and process.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES, the claims are directed toward a mental process (i.e., an abstract idea).
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).
Independent claims 1, 11, and 20 recite a mental process and a mathematical concept that can be practicably performed in the human mind (or by generic computers or components configured to perform the method) and, therefore, an abstract idea.
Regarding independent claim 1: the limitations recite:
A computing system comprising:
at least one input sensor comprising one or a plurality of cameras (generic computer component);
processing circuitry (generic computer component); and
a memory storing a runway database and executable instructions that, in response to execution by the processing circuitry, cause the processing circuitry to (generic computer component):
collect a plurality of images related to at least an environment from the one or the plurality of cameras (data gathering);
execute a feature extractor to extract features for the plurality of images (extra solution activity);
generate a runway identification (extra solution activity) at least based on the extracted features by matching the extracted features with known runway and runway-associated features of a known runway in the runway database (mental process including observation and evaluation, and can be done mentally in the human mind); and
output the runway identification (presenting results of the analysis).
Regarding independent claim 11: the limitations recite:
A computing method comprising:
collecting a plurality of images related to at least an environment from one or a plurality of cameras (data gathering);
executing a feature extractor to extract features for the plurality of images (extra solution activity);
generating a runway identification (extra solution activity) at least based on the extracted features by matching the extracted features with known runway features of a known runway in a runway database (mental process including observation and evaluation, and can be done mentally in the human mind); and
outputting the runway identification (presenting results of the analysis).
Regarding independent claim 20: the limitations recite:
A computing system comprising (generic computer component):
at least one input sensor comprising one or a plurality of cameras (generic computer component);
processing circuitry (generic computer component); and
a memory storing a runway database and executable instructions that, in response to execution by the processing circuitry cause the processing circuitry to (generic computer component):
collect a plurality of images related to at least a portion of an environment from the one or the plurality of cameras (data gathering);
execute a convolutional neural network to extract features for the plurality of images (extra solution activity);
perform an analysis to match interest points in the extracted features to interest points on a runway or runway marking in a mask in the runway database (mental process including observation and evaluation, and can be done mentally in the human mind);
analyze spatial distributions and expected patterns of runway features to infer likely locations of missing interest points in the extracted features (mathematical concept);
generate a runway identification based on the mask that matches the extracted features (extra solution activity); and
output the runway identification (presenting results of the analysis).
These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind or by a human using pen and paper. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that “can be performed in the human mind, or by a human using a pen and paper” to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, “methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the ‘basic tools of scientific and technological work’ that are open to all.” 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) (“‘[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’” (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
Further, under MPEP 2106.04(a)(2)(I), the courts have at times described mathematical concepts as laws of nature, and at other times described these concepts as judicial exceptions without specifying a particular type of exception. Mathematical concepts need not be expressed in mathematical symbols, because “[w]ords used in a claim operating on data to solve a problem can serve the same purpose as a formula.” In re Grams, 888 F.2d 835, 837 and n.1, 12 USPQ2d 1824, 1826 and n.1 (Fed. Cir. 1989). See Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972).
As such, a person could mentally match the extracted features with the features of known runways. Regarding claim 20, the mathematical concept is simply computing the spatial distribution and pattern to determine the missing points. The mere nominal recitation that the various steps are executed by generic computer components (e.g., a processor, system, or memory) does not take the limitations out of the mental process grouping. Thus, the claims recite a mental process and a mathematical concept.
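For illustration only, such matching reduces to a simple nearest-neighbor comparison that a person could perform by inspection; the following minimal Python sketch (all point values hypothetical) is offered solely to show the elementary character of the comparison:

    import numpy as np

    # Hypothetical extracted interest points (pixel coordinates) and
    # database points for a known runway; all values are illustrative.
    extracted = np.array([[120.0, 340.0], [410.0, 338.0], [265.0, 150.0]])
    database = np.array([[118.0, 342.0], [412.0, 336.0], [263.0, 148.0]])

    # For each extracted point, pick the closest database point, a
    # comparison that could in principle be made by observation alone.
    dists = np.linalg.norm(extracted[:, None, :] - database[None, :, :], axis=2)
    matches = dists.argmin(axis=1)   # index of best database match per point
    print(matches, dists.min(axis=1))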
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to affect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Independent claims 1, 11, and 20 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. Independent claims 1, 11, and 20 disclose a system, memory, and processor, and generate a runway identification based on the mask that matches the extracted features; these are generic computer components and/or insignificant pre/post-solution extra activity that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer.
These limitations are recited at a high level of generality (i.e., as a general action or change being taken based on the results of the acquiring step) and amount to mere post-solution activity, which is a form of insignificant extra-solution activity. Further, the additional elements are recited generically and operate in their ordinary capacity, such that they do not use the judicial exception in a manner that imposes a meaningful limit on the judicial exception. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No, the claims do not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, that examiners should continue to consider whether an additional element or combination of elements:
adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
Independent claims 1, 11, and 20 do not recite any additional elements that are not well-understood, routine, or conventional. The use of generic computer elements is a routine, well-understood, and conventional process performed by computers.
Thus, because independent claims 1, 11, and 20 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, independent claims 1, 11, and 20 are not directed to eligible subject matter under 35 U.S.C. 101.
Regarding claims 2-10 and 12-19: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process and mathematical concept.
In detail, claims 2-10 depend from claim 1, claims 12-19 depend from claim 11, and they add:
the feature extractor is a machine perception system (claims 2 and 12);
the machine perception system is a vision transformer (claims 3 and 13);
the machine perception system is a convolutional neural network (claims 4 and 14);
applying down-convolutional layers and up-convolutional layers to the images (claims 5 and 15);
specification that the extracted features are locations of interest points (claims 6 and 16);
generating the runway identification by matching the extracted features with known runway features (claims 7 and 17);
specification of what the interest points or features correspond to (claims 8 and 18);
classifying pixels as runway or not runway (claims 9 and 19); and
generating localization data (claim 10).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4, 6-8, 10, 11, 12, 14, 16-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2022/0234752 A1) (hereinafter, “Chen”) in view of Evans et al. (US 2022/0198703 A1) (hereinafter, “Evans”).
Regarding claim 1, Chen discloses a computing system comprising (Paragraph [0007] “FIG. 1 is a schematic diagram of a computer vision system for assisting a landing decision”):
at least one input sensor comprising one or a plurality of cameras (Paragraph [0013] “This embodiment of the system 10 includes, without limitation, a camera 14, a navigation/airport database 16, a processing system 20, an output system 22, aircraft sensors 24,”);
processing circuitry (Paragraph [0013] “processing system 20”); and
a memory storing a runway database and executable instructions that, in response to execution by the processing circuitry, cause the processing circuitry to (Paragraph [0017] “the computer vision system 10 includes a navigation database 16 that stores navigation data used by various avionics systems. The navigation database 16 provides data 52 for the real world position of a target runway.”; Paragraph [0020] “System 10 further includes the processing system 20 including one or more processors that are configured to execute computer programming instructions stored on non-transitory memory”):
collect a plurality of images related to at least an environment from the one or the plurality of cameras (Paragraph [0015] “although other types of camera 14 are contemplated. Other vision cameras are possible including stereo cameras, infrared cameras, etc. The camera 14 can include a plurality of cameras including combinations of different types of cameras such as those described above. The images are generated via single camera or real-time fusion of multiple cameras.”);
execute a feature extractor to extract features for the plurality of images (Paragraph [0022] “object finding function returns a positive or negative indication concerning whether a runway has been found and returns a region of interest (ROI) outline (e.g. a rectangle) encompassing the identified runway. The ROI may be a little larger and contain the convex hull of the runway. The runway identification sub-module 32 further processes, e.g. by contrast filtering and/or edge detection, the ROI to determine runway pixel positions (or other location data in image space) 56 describing the location of the runway in image space”; Paragraph [0026] “The data 52 may be three or four corners of the runway in some embodiments but other trackable features of the runway may be used provided the location of corresponding features can be extracted by the computer vision processing module 26.”);
generate a runway identification at least based on the extracted features (Paragraph [0022] “feature or pattern matching algorithms could be used to identify the runway. The object finding function returns a positive or negative indication concerning whether a runway has been found and returns a region of interest (ROI) outline”; Paragraph [0044] “In step 230, data 52 for the real world position of the runway is retrieved from the navigation database”) [by matching the extracted features with known runway and runway-associated features of a known runway in the runway database]; and
output the runway identification (Paragraph [0022] “three or four corner points of the runway in image or pixel coordinates are output by the runway identification sub-module 32.”).
However, Chen fails to teach generating the runway identification by matching the extracted features with known runway and runway-associated features of a known runway in the runway database.
Evans teaches matching the extracted features with known runway and runway-associated features of a known runway in the runway database (Paragraph [0024] “applying the mask to a corner detector to detect interest points on the mask and thereby the runway or the runway marking in the image; matching the interest points on the runway or the runway marking in the image, to corresponding points on the runway or the runway marking that have known runway-framed local coordinates; and performing a perspective-n-point (PnP) estimation, using the interest points and the known runway-framed local coordinates”).
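For context only, and not as a characterization of either reference's actual implementation, the matching-plus-PnP technique Evans describes may be sketched as follows (Python with OpenCV; the runway corner coordinates and camera intrinsics are assumed, hypothetical values):

    import cv2
    import numpy as np

    # Hypothetical runway-framed 3D coordinates of four runway corners
    # (meters, runway plane at y=0) and their matched pixel locations.
    object_pts = np.array([[0, 0, 0], [45, 0, 0], [45, 0, 3000], [0, 0, 3000]],
                          dtype=np.float64)
    image_pts = np.array([[280, 420], [360, 420], [330, 200], [310, 200]],
                         dtype=np.float64)

    # Assumed pinhole camera intrinsics (focal length, principal point).
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    # Perspective-n-point estimation: recover the camera (aircraft) pose
    # relative to the runway from the 2D-3D correspondences.
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    print(ok, rvec.ravel(), tvec.ravel())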
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Chen to include matching the extracted features with known runway and runway-associated features of a known runway in the runway database, as taught by Evans. The motivation for doing so would have been to determine the position of an aircraft relative to the runway as suggested by Evans (see Evans, Paragraph [0024]).
Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Evans with Chen to obtain the invention specified in claim 1.
Regarding claim 2, which incorporates claim 1, Chen discloses wherein the feature extractor is a machine perception system (Paragraph [0022] “Runway identification sub-module 32 processes the video frame data 50 and executes an object finding function or algorithm tuned to identify a runway in the video frame data 50. Numerous runway detection algorithms could be constructed… a machine learning classifier is used that is trained with images including a runway. One example classifier is the Cascade Classifier…object finding function returns a positive or negative indication concerning whether a runway has been found and returns a region of interest (ROI) outline (e.g. a rectangle) encompassing the identified runway.”).
Regarding claim 4, which incorporates claim 2, Chen fails to teach wherein the machine perception system is a convolutional neural network.
Evans teaches wherein the machine perception system is a convolutional neural network (Paragraph [0067] “The machine learning model may include a CNN, as well as a fully-connected dense network and an n-DOF regression, such as a 6DOF regression or a 2DOF regression.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Chen to include wherein the machine perception system is a convolutional neural network, as taught by Evans. The motivation for doing so would have been to process full-resolution images as suggested by Evans (see Evans, Paragraph [0067]).
Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Evans with Chen to obtain the invention specified in claim 4.
Regarding claim 6, which incorporates claim 1, Chen fails to teach wherein the extracted features are estimated locations of interest points of runways based on probability estimations at geographic locations.
Evans teaches wherein the extracted features are estimated locations of interest points of runways based on probability estimations at geographic locations (Paragraph [0090] “…runway 202 may be assumed to be a rectangular, level plane at unknown runway-framed local coordinates (x, 0, z), and an unknown orientation along the z/x axis. The runway may also be assumed to have an unknown width. The relation between real world position and rotation and projected representation, then, may be described with… (a, b)=pixel position (x, y); (c, d)=screen center (x, y); o=scaling factor; (p, q, r)=rotation angles around (x, y, z); x=reference point world runway-framed local coordinate x; and z=runway-framed local coordinate z. When reducing roll angle to zero (rotation of the input image to level the horizon), it can be shown that only two points (e.g. two threshold center points) may be needed for a single solution for the lateral and (if the runway length is known) the vertical angular deviation”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Chen to include wherein the extracted features are estimated locations of interest points of runways based on probability estimations at geographic locations, as taught by Evans. The motivation for doing so would have been to determine the position of an aircraft relative to the runway as suggested by Evans (see Evans, Paragraph [0024]).
Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Evans with Chen to obtain the invention specified in claim 6.
Regarding claim 7, which incorporates claim 6, Chen fails to teach wherein the runway identification is generated by matching interest points or features in the extracted features with interest points or features of the known runway in the runway database.
Evans teaches wherein the runway identification is generated by matching interest points or features in the extracted features with interest points or features of the known runway in the runway database (Paragraph [0024] “applying the mask to a corner detector to detect interest points on the mask and thereby the runway or the runway marking in the image; matching the interest points on the runway or the runway marking in the image, to corresponding points on the runway or the runway marking that have known runway-framed local coordinates; and performing a perspective-n-point (PnP) estimation, using the interest points and the known runway-framed local coordinates”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Chen to include wherein the runway identification is generated by matching interest points or features in the extracted features with interest points or features of the known runway in the runway database, as taught by Evans. The motivation for doing so would have been to determine the position of an aircraft relative to the runway as suggested by Evans (see Evans, Paragraph [0024]).
Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Evans with Chen to obtain the invention specified in claim 7.
Regarding claim 8, which incorporates claim 7, Chen discloses wherein the interest points or features in the runway database correspond to at least one of threshold markings (Paragraph [0022] “feature or pattern matching algorithms could be used to identify the runway. The object finding function returns a positive or negative indication concerning whether a runway has been found and returns a region of interest (ROI) outline”), aiming point markings (Note that the claim only requires one of threshold markings, aiming point markings, designation markings, side stripes, or thresholds of registered runways or runway-associated features), designation markings (Paragraph [0017] “data 52 provided by the navigation database 16 may include a runway centerline, a length of the runway, a width of the runway, coordinates for corner points of the runway, coordinates for a center point of the runway and other data describing dimensions and location of the runway.”), side stripes (Note that the claim only requires one of threshold markings, aiming point markings, designation markings, side stripes, or thresholds of registered runways or runway-associated features), or thresholds of registered runways or runway-associated features (Paragraph [0030] “the runway threshold position is taken for the position of the runway and the length of the runway is retrieved from the navigation database”).
Regarding claim 10, which incorporates claim 1, Chen discloses wherein the at least one input sensor comprises at least an altimeter or a magnetometer (Paragraph [0043] “when the aircraft altitude is less than 1000 feet. The aircraft altitude may be determined based on altitude data from a radio altimeter included in the aircraft sensors 24.”);
sensor data from the at least the altimeter or the magnetometer is used to generate localization data (Paragraph [0043] “The aircraft position deviation 64 can then be determined to allow a go-around decision to be output when the deviation is too great.”); and
the localization data is taken into account when generating the runway identification (Paragraph [0043] “when the aircraft 12 is determined to be within a predetermined proximity of the target runway based on a location of the aircraft 12 from the global positioning system module 28 and/or from the aircraft sensors 24.”).
Regarding claim 11, Chen discloses a computing method comprising: collecting a plurality of images related to at least an environment from one or a plurality of cameras (Paragraph [0015] “although other types of camera 14 are contemplated. Other vision cameras are possible including stereo cameras, infrared cameras, etc. The camera 14 can include a plurality of cameras including combinations of different types of cameras such as those described above. The images are generated via single camera or real-time fusion of multiple cameras.”);
executing a feature extractor to extract features for the plurality of images (Paragraph [0022] “object finding function returns a positive or negative indication concerning whether a runway has been found and returns a region of interest (ROI) outline (e.g. a rectangle) encompassing the identified runway. The ROI may be a little larger and contain the convex hull of the runway. The runway identification sub-module 32 further processes, e.g. by contrast filtering and/or edge detection, the ROI to determine runway pixel positions (or other location data in image space) 56 describing the location of the runway in image space”; Paragraph [0026] “The data 52 may be three or four corners of the runway in some embodiments but other trackable features of the runway may be used provided the location of corresponding features can be extracted by the computer vision processing module 26.”);
generating a runway identification at least based on the extracted features (Paragraph [0022] “feature or pattern matching algorithms could be used to identify the runway. The object finding function returns a positive or negative indication concerning whether a runway has been found and returns a region of interest (ROI) outline”; Paragraph [0044] “In step 230, data 52 for the real world position of the runway is retrieved from the navigation database”) [by matching the extracted features with known runway features of a known runway in a runway database]; and
outputting the runway identification (Paragraph [0022] “three or four corner points of the runway in image or pixel coordinates are output by the runway identification sub-module 32.”).
However, Chen fails to teach generating the runway identification by matching the extracted features with known runway features of a known runway in a runway database.
Evans teaches matching the extracted features with known runway features of a known runway in a runway database (Paragraph [0024] “applying the mask to a corner detector to detect interest points on the mask and thereby the runway or the runway marking in the image; matching the interest points on the runway or the runway marking in the image, to corresponding points on the runway or the runway marking that have known runway-framed local coordinates; and performing a perspective-n-point (PnP) estimation, using the interest points and the known runway-framed local coordinates”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Chen to include matching the extracted features with known runway features of a known runway in a runway database, as taught by Evans. The motivation for doing so would have been to determine the position of an aircraft relative to the runway as suggested by Evans (see Evans, Paragraph [0024]).
Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Evans with Chen to obtain the invention specified in claim 11.
Regarding claim 12 (drawn to a method): claim 12 is rejected on the same basis as claim 2; the arguments presented above for claim 2 are equally applicable to claim 12, and the limitations similar to those of claim 2 are not repeated herein but are incorporated by reference.
Regarding claim 14 (drawn to a method): claim 14 is rejected on the same basis as claim 4; the arguments presented above for claim 4 are equally applicable to claim 14, and the limitations similar to those of claim 4 are not repeated herein but are incorporated by reference.
Regarding claim 16 (drawn to a method): claim 16 is rejected on the same basis as claim 6; the arguments presented above for claim 6 are equally applicable to claim 16, and the limitations similar to those of claim 6 are not repeated herein but are incorporated by reference.
Regarding claim 17 (drawn to a method): claim 17 is rejected on the same basis as claim 7; the arguments presented above for claim 7 are equally applicable to claim 17, and the limitations similar to those of claim 7 are not repeated herein but are incorporated by reference.
Regarding claim 18 (drawn to a method): claim 18 is rejected on the same basis as claim 8; the arguments presented above for claim 8 are equally applicable to claim 18, and the limitations similar to those of claim 8 are not repeated herein but are incorporated by reference.
Regarding claim 20, Chen discloses a computing system comprising (Paragraph [0007] “FIG. 1 is a schematic diagram of a computer vision system for assisting a landing decision”):
at least one input sensor comprising one or a plurality of cameras (Paragraph [0013] “This embodiment of the system 10 includes, without limitation, a camera 14, a navigation/airport database 16, a processing system 20, an output system 22, aircraft sensors 24,”);
processing circuitry (Paragraph [0013] “processing system 20”); and
a memory storing a runway database and executable instructions that, in response to execution by the processing circuitry cause the processing circuitry to (Paragraph [0017] “the computer vision system 10 includes a navigation database 16 that stores navigation data used by various avionics systems. The navigation database 16 provides data 52 for the real world position of a target runway.”; Paragraph [0020] “System 10 further includes the processing system 20 including one or more processors that are configured to execute computer programming instructions stored on non-transitory memory”):
collect a plurality of images related to at least a portion of an environment from the one or the plurality of cameras (Paragraph [0015] “Other vision cameras are possible including stereo cameras, infrared cameras, etc. The camera 14 can include a plurality of cameras including combinations of different types of cameras such as those described above. The images are generated via single camera or real-time fusion of multiple cameras.”);
generate a runway identification (Paragraph [0022] “feature or pattern matching algorithms could be used to identify the runway. The object finding function returns a positive or negative indication concerning whether a runway has been found and returns a region of interest (ROI) outline”; Paragraph [0044] “In step 230, data 52 for the real world position of the runway is retrieved from the navigation database”) [based on the mask that matches the extracted features; and
output the runway identification].
However, Chen fails to teach execute a convolutional neural network to extract features for the plurality of images; perform an analysis to match interest points in the extracted features to interest points on a runway or runway marking in a mask in the runway database; analyze spatial distributions and expected patterns of runway features to infer likely locations of missing interest points in the extracted features.
Evans teaches execute a convolutional neural network to extract features for the plurality of images (Paragraph [0067] “The machine learning model may include a CNN, as well as a fully-connected dense network and an n-DOF regression, such as a 6DOF regression or a 2DOF regression.”);
perform an analysis to match interest points in the extracted features to interest points on a runway or runway marking in a mask in the runway database (Paragraph [0024] “applying the mask to a corner detector to detect interest points on the mask and thereby the runway or the runway marking in the image; matching the interest points on the runway or the runway marking in the image, to corresponding points on the runway or the runway marking that have known runway-framed local coordinates; and performing a perspective-n-point (PnP) estimation, using the interest points and the known runway-framed local coordinates”);
analyze spatial distributions and expected patterns of runway features to infer likely locations of missing interest points in the extracted features (Paragraph [0103] “In some implementations in which a portion of the runway 202 is clipped (not visible) in the image, the mask 410 may include up to six edges, as shown in FIG. 7 for a mask 700… the pose-estimation engine may be configured to determine all intersection points of all of the edges of the mask, and only consider those having a vanishing point near the horizon as probable sides of the runway.”).
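As a non-limiting sketch of inferring a likely location for a missing interest point from the spatial distribution and expected pattern of runway features, the following assumes (a simplifying assumption for illustration, not Evans' actual method) that the projected runway outline is approximately a parallelogram, so a fourth corner follows from the other three:

    import numpy as np

    # Three detected corners of a runway outline (pixel coordinates);
    # the fourth was clipped or occluded. All values are hypothetical.
    p1 = np.array([280.0, 420.0])   # near-left corner
    p2 = np.array([360.0, 420.0])   # near-right corner
    p3 = np.array([330.0, 200.0])   # far-right corner

    # Under an affine (parallelogram) approximation of the expected
    # rectangular pattern, infer the missing far-left corner.
    p4 = p1 + (p3 - p2)
    print(p4)   # [250. 200.]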
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Chen to include executing a convolutional neural network to extract features for the plurality of images; performing an analysis to match interest points in the extracted features to interest points on a runway or runway marking in a mask in the runway database; and analyzing spatial distributions and expected patterns of runway features to infer likely locations of missing interest points in the extracted features, as taught by Evans. The motivation for doing so would have been to process full-resolution images, determine the position of an aircraft relative to the runway, and control an accuracy parameter so that an intended number of edges are detected, as suggested by Evans (see Evans, Paragraphs [0024], [0067], and [0103]).
Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Evans with Chen to obtain the invention specified in claim 20.
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2022/0234752 A1) (hereinafter, “Chen”) in view of Evans et al. (US 2022/0198703 A1) (hereinafter, “Evans”), and further in view of Datla et al. (“A multimodal semantic segmentation for airport runway delineation in panchromatic remote sensing images,” Fourteenth International Conference on Machine Vision (ICMV 2021), Vol. 12084, SPIE, 2022) (hereinafter, “Datla”).
Regarding claim 3, which incorporates claim 2, Chen and Evans fail to teach wherein the machine perception system is a vision transformer.
Datla teaches wherein the machine perception system is a vision transformer (Page 4, last paragraph “We use ViT [16] as encoder containing 12 Transformers. The hybrid encoder is designed by combining ResNet-50 and ViT. Our network provides promising results by training for 75 epochs comprising of 210 iterations with batch size of 8 samples”).
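For background only, a hybrid encoder of the kind Datla describes (ResNet features feeding a Transformer) may be sketched in PyTorch as follows; the reduced depth and layer sizes are assumptions for brevity and do not reproduce Datla's 12-Transformer configuration (assumes a recent torchvision):

    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    class HybridViTEncoder(nn.Module):
        # Hybrid encoder: ResNet-50 feature maps tokenized for a Transformer.
        def __init__(self, d_model=768, nhead=12, depth=2):
            super().__init__()
            backbone = resnet50(weights=None)
            # Keep the convolutional stages; drop pooling and classifier head.
            self.cnn = nn.Sequential(*list(backbone.children())[:-2])
            self.proj = nn.Conv2d(2048, d_model, kernel_size=1)  # token embedding
            layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.transformer = nn.TransformerEncoder(layer, num_layers=depth)

        def forward(self, x):
            f = self.proj(self.cnn(x))             # (B, d_model, h, w)
            tokens = f.flatten(2).transpose(1, 2)  # (B, h*w, d_model)
            return self.transformer(tokens)        # self-attended tokens

    # Usage: encode a batch of 224x224 RGB images into token features.
    out = HybridViTEncoder()(torch.randn(1, 3, 224, 224))
    print(out.shape)   # torch.Size([1, 49, 768])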
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Chen in view of Evans to include wherein the machine perception system is a vision transformer, as taught by Datla. The motivation for doing so would have been to exercise the innate self-attention capability of Transformers as suggested by Datla (see Datla, Page 2, paragraph 2).
Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Datla with Chen and Evans to obtain the invention specified in claim 3.
Regarding claim 13 (drawn to a method): claim 13 is rejected on the same basis as claim 3; the arguments presented above for claim 3 are equally applicable to claim 13, and the limitations similar to those of claim 3 are not repeated herein but are incorporated by reference.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2022/0234752 A1) (hereinafter, “Chen”) in view of Evans et al. (US 2022/0198703 A1) (hereinafter, “Evans”), and further in view of Badrinarayanan et al. (“SegNet: A deep convolutional encoder-decoder architecture for image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence 39.12 (2017): 2481-2495) (hereinafter, “Badrinarayanan”).
Regarding claim 5, which incorporates claim 4, Chen and Evans fail to teach wherein a plurality of down-convolutional layers with ReLU activation and max pooling are applied to the plurality of images, followed by applying a plurality of up-convolutional layers to the plurality of images.
Badrinarayanan teaches wherein a plurality of down-convolutional layers with ReLU activation and max pooling are applied to the plurality of images (Page 2485 [left column paragraph 1] “Each encoder in the encoder network performs convolution with a filter bank to produce a set of feature maps. These are then batch normalized [50], [51]). Then an element-wise rectified-linear non-linearity (ReLU) max(0,x) is applied. Following that, max-pooling with a 2×2 window and stride 2 (non-overlapping window) is performed and the resulting output is sub-sampled by a factor of 2.”), followed by applying a plurality of up-convolutional layers to the plurality of images (Page 2485 [left column paragraph 2] “The appropriate decoder in the decoder network upsamples its input feature map(s) using the memorized max-pooling indices from the corresponding encoder feature map(s).”).
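A minimal sketch of the encoder-decoder pattern Badrinarayanan describes (convolution, batch normalization, ReLU, and max pooling with memorized indices, mirrored by index-based unpooling in the decoder) follows; the single-stage depth and channel counts are assumptions for brevity:

    import torch
    import torch.nn as nn

    class MiniSegNet(nn.Module):
        # One encoder/decoder stage of the SegNet pattern (simplified).
        def __init__(self, in_ch=3, mid_ch=64, num_classes=2):
            super().__init__()
            # Encoder: convolution, batch norm, ReLU, then 2x2 max pooling
            # with stride 2, memorizing the pooling indices.
            self.enc = nn.Sequential(
                nn.Conv2d(in_ch, mid_ch, 3, padding=1),
                nn.BatchNorm2d(mid_ch),
                nn.ReLU(inplace=True),
            )
            self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
            # Decoder: upsample using the memorized max-pooling indices,
            # then convolve to dense per-pixel class scores.
            self.unpool = nn.MaxUnpool2d(2, stride=2)
            self.dec = nn.Conv2d(mid_ch, num_classes, 3, padding=1)

        def forward(self, x):
            f = self.enc(x)
            p, idx = self.pool(f)      # downsample; keep indices
            u = self.unpool(p, idx)    # upsample to encoder resolution
            return self.dec(u)         # e.g. runway vs. not-runway scores

    out = MiniSegNet()(torch.randn(1, 3, 64, 64))
    print(out.shape)   # torch.Size([1, 2, 64, 64])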
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Chen in view of Evans to include wherein a plurality of down-convolutional layers with ReLU activation and max pooling are applied to the plurality of images, followed by applying a plurality of up-convolutional layers to the plurality of images, as taught by Badrinarayanan. The motivation for doing so would have been to produce features that are useful for accurate boundary localization as suggested by Badrinarayanan (see Badrinarayanan, Introduction [left column, paragraph 1]).
Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Badrinarayanan with Chen and Evans to obtain the invention specified in claim 5.
Regarding claim 15 (drawn to a method): claim 15 is rejected on the same basis as claim 5; the arguments presented above for claim 5 are equally applicable to claim 15, and the limitations similar to those of claim 5 are not repeated herein but are incorporated by reference.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2022/0234752 A1) (hereinafter, “Chen”) in view of Evans et al. (US 2022/0198703 A1) (hereinafter, “Evans”), and further in view of Gong et al. (“A survey of techniques for detection and tracking of airport runways,” 44th AIAA Aerospace Sciences Meeting and Exhibit, 2006) (hereinafter, “Gong”).
Regarding claim 9, which incorporates claim 1, Chen and Evans fail to teach wherein the feature extractor is executed to classify each pixel in at least a portion of the plurality of images as runway or not runway.
Gong teaches wherein the feature extractor is executed to classify each pixel in at least a portion of the plurality of images as runway or not runway (Page 8, paragraph 4 “In step 2, each pixel is compared to a threshold value, and is then assigned to either foreground or background of the image. In step 3, the upper portion of the image is removed from further consideration. This part of the image contains the sky, which was very bright in appearance. The lower portion of the image becomes the region of interest (ROI) for the remaining steps. Step 4 assigns a unique label to each set of connected foreground points within the ROI. This allows each connected set of pixels, known as a region, to be analyzed separately. In step 5, small regions are removed from further consideration, based on the expected size of a runway in the image.”).
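As a non-limiting sketch of the thresholding and region-labeling steps Gong describes, the following Python/OpenCV fragment mirrors steps 2 through 5 (the threshold value, ROI split, and minimum region size are assumed values):

    import cv2
    import numpy as np

    def classify_runway_pixels(gray, thresh=128, min_area=500):
        # Step 2: compare each pixel to a threshold value and assign it
        # to the foreground or the background of the image.
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        # Step 3: remove the upper (sky) portion; the lower portion
        # becomes the region of interest (ROI).
        roi = binary[gray.shape[0] // 2:, :]
        # Step 4: assign a unique label to each connected foreground set.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(roi)
        # Step 5: remove regions smaller than the expected runway size.
        keep = np.zeros_like(roi)
        for i in range(1, n):   # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] >= min_area:
                keep[labels == i] = 1   # candidate runway pixel
        return keep                     # 1 = runway candidate, 0 = not

    mask = classify_runway_pixels(np.random.randint(0, 256, (480, 640), np.uint8))
    print(mask.shape, int(mask.max()))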
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Chen in view of Evans to include wherein the feature extractor is executed to classify each pixel in at least a portion of the plurality of images as runway or not runway, as taught by Gong. The motivation for doing so would have been to assign each pixel to either the foreground or the background and thereby determine the region of interest, as suggested by Gong (see Gong, Page 8, paragraph 4).
Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Gong with Chen and Evans to obtain the invention specified in claim 9.
Regarding claim 19 (drawn to a method): claim 19 is rejected on the same basis as claim 9; the arguments presented above for claim 9 are equally applicable to claim 19, and the limitations similar to those of claim 9 are not repeated herein but are incorporated by reference.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Chen et al. (US 8,711,007 B2) discloses a system for displaying information about a runway once the runway has been identified.
Mueller et al. (US 7,898,463 B1) discloses a runway identification system using a radar system configured to receive a runway characteristic signal from a transponder.
Seah et al. (US 8,576,113 B1) discloses a runway identification system that receives electromagnetic energy from reflectors or sources placed at the end of the runway and displays the runway identification in the aircraft.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to UROOJ FATIMA whose telephone number is (571)272-2096. The examiner can normally be reached M-F 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/UROOJ FATIMA/Examiner, Art Unit 2676
/Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676