DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on April 30, 2024, July 10, 2025, and December 19, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 4-13 and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Park, Tae Ha, and Simone D’Amico. "Robust multi-task learning and online refinement for spacecraft pose estimation across domain gap." Advances in Space Research 73.11 (2024): 5726-5740. (hereinafter Park), and further in view of Armstrong, William, Spencer Draktontaidis, and Nicholas Lui. Semantic Image Segmentation of Imagery of Unmanned Spacecraft Using Synthetic Data. Technical Report, 2021. (hereinafter Armstrong).
Regarding independent claim 1, Park discloses A method for satellite (abstract, “These tasks are all related to detection and pose estimation of a target spacecraft from an image, such as prediction of pre-defined satellite keypoints, direct pose regression, and binary segmentation of the satellite foreground;” see also Figure 1), the method comprising:
at a satellite classification system, receiving a test image depicting a satellite, the satellite having a hardware component configuration that is unknown to the satellite classification system (Figure 1 exemplifies the entire system; the system is trained to perform segmentation on the satellite images);
inputting the test image to a satellite classification model (see Figure 1, input image in lower left), the satellite classification model trained, based at least in part on a plurality of training satellite images, to generate output image segmentation maps for input satellite images (page 5732, right column, “SPNv2 is trained and evaluated on SPEED+ (Park et al., 2022; Park et al., 2021b) which comprises data from three distinct domains: synthetic, lightbox and sunlamp. The synthetic domain consists of 59,960 computer-generated images of the Tango spacecraft from the PRISMA mission (D’Amico et al., 2013). On the other hand, lightbox and sunlamp respectively contain 6,740 and 2,791 Hardware-In-the-Loop (HIL) images of the mockup model of the same spacecraft captured in the high-fidelity robotic simulation environment of the Testbed for Rendezvous and Optical Navigation (TRON) facility at the Space Rendezvous Laboratory (SLAB) of Stanford University.”); and
outputting, from the satellite classification model:
an output image segmentation map for the test image (Figure 1, element “segmentation”), the output image segmentation map including a plurality of map pixels corresponding to a plurality of image pixels in the test image, wherein pixel values of the plurality of map pixels classify corresponding image pixels of the test image as depicting (Figure 1, element “segmentation”);
one or more position parameters for the satellite (Figure 1, element “rotation”, and “translation”); and
one or more attitude parameters for the satellite (page 5728, left column, “Spacecraft Pose Estimation. The first ML-based approach to pose estimation of a known target spacecraft is Spacecraft Pose Network (SPN)” page 5731, right column, “Then, at inference, only the outputs of hH and hE are used to predict the pose.”).
Park fails to explicitly disclose the limitations as further recited. However, Armstrong discloses A method for satellite component classification (page 2, left column, “To address this, we have generated a prototype synthetic image dataset labelled for semantic segmentation of 2D images of unmanned spacecraft, and are endeavouring to train a performant deep learning image segmentation model using the same, with the ultimate goal of enabling further research in the area of autonomous spacecraft rendezvous.”), the method comprising:
an output image segmentation map for the test image (Figure 2, “corresponding segmentation mask”), the output image segmentation map including a plurality of map pixels corresponding to a plurality of image pixels in the test image, wherein pixel values of the plurality of map pixels classify corresponding image pixels of the test image as depicting different hardware components of the satellite (Table 1; Figures 2-5; page 3, left column, “We worked with an industry expert to define eleven class labels for the segmentation task”).
Park is directed toward, “This work presents Spacecraft Pose Network v2 (SPNv2), a Convolutional Neural Network (CNN) for pose estimation of noncooperative spacecraft across domain gap. SPNv2 is a multi-scale, multi-task CNN which consists of a shared multi-scale feature encoder and multiple prediction heads that perform different tasks on a shared feature output. These tasks are all related to detection and pose estimation of a target spacecraft from an image, such as prediction of pre-defined satellite keypoints, direct pose regression, and binary segmentation of the satellite foreground (abstract).” Armstrong is directed toward, “we have generated a prototype synthetic image dataset labelled for semantic segmentation of 2D images of unmanned spacecraft, and are endeavouring to train a performant deep learning image segmentation model using the same, with the ultimate goal of enabling further research in the area of autonomous spacecraft rendezvous (page 2, left column).” As can be seen by one of ordinary skill in the art before the effective filing date of the claimed invention, Park and Armstrong are directed toward similar endeavors of image analysis of spacecraft. Further, Armstrong allows for more specific segmentation of the components within the satellite, not just the satellite as a whole. A person having ordinary skill in the art before the effective filing date of the claimed invention would recognize that users may need information on specific parts; for example, if a part is broken, it is more useful to know which part it is than to know only that something on the satellite is broken. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Armstrong in order to provide a more specific output that a reviewing user can act on more efficiently.
Regarding dependent claim 2, the rejection of claim 1 is incorporated herein. Additionally, Park further discloses wherein the satellite classification model includes a segmentation head, a position head, and an attitude head (Abstract, “multiple prediction heads”).
Park and Armstrong fail to explicitly disclose wherein during training of the satellite classification model, a multiplicative increase is applied to a segmentation error of the segmentation head prior to summing the segmentation error with a position error and an attitude error for gradient descent optimization.
However, alternative methods of error combination and gradient descent optimization are known to one of ordinary skill in the art. Further, Park does disclose methods of analyzing the error of the system outputs, as seen on page 5727, right column. Further, gradient descent optimization is known to one of ordinary skill in the art to be computationally efficient, adaptable, and to have stable convergence. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Park and Armstrong in order to adjust the error-combination method and further utilize gradient descent optimization for machine learning optimization.
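As a purely illustrative sketch of the error-combination scheme recited in the claim (not drawn from Park or Armstrong; the weight value and all function names are hypothetical), the multiplicative increase applied to the segmentation error before summation may be expressed as:

```python
# Hypothetical weighted multi-task loss: the segmentation error is
# multiplicatively increased, then summed with the position and
# attitude errors; the sum would then drive gradient descent.
def combined_loss(seg_error, pos_error, att_error, seg_weight=10.0):
    """Scale the segmentation error, then sum all three errors."""
    return seg_weight * seg_error + pos_error + att_error

def gradient_descent_step(params, grads, lr=0.01):
    """One plain gradient-descent update on a dict of parameters."""
    return {k: params[k] - lr * grads[k] for k in params}
```

The weight of 10.0 is an assumption for illustration only; the claim recites only that some multiplicative increase is applied prior to summing.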
Regarding dependent claim 4, the rejection of claim 1 is incorporated herein. Additionally, Armstrong further discloses wherein, for image pixels in the test image depicting hardware components of a same component type, corresponding map pixels in the output image segmentation map have a same pixel value (page 3, left column, “we programmatically assigned RGB color values in the generated images to their corresponding class labels, resulting in single-channel images with pixel values equal to the cardinal number associated with each category [15];” see also Figures 2-4).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Armstrong in order to ensure a reader would clearly understand that areas of the same type of entity are in fact the same; for example, coloring all thrusters red allows a user to easily understand that everything red is the same type of entity.
Regarding dependent claim 5, the rejection of claim 1 is incorporated herein. Additionally, Armstrong further discloses wherein, for image pixels in the test image depicting different instances of a same component type, corresponding map pixels in the output image segmentation map representing the different instances have different pixel values (Table 1; the main thrusters and rotational thrusters are both read as the same component type, thrusters, but are mapped differently as different instances of thrusters).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Armstrong in order to ensure a reader would clearly understand exactly what they are looking at. Said differently, a user may have an interest in a right thruster versus a left thruster, and representing these differently would allow the user to accurately understand the segmentation.
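The distinction between claims 4 and 5 — same pixel value for a shared component type versus distinct values for separate instances — can be sketched as follows. This is a hypothetical illustration; the class numbers and component names are assumptions, not taken from either reference.

```python
import numpy as np

# Hypothetical 3x3 segmentation maps. In a semantic map (claim 4),
# every pixel of the same component type shares one value; in an
# instance-aware map (claim 5), separate instances of that type
# receive distinct values.
SEMANTIC_IDS = {"background": 0, "thruster": 1, "antenna": 2}

semantic_map = np.array([[1, 1, 0],
                         [0, 2, 0],
                         [1, 1, 0]])  # both thrusters share value 1

instance_map = np.array([[1, 1, 0],
                         [0, 3, 0],
                         [2, 2, 0]])  # the two thrusters get 1 and 2

# In the semantic map, all thruster pixels share a single value:
thruster_pixels = semantic_map == SEMANTIC_IDS["thruster"]
```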
Regarding dependent claim 6, the rejection of claim 1 is incorporated herein. Additionally, Park further discloses further comprising inputting an instantaneous field of view (IFOV) to the satellite classification model (all cameras are read as having an IFOV as this represents the scene that the pixel covers; page 5726, left column, “The pose information can be extracted from a sequence of images captured by the monocular camera”), and wherein the one or more position parameters are generated based at least in part on the IFOV (page 5726, left column, “The pose information can be extracted from a sequence of images captured by the monocular camera;” the position parameters are read as the pose).
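As a general illustration of how an IFOV could inform a position parameter (a small-angle geometric sketch not taken from Park; the function name and values are hypothetical), the per-pixel IFOV relates a pixel offset and a range to a transverse offset:

```python
# Hypothetical small-angle relation: a target's transverse offset is
# roughly the pixel offset from the image center, times the per-pixel
# IFOV in radians, times the range to the target.
def transverse_offset_m(pixel_offset, ifov_rad, range_m):
    """Transverse offset in meters under the small-angle approximation."""
    return pixel_offset * ifov_rad * range_m

# e.g. 100 pixels off-center at 50 microradians/pixel and 10 km range:
offset = transverse_offset_m(100, 50e-6, 10_000)  # approximately 50 m
```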
Regarding dependent claim 7, the rejection of claim 1 is incorporated herein. Additionally, Park further discloses wherein training the satellite classification model based at least in part on the plurality of training satellite images includes applying one or more image perturbation operations to a training satellite image of the plurality of training satellite images (See Table 1, “List of data augmentations and equivalent commands in Albumentations;” page 5730, right column, “SPNv2 is trained with extensive data augmentation (Shorten and Khoshgoftaar, 2019) to mitigate the overfitting to the synthetic images. The augmentations are implemented with the Albumentations library (Buslaev et al., 2020), and the list of employed augmentations and their equivalent Albumentations commands are provided in Table 1.”).
Regarding dependent claim 8, the rejection of claim 7 is incorporated herein. Additionally, Park further discloses wherein the one or more image perturbation operations are selected from rescaling a training satellite depicted in the training satellite image, translating a position of the training satellite depicted in the training satellite image, rotating the training satellite, adding one or more simulated glints to the training satellite, adding quantized noise to the training image (Table 1, “Noise”), and modifying pixel values of one or more pixels of the training image via one or more mathematical transformation functions (Examiner Note: only “One or more” required).
Regarding dependent claim 9, the rejection of claim 1 is incorporated herein. Additionally, Park further discloses wherein training the satellite classification model based at least in part on the plurality of training satellite images includes adding one or more pixels of the plurality of training satellite images to an exclusion set of pixels that are ignored during training (page 5730, right column, “Moreover, random erase (Zhong et al., 2020) and sun flare augmentations, which simulate the occlusion of satellite parts due to extreme shadowing or the sun lamp’s direct sunlight, are implemented after modification to localize the effect within the bounding box of the target satellite instead of the entire image frame;” see also Table 1).
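A minimal sketch of an exclusion set of pixels ignored during training follows; this is an illustrative assumption, not Park's implementation, and the ignore value of 255 and all names are hypothetical:

```python
import numpy as np

# Pixels flagged with an "ignore" label contribute nothing to the
# per-pixel loss: they are excluded before averaging.
IGNORE = 255

def masked_mean_loss(per_pixel_loss, labels, ignore=IGNORE):
    """Average the loss only over pixels not in the exclusion set."""
    keep = labels != ignore
    return per_pixel_loss[keep].mean()

losses = np.array([1.0, 2.0, 3.0, 4.0])
labels = np.array([0, 1, IGNORE, 2])
mean_loss = masked_mean_loss(losses, labels)  # (1 + 2 + 4) / 3
```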
Regarding dependent claim 10, the rejection of claim 1 is incorporated herein. Additionally, Armstrong further discloses further comprising outputting, from the satellite classification model, predicted material properties for one or more hardware component surfaces of the satellite depicted by the test image (page 2 right column – page 3, left column, “Generation of the semantic masks was then accomplished by removing all light sources from the Blender scene and modifying the shaders, material properties of each component, and rendering settings such that the ray tracer projected only the appropriate color value onto each pixel of the rendered image based on the camera’s perspective.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Armstrong in order to ensure the most realistic output is made for a user to review.
Regarding dependent claim 11, the rejection of claim 1 is incorporated herein. Additionally, Park further discloses wherein the satellite classification model is a deep neural network (DNN) (page 5728, right column, “This section describes the offline robust training of SPNv2 on the synthetic dataset, which is achieved by combination of a multi-scale, multi-task CNN architecture design, extensive data augmentation and domain randomization.”… “The main SPNv2 architecture visualized in Fig. 1 closely follows EfficientPose (Bukschat and Vetter, 2020) based on the EfficientDet (Tan et al., 2020) feature encoder, which comprises the EfficientNet (Tan and Le, 2019) backbone and Bi-directional Feature Pyramid Network (BiFPN) to fuse features from different scales.”).
Regarding independent claim 12, the rejection of claim 1 is incorporated herein. Additionally, Park discloses A satellite classification system (abstract, “These tasks are all related to detection and pose estimation of a target spacecraft from an image, such as prediction of pre-defined satellite keypoints, direct pose regression, and binary segmentation of the satellite foreground;” see also Figure 1), comprising:
a logic subsystem (abstract, “This work presents Spacecraft Pose Network v2 (SPNv2), a Convolutional Neural Network (CNN) for pose estimation of noncooperative spacecraft across domain gap. SPNv2 is a multi-scale, multi-task CNN which consists of a shared multi-scale feature encoder and multiple prediction heads that perform different tasks on a shared feature output;” CNNs must be executed on a computer with their programmed logic); and
a storage subsystem holding instructions executable by the logic subsystem to (abstract, “This work presents Spacecraft Pose Network v2 (SPNv2), a Convolutional Neural Network (CNN) for pose estimation of noncooperative spacecraft across domain gap. SPNv2 is a multi-scale, multi-task CNN which consists of a shared multi-scale feature encoder and multiple prediction heads that perform different tasks on a shared feature output;” CNNs must be executed on a computer with their programmed logic and for the programmed logic to be called to use, it must be stored):
receive a test image depicting a satellite, the satellite having a hardware component configuration that is unknown to the satellite classification system (Figure 1 exemplifies the entire system; the system is trained to perform segmentation on the satellite images);
input the test image to a satellite classification model (see Figure 1, input image in lower left), the satellite classification model trained, based at least in part on a plurality of training satellite images, to generate output image segmentation maps for input satellite images (page 5732, right column, “SPNv2 is trained and evaluated on SPEED+ (Park et al., 2022; Park et al., 2021b) which comprises data from three distinct domains: synthetic, lightbox and sunlamp. The synthetic domain consists of 59,960 computer-generated images of the Tango spacecraft from the PRISMA mission (D’Amico et al., 2013). On the other hand, lightbox and sunlamp respectively contain 6,740 and 2,791 Hardware-In-the-Loop (HIL) images of the mockup model of the same spacecraft captured in the high-fidelity robotic simulation environment of the Testbed for Rendezvous and Optical Navigation (TRON) facility at the Space Rendezvous Laboratory (SLAB) of Stanford University.”); and
output, from the satellite classification model:
an output image segmentation map for the test image (Figure 1, element “segmentation”), the output image segmentation map including a plurality of map pixels corresponding to a plurality of image pixels in the test image, and wherein pixel values of the plurality of map pixels classify corresponding image pixels of the test image as depicting (Figure 1, element “segmentation”);
one or more position parameters for the satellite (Figure 1, element “rotation”, and “translation”); and
one or more attitude parameters for the satellite (page 5728, left column, “Spacecraft Pose Estimation. The first ML-based approach to pose estimation of a known target spacecraft is Spacecraft Pose Network (SPN)” page 5731, right column, “Then, at inference, only the outputs of hH and hE are used to predict the pose.”).
Park fails to explicitly disclose the limitations as further recited. However, Armstrong discloses an output image segmentation map for the test image (page 2, left column, “To address this, we have generated a prototype synthetic image dataset labelled for semantic segmentation of 2D images of unmanned spacecraft, and are endeavouring to train a performant deep learning image segmentation model using the same, with the ultimate goal of enabling further research in the area of autonomous spacecraft rendezvous.” Figure 2, “corresponding segmentation mask”), the output image segmentation map including a plurality of map pixels corresponding to a plurality of image pixels in the test image, and wherein pixel values of the plurality of map pixels classify corresponding image pixels of the test image as depicting different hardware components of the satellite (Table 1; Figures 2-5; page 3, left column, “We worked with an industry expert to define eleven class labels for the segmentation task”).
Park is directed toward, “This work presents Spacecraft Pose Network v2 (SPNv2), a Convolutional Neural Network (CNN) for pose estimation of noncooperative spacecraft across domain gap. SPNv2 is a multi-scale, multi-task CNN which consists of a shared multi-scale feature encoder and multiple prediction heads that perform different tasks on a shared feature output. These tasks are all related to detection and pose estimation of a target spacecraft from an image, such as prediction of pre-defined satellite keypoints, direct pose regression, and binary segmentation of the satellite foreground (abstract).” Armstrong is directed toward, “we have generated a prototype synthetic image dataset labelled for semantic segmentation of 2D images of unmanned spacecraft, and are endeavouring to train a performant deep learning image segmentation model using the same, with the ultimate goal of enabling further research in the area of autonomous spacecraft rendezvous (page 2, left column).” As can be seen by one of ordinary skill in the art before the effective filing date of the claimed invention, Park and Armstrong are directed toward similar endeavors of image analysis of spacecraft. Further, Armstrong allows for more specific segmentation of the components within the satellite, not just the satellite as a whole. A person having ordinary skill in the art before the effective filing date of the claimed invention would recognize that users may need information on specific parts; for example, if a part is broken, it is more useful to know which part it is than to know only that something on the satellite is broken. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Armstrong in order to provide a more specific output that a reviewing user can act on more efficiently.
Regarding dependent claim 13, the rejection of claim 12 is incorporated herein. Additionally, Park further discloses wherein the satellite classification model includes a segmentation head, a position head, and an attitude head (Abstract, “multiple prediction heads”).
Park and Armstrong fail to explicitly disclose wherein during training of the satellite classification model, a multiplicative increase is applied to a segmentation error of the segmentation head prior to summing the segmentation error with a position error and an attitude error for gradient descent optimization.
However, alternative methods of error combination and gradient descent optimization are known to one of ordinary skill in the art. Further, Park does disclose methods of analyzing the error of the system outputs, as seen on page 5727, right column. Further, gradient descent optimization is known to one of ordinary skill in the art to be computationally efficient, adaptable, and to have stable convergence. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Park and Armstrong in order to adjust the error-combination method and further utilize gradient descent optimization for machine learning optimization.
Regarding dependent claim 15, the rejection of claim 12 is incorporated herein. Additionally, Armstrong further discloses wherein, for image pixels in the test image depicting hardware components of a same component type, corresponding map pixels in the output image segmentation map have a same pixel value (page 3, left column, “we programmatically assigned RGB color values in the generated images to their corresponding class labels, resulting in single-channel images with pixel values equal to the cardinal number associated with each category [15];” see also Figures 2-4).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Armstrong in order to ensure a reader would clearly understand that areas of the same type of entity are in fact the same; for example, coloring all thrusters red allows a user to easily understand that everything red is the same type of entity.
Regarding dependent claim 16, the rejection of claim 12 is incorporated herein. Additionally, Armstrong further discloses wherein, for image pixels in the test image depicting different instances of a same component type, corresponding map pixels in the output image segmentation map representing the different instances have different pixel values (Table 1; the main thrusters and rotational thrusters are both read as the same component type, thrusters, but are mapped differently as different instances of thrusters).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Armstrong in order to ensure a reader would clearly understand exactly what they are looking at. Said differently, a user may have an interest in a right thruster versus a left thruster, and representing these differently would allow the user to accurately understand the segmentation.
Regarding dependent claim 17, the rejection of claim 12 is incorporated herein. Additionally, Park further discloses wherein training the satellite classification model based at least in part on the plurality of training satellite images includes applying one or more image perturbation operations to a training satellite image of the plurality of training satellite images (See Table 1, “List of data augmentations and equivalent commands in Albumentations;” page 5730, right column, “SPNv2 is trained with extensive data augmentation (Shorten and Khoshgoftaar, 2019) to mitigate the overfitting to the synthetic images. The augmentations are implemented with the Albumentations library (Buslaev et al., 2020), and the list of employed augmentations and their equivalent Albumentations commands are provided in Table 1.”), and wherein the one or more image perturbation operations are selected from rescaling a training satellite depicted in the training satellite image, translating a position of the training satellite depicted in the training satellite image, rotating the training satellite, adding one or more simulated glints to the training satellite, adding quantized noise to the training image (Table 1, “Noise”), and modifying pixel values of one or more pixels of the training image (Examiner Note: only “one or more” required).
Regarding dependent claim 18, the rejection of claim 12 is incorporated herein. Additionally, Park further discloses wherein training the satellite classification model based at least in part on the plurality of training satellite images includes adding one or more pixels of the plurality of training satellite images to an exclusion set of pixels that are ignored during training (page 5730, right column, “Moreover, random erase (Zhong et al., 2020) and sun flare augmentations, which simulate the occlusion of satellite parts due to extreme shadowing or the sun lamp’s direct sunlight, are implemented after modification to localize the effect within the bounding box of the target satellite instead of the entire image frame;” see also Table 1).
Regarding dependent claim 19, the rejection of claim 12 is incorporated herein. Additionally, Armstrong further discloses further comprising outputting, from the satellite classification model, predicted material properties for one or more hardware component surfaces of the satellite depicted by the test image (page 2 right column – page 3, left column, “Generation of the semantic masks was then accomplished by removing all light sources from the Blender scene and modifying the shaders, material properties of each component, and rendering settings such that the ray tracer projected only the appropriate color value onto each pixel of the rendered image based on the camera’s perspective.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Armstrong in order to ensure the most realistic output is made for a user to review.
Claim(s) 3, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Park further in view of Armstrong, and further in view of U.S. Publication No. 2014/0313345 to Conard et al. (hereinafter Conard).
Regarding dependent claim 3, the rejection of claim 1 is incorporated herein. Additionally, Park and Armstrong in the combination as a whole fail to explicitly disclose wherein the satellite classification model is further trained to output a classification of the satellite depicted in the test image, thereby classifying the satellite as one of a plurality of recognized satellite types.
However, Conard discloses wherein the satellite classification model is further trained to output a classification of the satellite depicted in the test image, thereby classifying the satellite as one of a plurality of recognized satellite types (abstract, “The visual inspection subsystem is configured to visually inspect an object of interest selected from the one or more detected flying objects.” … “the processor is configured to receive one or more images from the visual inspection subsystem, and identify a characteristic of the object of interest from the one or more images;” paragraph 0030, “The identification processor 60 may include one or more image detection algorithms 62 that may visually identify one or more characteristics of the object 12 within the image. These characteristics may be used to classify the object according to at least one of a family, a genus, a species, a make, and a model.”).
As noted above, Park and Armstrong are directed toward similar endeavors of image analysis of spacecraft. Further, Conard is directed toward “A system for visually identifying a flying object (abstract).” As can be seen by one of ordinary skill in the art, Park, Armstrong and Conard are all directed toward similar endeavors of image processing of flying objects. Further, Conard allows for classification of an object make or model as a whole (paragraph 0030). A person having ordinary skill in the art before the effective filing date would recognize that users are often interested in what type of entity is flying, in order to prepare for accurate repairs. Said differently, if the detected satellite is model A, it may need different parts than if it were model B. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Conard in order to ensure a user is aware of as much detailed information as possible when analyzing the flying object.
Regarding dependent claim 14, the rejection of claim 12 is incorporated herein. Additionally, Park and Armstrong in the combination as a whole fail to explicitly disclose wherein the satellite classification model is further trained to output a classification of the satellite depicted in the test image, thereby classifying the satellite as one of a plurality of recognized satellite types.
However, Conard discloses wherein the satellite classification model is further trained to output a classification of the satellite depicted in the test image, thereby classifying the satellite as one of a plurality of recognized satellite types (abstract, “The visual inspection subsystem is configured to visually inspect an object of interest selected from the one or more detected flying objects.” … “the processor is configured to receive one or more images from the visual inspection subsystem, and identify a characteristic of the object of interest from the one or more images;” paragraph 0030, “The identification processor 60 may include one or more image detection algorithms 62 that may visually identify one or more characteristics of the object 12 within the image. These characteristics may be used to classify the object according to at least one of a family, a genus, a species, a make, and a model.”).
As noted above, Park and Armstrong are directed toward similar endeavors of image analysis of spacecraft. Further, Conard is directed toward “A system for visually identifying a flying object (abstract).” As can be seen by one of ordinary skill in the art, Park, Armstrong and Conard are all directed toward similar endeavors of image processing of flying objects. Further, Conard allows for classification of an object make or model as a whole (paragraph 0030). A person having ordinary skill in the art before the effective filing date would recognize that users are often interested in what type of entity is flying, in order to prepare for accurate repairs. Said differently, if the detected satellite is model A, it may need different parts than if it were model B. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Conard in order to ensure a user is aware of as much detailed information as possible when analyzing the flying object.
Regarding independent claim 20, the rejections of claims 1 and 3 apply directly. Additionally, Park further discloses A method for satellite (abstract, “These tasks are all related to detection and pose estimation of a target spacecraft from an image, such as prediction of pre-defined satellite keypoints, direct pose regression, and binary segmentation of the satellite foreground;” see also Figure 1), the method comprising:
at a satellite classification system, receiving a test image depicting a satellite, the satellite having a hardware component configuration that is unknown to the satellite classification system (Figure 1 exemplifies the entire system; the system is trained to perform segmentation on the satellite images);
inputting the test image to a satellite classification model (see Figure 1, input image in lower left), the satellite classification model trained, based at least in part on a plurality of training satellite images, to generate output image segmentation maps for input satellite images (page 5732, right column, “SPNv2 is trained and evaluated on SPEED+ (Park et al., 2022; Park et al., 2021b) which comprises data from three distinct domains: synthetic, lightbox and sunlamp. The synthetic domain consists of 59,960 computer-generated images of the Tango spacecraft from the PRISMA mission (D’Amico et al., 2013). On the other hand, lightbox and sunlamp respectively contain 6,740 and 2,791 Hardware-In-the-Loop (HIL) images of the mockup model of the same spacecraft captured in the high-fidelity robotic simulation environment of the Testbed for Rendezvous and Optical Navigation (TRON) facility at the Space Rendezvous Laboratory (SLAB) of Stanford University.”);
outputting, from the satellite classification model, an output image segmentation map for the test image (Figure 1, element “segmentation”), one or more position parameters for the satellite (Figure 1, elements “rotation” and “translation”), and one or more attitude parameters for the satellite (page 5728, left column, “Spacecraft Pose Estimation. The first ML-based approach to pose estimation of a known target spacecraft is Spacecraft Pose Network (SPN)” page 5731, right column, “Then, at inference, only the outputs of h_H and h_E are used to predict the pose.”), the output image segmentation map including a plurality of map pixels corresponding to a plurality of image pixels in the test image, and wherein pixel values of the plurality of map pixels classify corresponding image pixels of the test image as depicting (Figure 1, element “segmentation”).
Park fails to explicitly disclose as further recited. However, Armstrong discloses A method for satellite component classification (page 2, left column, “To address this, we have generated a prototype synthetic image dataset labelled for semantic segmentation of 2D images of unmanned spacecraft, and are endeavouring to train a performant deep learning image segmentation model using the same, with the ultimate goal of enabling further research in the area of autonomous spacecraft rendezvous.”), the method comprising:
the output image segmentation map including a plurality of map pixels corresponding to a plurality of image pixels in the test image (Figure 2, “corresponding segmentation mask”), and wherein pixel values of the plurality of map pixels classify corresponding image pixels of the test image as depicting different hardware components of the satellite (Table 1; Figures 2-5; page 3, left column, “We worked with an industry expert to define eleven class labels for the segmentation task”).
Park is directed toward, “This work presents Spacecraft Pose Network v2 (SPNv2), a Convolutional Neural Network (CNN) for pose estimation of noncooperative spacecraft across domain gap. SPNv2 is a multi-scale, multi-task CNN which consists of a shared multi-scale feature encoder and multiple prediction heads that perform different tasks on a shared feature output. These tasks are all related to detection and pose estimation of a target spacecraft from an image, such as prediction of pre-defined satellite keypoints, direct pose regression, and binary segmentation of the satellite foreground (abstract).” Armstrong is directed toward, “we have generated a prototype synthetic image dataset labelled for semantic segmentation of 2D images of unmanned spacecraft, and are endeavouring to train a performant deep learning image segmentation model using the same, with the ultimate goal of enabling further research in the area of autonomous spacecraft rendezvous (page 2, left column).” As would be readily apparent to one of ordinary skill in the art before the effective filing date of the claimed invention, Park and Armstrong are directed toward the similar field of endeavor of image analysis of spacecraft. Further, Armstrong allows for more specific segmentation as to the components within the satellite, and not just the satellite as a whole. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention that users may need information on specific parts; for example, if a part is broken, it is more useful to know which part it is than merely to know that something on the satellite is broken. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Armstrong in order to provide a more specific output for a reviewing user to more efficiently act on.
The combination of Park and Armstrong as a whole fails to explicitly disclose as further recited. However, Conard discloses outputting a classification of the satellite depicted in the test image, thereby classifying the satellite as one of a plurality of recognized satellite types (abstract, “The visual inspection subsystem is configured to visually inspect an object of interest selected from the one or more detected flying objects.” … “the processor is configured to receive one or more images from the visual inspection subsystem, and identify a characteristic of the object of interest from the one or more images;” paragraph 0030, “The identification processor 60 may include one or more image detection algorithms 62 that may visually identify one or more characteristics of the object 12 within the image. These characteristics may be used to classify the object according to at least one of a family, a genus, a species, a make, and a model.”).
As noted above, Park and Armstrong are directed toward the similar field of endeavor of image analysis of spacecraft. Further, Conard is directed toward “A system for visually identifying a flying object (abstract).” As would be readily apparent to one of ordinary skill in the art, Park, Armstrong and Conard are all directed toward the similar field of endeavor of image processing of flying objects. Further, Conard allows for classification of an object's make or model as a whole (paragraph 0030). It would have been obvious to one of ordinary skill in the art before the effective filing date that users are often interested in what type of entity is flying, in order to prepare for accurate repair situations. Said differently, if the detected satellite is model A, it may need different parts than if it were model B. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Conard in order to ensure a user is aware of as much detailed information as possible when analyzing the flying object.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
U.S. Patent No. 12,340,576 to Desai et al. discloses, “A system and method are disclosed for determining a classification and sub-classification of an aircraft. The system receives an aerial image of a geographic area that includes one or more aircrafts. The system inputs the aerial image into a machine learning model. The system receives an output from the machine learning model for each aircraft of the one or more aircrafts. Based on the output for each aircraft, the system determines a set of geometric measurements. The system compares the set of geometric measurements to a plurality of known sets of geometric measurements. Based on the comparison, the system identifies a known set of geometric measurements from the plurality of known sets of geometric measurements. The known set is mapped by a database to a sub-classification. The system outputs the sub-classification. (abstract)”
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Courtney J. Nelson whose telephone number is (571)272-3956. The examiner can normally be reached Monday - Friday 8:00 - 4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/COURTNEY JOAN NELSON/Primary Examiner, Art Unit 2661