DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Remarks
Claims 1-7 and 9-30 are pending. Claim 8 has been canceled.
Response to Arguments
Applicant’s arguments, see page 8 of 10, filed 26 November 2025, with respect to the 35 USC 101 rejection of claims 26-30 have been fully considered and are persuasive. The rejection under 35 USC 101 of claims 26-30 has been withdrawn.
Applicant’s arguments, see pages 8 through 10, filed 26 November 2025, with respect to the 35 USC 103 rejection of claims 1-2, 5-12, 15-21, and 24-30 have been fully considered and are persuasive. The rejection under 35 USC 103 of claims 1-7 and 9-30 has been withdrawn. (Examiner’s note: the 35 USC 103 rejection of claims 3-4, 13-14, and 22-23 has also been withdrawn in view of the arguments directed to the claims from which they depend.) However, upon further consideration, a new ground(s) of rejection is made over Zheng in view of Lee, as shown below, as necessitated by amendment.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1, 5-7, 9-12, 15-20, and 24-30 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al., US 11967103 A1 (hereinafter Zheng) in view of Lee, U.S. Patent Application Publication US 20210326685 (hereinafter Lee).
Regarding claim 1, Zheng teaches a computing system comprising:
at least one processor that executes program instructions (figure 2, col. 16, lines 28-61, implemented in computer hardware and data processing apparatus); and
memory for storing the program instructions, wherein the program instructions comprise a first artificial neural network (ANN) (figure 2, col. 16, lines 28-61, computer storage medium);
wherein the first ANN (figure 2, training engine 240; while Zheng does not use the term ANN or NN for the training engine, the training engine of Zheng performs the same function as the first ANN of the instant invention as described in paragraph 27 of the instant specification; specifically, the training engine of Zheng computes a loss between the training data point classification network output and a target output) is configured to receive concatenated input data that is generated by combining unlabeled input data (figure 2, unlabeled point cloud data 208) and prediction data (figure 2, training data point classification network output 266) generated by a second ANN (training pose estimation neural network 258) that has been pre-trained with labeled data (labeled image data 204 via training engine 240) and that generated the prediction by processing the unlabeled input data (col. 9, lines 51-65, “[f]or each pair of labeled image 214 and unlabeled point cloud 218, the training engine 240 also computes a value of a second loss function that evaluates a measure of difference between the training data point classification network output 266 and a target data point classification output, i.e., the second pseudo label,” which is what the instant invention is doing as described in paragraph 27 of the instant specification), wherein the first ANN maps the concatenated input data to a latent representation of the concatenated input data (col. 7, line 60 through col. 8, line 9 and fig. 2, the output of the training engine 240 is fed to NN 252, which processes the fused representation into intermediate representations; the first ANN therefore maps the concatenated input to a fused representation, which is a latent representation of the input to the training engine 240), wherein the first ANN maps the latent representation of the concatenated input data to a reconstruction of the concatenated input data (col. 9, lines 39-65, the data to sub neural network A 252 is a reconstruction of the concatenated input data), and
wherein the computing system adapts learning features of the second ANN in an artificial intelligence model based on an output of the first ANN (col. 8, lines 48-52, current parameter values are updated with output from training).
While Zheng teaches the functionality as recited in the claims, Zheng does not explicitly recite a first ANN. The “training engine” of Zheng does perform all of the functionality that the first ANN performs as recited in the claim. However, even if Zheng’s training engine is not considered an ANN, Lee teaches a training engine that is an ANN (figure 1, ANN training engine; paragraph 26, training engine implemented with an ANN).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zheng by making the training engine an ANN training engine as taught by Lee. One of ordinary skill would have been motivated to do so because it would improve the system of Zheng by providing stable execution accuracy, which would be desirable in Zheng. ANNs are also known for providing significant advantages such as an ability to learn from large datasets, model complex relationships, and perform well in noisy environments.
Regarding claim 5, Zheng together with Lee taught the computing system according to claim 1, as described above. Zheng as combined with Lee further teaches wherein the computing system adapts the learning features of the second ANN based on the latent representation of the concatenated input data (Zheng: figure 2, updated parameter values 238 are made based on Fused Representation output and intermediate output using feedback via training data point classification network output 266 and training engine 240).
Regarding claim 6, Zheng together with Lee taught the computing system according to claim 1, as described above. Zheng as combined with Lee further teaches wherein the computing system adapts the learning features of the second ANN based on the reconstruction of the concatenated input data (Zheng: figure 2, updated parameter values 238 are made based on Fused Representation output and intermediate output using feedback via training data point classification network output 266 and training engine 240).
Regarding claim 7, Zheng together with Lee taught the computing system according to claim 1, as described above. Zheng as combined with Lee further teaches wherein the computing system is configured to run a plurality of artificial neural networks that process the input data and an output of the second ANN in parallel to generate a score (Zheng: col. 18, lines 1-7 and col. 18, lines 19-23, Zheng contemplates that the system can be modified to include parallel processing).
Regarding claim 9, Zheng together with Lee taught the computing system according to claim 1, as described above. Zheng as combined with Lee further teaches wherein the computing system uses the output of the first ANN (Lee, training engine using an ANN) to adapt the learning features in a third ANN in the artificial intelligence model (Zheng: figure 2, data point classification neural network is a third ANN that uses the output of the first ANN via NN 258; figure 2, alternatively sub neural network B can be a third ANN that uses the output of the first ANN).
Regarding claim 10, Zheng together with Lee taught the computing system according to claim 1, as described above. Zheng together with Lee further teaches wherein the computing system adapts the learning features of the second ANN based on a comparison between the reconstruction of the concatenated input data generated by the first ANN and an output of the artificial intelligence model (Zheng: figure 2, perception subsystem 120 uses updated parameters 238 derived from the reconstruction via training engine 240; Lee: training engine as an ANN). In short, the feedback provides adaptation of the learning features based on the comparison between the reconstruction of the input data and the output.
Regarding claim 11, Zheng together with Lee teaches the computing system according to claim 1 as described above. Zheng further teaches wherein the computing system performs data augmentation on the unlabeled input data by changing features of the unlabeled input data to generate additional data for the first ANN (Lee: training engine using an ANN) to process to generate the latent representation (Zheng: figure 2, col. 7, line 60 through col. 8, line 9, and col. 9, lines 20-38, a first pseudo label and a second pseudo label are generated for the unlabeled data to be sent to the sub neural network as a fused representation).
Regarding claim 12, Zheng together with Lee teaches the computing system according to claim 1 as described above. Zheng as combined with Lee further teaches wherein the computing system performs data augmentation on the latent representation to generate additional input data that is provided to the first ANN (Lee: training engine using an ANN), and wherein the first ANN generates a revised latent representation based on the additional input data (Zheng: figure 2, col. 7, line 60 through col. 8, line 9, fused data is augmented via sub neural network A, sub neural network B, as well as the data point classification neural network, and parameters are updated via neural network parameters 230).
Regarding claim 15, Zheng together with Lee teaches the computing system according to claim 1 as described above. Zheng as combined with Lee further teaches wherein the computing system is configured to run a third ANN that maps third input data to an additional latent representation, and wherein the artificial intelligence model processes an output of the third ANN to generate the concatenated input data for the first ANN (Zheng: figure 2, sub neural network B 254).
Regarding claim 16, Zheng together with Lee teaches the computing system according to claim 1 as described above. Zheng as combined with Lee further teaches wherein the concatenated input data comprises images, and wherein the computing system uses the output of the first ANN (Lee: training engine using an ANN) to adapt the learning features of the second ANN to identify classes in the images (Zheng: col. 1, line 52 through col. 2, line 5, obtaining an image of an environment; col. 9, lines 50-65, labeled image and unlabeled point cloud data; figure 2, training engine receives input from data and from a second ANN).
Regarding claim 17, Zheng together with Lee teaches the computing system according to claim 16 as described above. Zheng as combined with Lee further teaches wherein the prediction is generated by processing the images (Zheng: Fig. 2 the input data is initially image data).
Regarding claim 18, Zheng together with Lee teaches the computing system according to claim 17 as described above. Zheng as combined with Lee further teaches wherein the output of the first ANN indicates a predicted error in the prediction (Zheng: col. 9, lines 39-50, the output of training engine 240 may be a mean squared error).
Regarding claim 19, Zheng together with Lee taught the computing system according to claim 17, as described above. Zheng as combined with Lee further teaches wherein the first ANN (training engine as an ANN) comprises an autoencoder (col. 11, line 46 through col. 12, line 7, the fused representation of image and point cloud data is generated by projecting the dimension of the point cloud data and then concatenating with the three-dimensional location; this is functionally what an autoencoder does: dimensional reduction of data followed by creation of data based on the reduced data).
Regarding claims 20 and 26, they recite limitations that are the method and non-transitory computer readable medium versions of claim 1 and are therefore rejected using the same rationale as the rejection of claim 1. Further, regarding claims 20 and 26, Zheng and Lee together teach adapting learning features of the first artificial neural network in an artificial intelligence model based on an output of the second neural network (Zheng: figure 2, the output of sub neural network B 254 and the output of data point classification neural network 256 change the output of training engine 240; thus the learning features of the first ANN are adjusted based on an output of the second NN).
Regarding claim 24, it recites limitations that are the method and non-transitory computer readable medium versions of claim 5 and is therefore rejected using the same rationale used for the rejection of claim 5. Specifically, Zheng teaches in figure 2 that feedback from a second ANN is used to adapt learning features of a first ANN.
Regarding claim 25, it recites limitations that are the method and non-transitory computer readable medium versions of claim 11 and is therefore rejected using the same rationale used for the rejection of claim 11. Further, Zheng shows in figure 2 that the feedback provides adaptation of the learning features for the first and second NNs.
Regarding claim 27, Zheng together with Lee teaches the method according to claim 26 above. Specifically, Zheng shows in figure 2 feedback from a second NN that uses the concatenated input and thus constitutes reconstruction input. Also, Lee teaches in figure 1 that a second NN changes the weights applied to a first NN. This provides the advantage of adapting the system to make accurate predictions as the data changes.
Regarding claim 28, it recites limitations that are the method and non-transitory computer readable medium versions of claim 11 and is therefore rejected using the same rationale used for the rejection of claim 11. Further, Zheng teaches providing feedback to augment data for both the first ANN and the second ANN.
Regarding claim 29, it recites limitations that are the method and non-transitory computer readable medium versions of claim 15 and is therefore rejected using the same rationale used for the rejection of claim 15.
Regarding claim 30, it recites limitations that are the method and non-transitory computer readable medium versions of claim 19 and is therefore rejected using the same rationale used for the rejection of claim 19.
Claim(s) 2 and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al., US 11967103 A1 (hereinafter Zheng) in view of Lee, U.S. Patent Application Publication US 20210326685 (hereinafter Lee) and in further view of Hu, U.S. Patent 10121104 (hereinafter Hu).
Regarding claim 2, Zheng together with Lee teaches the computing system according to claim 1, as described above. Zheng further teaches wherein the computing system adapts the learning features by adjusting parameters associated with nodes of the second ANN in the artificial intelligence model based on the output of the first ANN.
However, Zheng does not explicitly recite which parameters are adjusted. It is contemplated that the most common parameters in Zheng to be adjusted would be the weights associated with nodes, because the system uses neural networks; however, Zheng remains silent on this point.
Hu teaches wherein a computing system adapts the learning features by adjusting weights associated with nodes of the second ANN in the artificial intelligence model based on the output of the first ANN (figure 1A, feedback from element 108 to 104a; col. 4, lines 16-22, using feedback to update one or more of its configurations (one or more weights of ML model 104a); figure 4, feedback from element 108 to ML Model A; feedback from 402 to ML Model B; col. 12, lines 1-15, update one or more of its configurations).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include adjusting weights associated with nodes because weights are a common parameter that is easily adjusted, allowing accurate results as needed (Hu: col. 3, lines 1-11).
Regarding claim 21, it recites limitations that are the method and non-transitory computer readable medium versions of substantially claim 2 and is therefore rejected using the same rationale as the rejection of claim 2. Further, Zheng shows in figure 2 that output from the second NN is used to adjust the first ANN. As combined with Hu, the first ANN is adjusted by adjusting weights based on the output of the second ANN (Hu: figure 1).
Claim(s) 19 and 30 is/are also alternatively rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al., US 11967103 A1 (hereinafter Zheng) in view of Lee, U.S. Patent Application Publication US 20210326685 (hereinafter Lee) and in further view of Prokhorov, U.S. Patent Application Publication 20100274433 (hereinafter Prokhorov).
Regarding claim 19, Zheng together with Lee teaches the computing system according to claim 17 as described above. Zheng as combined with Lee further teaches wherein the first ANN (training engine as an ANN) comprises an autoencoder (col. 11, line 46 through col. 12, line 7, the fused representation of image and point cloud data is generated by projecting the dimension of the point cloud data and then concatenating with the three-dimensional location; this is functionally what an autoencoder does: dimensional reduction of data followed by creation of data based on the reduced data).
However, even if Zheng combined with Lee does not teach wherein the first ANN comprises an autoencoder, Prokhorov teaches an ANN comprising an autoencoder (paragraph 27).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the first ANN taught by Zheng combined with Lee to comprise an autoencoder because doing so would provide the advantage of learning an efficient coding, which creates an efficient data representation, which would be desirable (Prokhorov: paragraph 27).
Regarding claim 30, it recites limitations that are the method and non-transitory computer readable medium versions of claim 19 and is therefore rejected using the same rationale used for the rejection of claim 19, using Zheng together with Lee and Prokhorov.
Claim(s) 3 and 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al., US 11967103 A1 (hereinafter Zheng) in view of Lee, U.S. Patent Application Publication US 20210326685 (hereinafter Lee) and in further view of Kubo et al., U.S. Patent 10515312 (hereinafter Kubo).
Regarding claim 3, Zheng together with Lee teaches the computing system according to claim 1 as described above. Zheng as combined with Lee further teaches wherein the computing system adapts the learning features of the second ANN in the artificial intelligence model based on the output of the first ANN (Zheng: figure 2).
However, Zheng together with Lee does not specifically disclose the adapting by removing nodes.
However, Kubo teaches removing nodes from an ANN in an artificial intelligence model (col. 2, lines 11-16, individual nodes of the artificial neural network may be deactivated).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computing system of Zheng and Lee by including removing nodes from the second ANN in the artificial intelligence model based on the output of the first ANN as taught by Kubo. One of ordinary skill would have been motivated to do so because removing nodes that are not needed makes the network more compact without sacrificing accuracy, while at the same time requiring less storage space, using less bandwidth, and providing improved performance, thereby making the overall system more efficient.
Regarding claim 22, it recites limitations that are the method and non-transitory computer readable medium versions of claim 3 and is therefore rejected using the same rationale used for the rejection of claim 3, except that in claim 22 the adapting of features is with regard to the first ANN. Because Zheng together with Lee teaches providing feedback to the first ANN (Zheng: figure 2, elements 264 and 266 to the first ANN), the removal of nodes would be performed in the same manner by applying the teaching of Kubo to the combination of Zheng and Lee.
Claim(s) 4 and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al., US 11967103 A1 (hereinafter Zheng) in view of Lee, U.S. Patent Application Publication US 20210326685 (hereinafter Lee) and in further view of Wood, U.S. Patent 4912652 (hereinafter Wood).
Regarding claim 4, Zheng together with Lee teaches the computing system according to claim 1 as described above. Zheng as combined with Lee further teaches wherein the computing system adapts the learning features of the second ANN in the artificial intelligence model based on the output of the first ANN (Zheng: figure 2, updating parameters of the second ANN via neural network parameters 230).
However, neither Zheng nor Lee specifically discloses that the learning features are adapted by adjusting thresholds. Wood teaches adjusting learning features of a neural network by adjusting thresholds (col. 6, lines 23-25, adjusting thresholds (offsets); col. 3, lines 24-29; col. 7, lines 11-20, adjusting thresholds (offsets)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computing system of Zheng and Lee by including the adjusting of thresholds when adapting the learning features of the second ANN in the artificial intelligence model based on the output of the first ANN as taught by Wood. One would have been motivated because adjusting the thresholds allows the neural network to reach a solution with significantly fewer iterations, thereby making the neural network more efficient (Wood: col. 5, lines 23-27).
Regarding claim 23, it recites limitations that are the method and non-transitory computer readable medium versions of claim 4 and is therefore rejected using the same rationale used for the rejection of claim 4, except that in claim 23 the adapting of learning features is of the first ANN. Because Zheng teaches that the parameters of the second ANN are adjusted and fed to the first ANN (Zheng: figure 2, updating parameters of the second ANN via neural network parameters 230 and corresponding text), the parameters of the first ANN would also be adjusted. It would follow that, when combined with Wood, the parameters adjusted would be the thresholds.
Claim(s) 13 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al., US 11967103 A1 (hereinafter Zheng) in view of Lee, U.S. Patent Application Publication US 20210326685 (hereinafter Lee) and in further view of Dupont et al., U.S. Patent Application Publication 20210165939 (hereinafter Dupont).
Regarding claim 13, Zheng together with Lee teaches the computing system according to claim 1 as described above. Zheng as combined with Lee teaches wherein the first ANN maps third data to a latent distribution (Zheng: figure 2, the output of the training engine is a fused representation, which is a latent distribution using labeled input data, unlabeled point cloud data, and data from sub neural network B). However, neither Zheng nor Lee explicitly discloses wherein the first ANN maps third data to a continuous disentangled latent distribution (emphasis shows what Zheng and Lee do not explicitly teach).
However, Dupont teaches wherein an ANN maps input data to a continuous disentangled latent distribution (paragraph 67, augmenting a Variational Autoencoder framework with a joint latent distribution to learn disentangled continuous and discrete representations).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computing system of Zheng and Lee by mapping input data to a continuous disentangled latent distribution as taught by Dupont. One would have been motivated to do so because Dupont teaches that doing so would improve reconstruction quality without reducing disentanglement (paragraph 6).
Regarding claim 14, Zheng together with Lee and Dupont teaches the computing system of claim 13 as described above. Zheng together with Lee and Dupont further teaches wherein the computing system performs data augmentation by generating samples in an area where at least two classes overlap in the continuous disentangled latent distribution (Zheng: figure 2, the first ANN in the form of a training engine takes at least two classes that overlap: labeled image data, unlabeled point cloud data, and training data point classification network output), wherein the computing system provides the samples to the first ANN as fourth input data (Zheng: figure 2, wherein Zheng’s fourth data is the training pose estimation network output), and wherein the first ANN generates a revised continuous disentangled latent distribution based at least in part on the fourth input data (Zheng: figure 2, the training engine takes labeled image data 204, unlabeled point cloud data 208, training data point classification network output 266, and training pose estimation network output 264 and provides a fused representation to sub neural network A 252).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US Patent Application Publication 20250061330 to Sanchez teaches using unlabeled and labeled data in an AI system.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to James K. Trujillo whose telephone number is (571)272-3677. The examiner can normally be reached M-F 8:00-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dede Zecher can be reached at (571) 272-7771. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JAMES K. TRUJILLO
Supervisory Patent Examiner
Art Unit 2151
/James Trujillo/ Supervisory Patent Examiner, Art Unit 2151