DETAILED ACTION
This Office Action is in response to Application No. 18/071,058, filed on November 29, 2022, in which claims 1-14 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in REPUBLIC OF KOREA on 07/19/2022. It is noted, however, that applicant has not filed a certified copy of the KR10-2022-0088937 application as required by 37 CFR 1.55.
Status of Claims
Claims 1-14 are pending, of which claims 1-14 are rejected under 35 U.S.C. 103.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hwang et al. US 2023/0274526 A1 (hereinafter ‘Hwang’) in view of Haidar et al. US 2022/0335303 A1 (hereinafter ‘Haidar’).
As per claim 1, Hwang discloses: An electronic apparatus for lightweight of a three dimensional (3D) object detection model (Hwang: paragraph 0016: discloses 3D points obtained by light 'lightweight' detection and ranging (lidar); paragraph 0018: discloses training a deep learning model for detecting objects) based on knowledge distillation (Hwang: paragraph 0014: discloses a method using knowledge distillation for semi-supervised learning. Examiner also discusses knowledge distillation in view of the secondary art below), the electronic device comprising (Hwang: paragraph 0048: discloses a computer 'electronic' processing device):
wherein the first feature map and the second feature map are extracted through input point cloud data (Hwang: paragraph 0020: discloses that labeled and unlabeled feature vectors are point cloud data).
a self-attention module configured to acquire a plurality of pieces of detection information from a plurality of detection heads for 3D object detection (Hwang: paragraph 0016: discloses 3D points obtained by light detection and ranging (lidar)), respectively, using the first feature map and the second feature map (Hwang: paragraph 0020: discloses feature vectors for labeled and unlabeled point clouds), and perform knowledge distillation using a relation-aware self-attention calculated based on the acquired plurality of pieces of detection information (Hwang: paragraph 0016: discloses an automatic labeling 'self-attention' method for unlabeled point clouds among the set of point clouds).
It is noted, however, that Hwang does not specifically detail the aspects of
a backbone network module configured to perform knowledge distillation such that a first feature map of a teacher network and a second feature map of a student network are made identical to each other, as recited in claim 1.
On the other hand, Haidar achieved the aforementioned limitations by providing mechanisms of
a backbone network module configured to perform knowledge distillation (Haidar: paragraph 0004: discloses knowledge distillation to transfer the knowledge of a large trained neural network model, which examiner equates to a backbone network module) such that a first feature map (Haidar: paragraph 0008: discloses that intermediate representations are referred to as feature maps) of a teacher network (Haidar: paragraph 0021: discloses intermediate layers of a teacher, including the input layer of the teacher) and a second feature map (Haidar: paragraph 0008: discloses that intermediate representations are referred to as feature maps) of a student network are made identical to each other (Haidar: paragraph 0021: discloses intermediate layers of a student, including the input layer of the student).
Haidar and Hwang are analogous art because they are from the “same field of endeavor” and both from the same “problem-solving area”. Namely, they are both from the field of “Machine Learning Systems”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the systems of Haidar and Hwang because they are both directed to machine learning systems and are from the same field of endeavor. The skilled person would therefore regard it as a normal option to combine the features of Hwang with the method described by Haidar in order to solve the problem posed.
The motivation for doing so would have been to employ knowledge distillation techniques that enable the use of intermediate representations of the teacher to train the student (Haidar: paragraph 0019).
Therefore, it would have been obvious to combine Hwang with Haidar to obtain the invention as specified in instant claim 1.
As per claim 2, most of the limitations of this claim have been noted in the rejection of claim 1 above.
It is noted, however, that Hwang does not specifically detail the aspects of
generate first compressed data and second compressed data from the first feature map and the second feature map, respectively, using an encoder; and perform the knowledge distillation such that the first compressed data and the second compressed data are made identical to each other as recited in claim 2.
On the other hand, Haidar achieved the aforementioned limitations by providing mechanisms of
generate first compressed data and second compressed data from the first feature map and the second feature map, respectively, using an encoder (Haidar: paragraph 0057: discloses a bidirectional encoder); and perform the knowledge distillation such that the first compressed data and the second compressed data are made identical to each other (Haidar: paragraph 0004: discloses applying compression techniques to the teacher and student models; paragraph 0070: discloses compression using Knowledge Distillation ('KD') based on the similarity between the teacher and student models).
As per claim 3, most of the limitations of this claim have been noted in the rejection of claims 1 and 2 above.
It is noted, however, that Hwang does not specifically detail the aspects of
reconstruct the second feature map from the first compressed data and reconstruct the first feature map from the second compressed data using a decoder as recited in claim 3.
On the other hand, Haidar achieved the aforementioned limitations by providing mechanisms of
reconstruct the second feature map from the first compressed data and reconstruct the first feature map from the second compressed data using a decoder (Haidar: paragraph 0004: discloses applying compression techniques to the teacher and student models; paragraph 0070: discloses compression using Knowledge Distillation ('KD') based on the similarity between the teacher and student models. Examiner argues that an encoder and a decoder are needed for compression).
As per claim 4, most of the limitations of this claim have been noted in the rejection of claims 1, 2 and 3 above.
It is noted, however, that Hwang does not specifically detail the aspects of
allow an auto-encoder including the encoder and the decoder to be shared between the teacher network and the student network to perform the knowledge distillation as recited in claim 4.
On the other hand, Haidar achieved the aforementioned limitations by providing mechanisms of
allow an auto-encoder including the encoder and the decoder to be shared between the teacher network and the student network to perform the knowledge distillation (Haidar: paragraph 0004: discloses applying compression techniques to the teacher and student models; paragraph 0070: discloses compression using Knowledge Distillation ('KD') based on the similarity between the teacher and student models. Examiner argues that an encoder and a decoder are needed for compression).
As per claim 5, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, Hwang discloses: acquire a plurality of pieces of first detection information from a plurality of first detection heads connected to the teacher network using the first feature map (Hwang: paragraph 0020: discloses feature vectors for labeled and unlabeled point clouds); and acquire a plurality of pieces of second detection information from a plurality of second detection heads connected to the student network using the second feature map (Hwang: paragraph 0016: discloses an automatic labeling 'self-attention' method for unlabeled point clouds among the set of point clouds).
As per claim 6, most of the limitations of this claim have been noted in the rejection of claims 1 and 5 above. In addition, Hwang discloses: calculate an inter-relation attention between the plurality of pieces of first detection information and an intra-relation attention between a plurality of pieces of third detection information each of which is obtained from a corresponding one of the plurality of first detection heads and related to a different object (Hwang: paragraph 0020: discloses feature vectors for labeled and unlabeled point clouds); and calculate an inter-relation attention between the plurality of pieces of second detection information and an intra-relation attention between a plurality of pieces of fourth detection information each of which is obtained from a corresponding one of the plurality of second detection heads and related to a different object (Hwang: paragraph 0016: discloses an automatic labeling 'self-attention' method for unlabeled point clouds among the set of point clouds).
As per claim 7, most of the limitations of this claim have been noted in the rejection of claims 1, 5 and 6 above. In addition, Hwang discloses: wherein the self-attention module is configured to perform the knowledge distillation using the relation-aware self-attention that is obtained by fusing the inter-relation attention and the intra-relation attention of the teacher network (Hwang: paragraph 0016: discloses an automatic labeling 'self-attention' method for unlabeled point clouds among the set of point clouds).
As per claim 8, Hwang discloses: A method of performing weight-lightening on a three dimensional (3D) (Hwang: paragraph 0016: discloses 3D points obtained by light 'lightweight' detection and ranging (lidar); paragraph 0018: discloses training a deep learning model for detecting objects) object detection model based on knowledge distillation in an electronic apparatus, the method comprising: The remaining limitations of claim 8 are similar to the limitations of claim 1. Therefore, the examiner rejects these remaining limitations under the same rationale as the limitations rejected under claim 1.
As per claim 9, the limitations of this claim are similar to those of claim 2. Therefore, the examiner rejects claim 9 under the same rationale as claim 2.
As per claim 10, the limitations of this claim are similar to those of claim 3. Therefore, the examiner rejects claim 10 under the same rationale as claim 3.
As per claim 11, the limitations of this claim are similar to those of claim 4. Therefore, the examiner rejects claim 11 under the same rationale as claim 4.
As per claim 12, the limitations of this claim are similar to those of claim 5. Therefore, the examiner rejects claim 12 under the same rationale as claim 5.
As per claim 13, the limitations of this claim are similar to those of claim 6. Therefore, the examiner rejects claim 13 under the same rationale as claim 6.
As per claim 14, the limitations of this claim are similar to those of claim 7. Therefore, the examiner rejects claim 14 under the same rationale as claim 7.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US Pub. US 2023/0108621 A1 discloses "METHOD AND SYSTEM FOR GENERATING VISUAL FEATURE MAP".
US Pub. US 2022/0004803 A1 discloses "SEMANTIC RELATION PRESERVING KNOWLEDGE DISTILLATION FOR IMAGE-TO-IMAGE TRANSLATION".
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAVAN MAMILLAPALLI whose telephone number is (571) 270-3836. The examiner can normally be reached M-F, 8am - 4pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ann J Lo can be reached on (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PAVAN MAMILLAPALLI/
Primary Examiner, Art Unit 2159