DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments filed 12/17/2025 have been entered.
Claims 1-6 and 8-10 remain pending in the application.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-6 and 8-10 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The disclosure does not provide adequate structure to perform “controlling object detection, segmentation, or estimation of a distance from a camera to an object, the object detection, the segmentation, and the estimation being performed using the second prediction model after the training” in claims 1, 9, and 10.
Regarding the limitation “controlling object detection, segmentation, or estimation of a distance from a camera to an object, the object detection, the segmentation, and the estimation being performed using the second prediction model after the training” in claims 1, 9, and 10, the applicant cites “First prediction model 21 and second prediction model 22 are, for example, neural network models and perform prediction on input data. The prediction is, for example, classification here but may be object detection, segmentation, estimation of a distance from a camera to an object, or the like. Note that behavior may be a correct answer/an incorrect answer or a class when the prediction is the classification, may be a size or a positional relation of a detection frame instead of or together with the correct answer/the incorrect answer or the class when the prediction is the object detection, may be a class, a size, or a positional relation of a region when the prediction is the segmentation, and may be length of an estimated distance when the prediction is the distance estimation.” (Specification, page 12, second paragraph), Fig. 6, and “In this way, information processing device 300 including obtainer 310 that obtains sensing data from sensor 400, controller 320 that controls processing using second prediction model 22 after training, and outputter 330 that outputs the data based on the prediction result, which is an output of second prediction model 22, may be provided.” (Specification, page 18, last paragraph, to page 19, first paragraph) as support for this limitation. The disclosure does not describe controlling object detection, segmentation, or estimation of a distance from a camera to an object, but rather discloses outputting predictions such as object detection, segmentation, or estimation of a distance from a camera to an object after training. There is no mention of controlling said tasks post-training, as stated in the amended claims.
Dependent claims 2-6 and 8 inherit the deficiency from claim 1 and therefore are rejected on the same basis.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6 and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Tang et al. (“Finger vein verification using a Siamese CNN”), hereafter Tang, in view of Li et al. (Pub. No.: US 2019/0287515 A1), hereafter Li.
Regarding claim 1, Tang discloses:
An information processing method to be executed by a computer, the information processing method comprising (Tang, page 313, left column, first paragraph, “implementation of depthwise separable convolution in TensorFlow” teaches an implementation in TensorFlow, which is executed by a computer),
obtaining first data belonging to a first type and second data belonging to a second type different from the first type (Tang, Fig. 7 and page 310, paragraph below equation (9), lines 1-2 “X1 and X2 refer to the finger vein images which are the inputs of the two branches of the network” and page 312, paragraph below Table 5, lines 2-5 “Every image in each training set is paired with one genuine image and one impostor image, which were randomly chosen from the rest of the same training set, to form one positive sample pair and one negative sample pair” teaches obtaining first data belonging to a first type and second data belonging to a second type different from the first type, i.e., positive and negative samples which are different types),
calculating a first prediction result by inputting the first data into a first prediction model (Tang, Fig. 7 teaches calculating a first prediction result from a teacher model, i.e., a first prediction model, by inputting the first data into the model),
calculating a second prediction result by inputting the first data into a second prediction model (Tang, Fig. 7, and Fig. 8 teaches calculating a second prediction result from a student model, i.e., a second prediction model, by inputting the first data into the model),
calculating a third prediction result by inputting the second data into the second prediction model (Tang, Fig. 7 and Fig. 8 teaches calculating a third prediction result by inputting the second data into the student model, i.e., the second prediction model),
calculating a first error between the first prediction result and the second prediction result (Tang, Fig. 7 teaches KD loss as the first error between the prediction results),
calculating a second error between the second prediction result and the third prediction result that have been outputted from the second prediction model (Tang, page 310, right column, paragraph 1, lines 2-6 “Referring to Fig. 7, first a guided layer is chosen from the ‘student model’ as the ‘dividing line’, and the part before it is defined as the bottom part of ‘student model’; meanwhile, the part behind it is defined as the top part of ‘student model’” and Fig. 7 teaches MC loss as the second error between the two prediction results that have been outputted from the same student model),
training the second prediction model by machine learning, based on the first error and the second error (Tang, Fig. 7 and Fig. 8 teaches training the lightweight CNN student model based on optimization of the KD and MC loss),
wherein the training results in causing a behavior that differs between the first prediction model and the second prediction model to become closer to each other for a certain prediction target, the behavior being a respective output of the first prediction model and the second prediction model with respect to each of a plurality of inputs of the first prediction model and the second prediction model (Tang, abstract, lines 5-9 “they construct a Siamese structure combining with a modified contrastive loss function for training the above CNN, which effectively improves the network's performance. Finally, considering the feasibility of deploying the above CNN on embedded devices, they construct a lightweight CNN with depthwise separable convolution and adopt a knowledge distillation method to learn the knowledge from the pretrained-weights based CNN,” teaches that the training results cause a behavior that differs between the first prediction model and the second prediction model to become closer to each other for a certain prediction target with a contrastive loss for training, through an output of the first prediction model and the second prediction model),
the training includes: calculating a training parameter by which the first error … and the second error … (Tang, Fig. 7 and equation (9) teaches calculating a training parameter which uses the first and second error),
updating the second prediction model using the training parameter calculated (Tang, Fig. 7 and Fig. 8 teaches updating the student model using the calculated training parameters),
the information processing method further comprises: controlling object detection, segmentation, or estimation of a distance from a camera to an object, the object detection, the segmentation, and the estimation being performed using the second prediction model after the training (Tang, Fig. 5 and page 307, left column, paragraph 2, lines 11-13 “we propose a novel method based on the Siamese CNN to better meet the requirements of finger vein verification” teaches testing the trained model and controlling predictions of finger vein verification, i.e., object detection).
Tang discloses calculating a training parameter with the first error and the second error, but does not explicitly teach decreasing the first error and increasing the second error.
Li teaches:
calculating … the first error decreases and the second error increases (Li, Fig. 8 teaches decreasing a first error in elements 808 and 810, and increasing a second error in element 812).
Tang and Li are analogous art because they are from the same field of endeavor, teacher-student learning.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Tang to include calculating a training parameter by which the first error decreases and the second error increases, based on the teachings of Li. The motivation for doing so would have been to improve the results under different conditions (Li, paragraph [0077], lines 2-3).
Regarding claim 2, Tang, in view of Li, discloses the information processing method according to claim 1. Tang further discloses:
wherein the first type and the second type are classes (Tang, page 308, right column, paragraph below equation (2), line 12 “vein images belong either to class 1 or class 2” teaches the types to be classes).
Regarding claim 3, Tang, in view of Li, discloses the information processing method according to claim 1. Tang further discloses:
wherein the first prediction model has a configuration different from a configuration of the second prediction model (Tang, Fig. 7 teaches the first prediction model to have a configuration different from a configuration of the second prediction model, as shown by the configurations of the student and teacher models displayed).
Regarding claim 4, Tang, in view of Li, discloses the information processing method according to claim 1. Tang further discloses:
wherein the first prediction model has a processing accuracy different from a processing accuracy of the second prediction model (Tang, Fig. 7, Fig. 10 and page 310, right column, first paragraph, lines 8-9 “The first step is the pretraining process for the bottom part of the ‘student model’” teaches the processing accuracy of the second prediction model to be different from the first prediction model).
Regarding claim 5, Tang, in view of Li, discloses the information processing method according to claim 3. Tang further discloses:
wherein the second prediction model is obtained by making the first prediction model lighter (Tang, Fig. 7 and page 307, left column, second paragraph, lines 22-26 “we first developed and trained a pretrained weights based CNN, whose knowledge was then transferred to a newly built lightweight CNN by a knowledge distillation method, which made the final finger vein verification CNN model small but effective.” teaches obtaining the second prediction model by making the first prediction model lighter).
Regarding claim 6, Tang, in view of Li, discloses the information processing method according to claim 4. Tang further discloses:
wherein the second prediction model is obtained by making the first prediction model lighter (Tang, Fig. 7 and page 307, left column, second paragraph, lines 22-26 “we first developed and trained a pretrained weights based CNN, whose knowledge was then transferred to a newly built lightweight CNN by a knowledge distillation method, which made the final finger vein verification CNN model small but effective.” teaches obtaining the second prediction model by making the first prediction model lighter).
Regarding claim 8, Tang, in view of Li, discloses the information processing method according to claim 1. Tang further discloses:
wherein the first prediction model and the second prediction model are neural network models (Tang, Fig. 7 teaches the prediction models to be neural networks).
Regarding claim 9, Tang discloses:
An information processing system comprising: a memory configured to store a program; a processor configured to execute the program and control the information processing system to: (Tang, page 311, left column, paragraph 1, last 3 lines “All of the testings are conducted using a system with four cores each with a 4.00 GHz Intel I7-6850K central processing unit (CPU) and an NVIDIA GTX 1080 graphics processing unit.”),
obtains first data belonging to a first type and second data belonging to a second type different from the first type (Tang, page 313, left column, first paragraph, “implementation of depthwise separable convolution in TensorFlow” teaches an implementation in TensorFlow, which is executed by a computer processor, and Fig. 7 and page 310, paragraph below equation (9), lines 1-2 “X1 and X2 refer to the finger vein images which are the inputs of the two branches of the network” and page 312, paragraph below Table 5, lines 2-5 “Every image in each training set is paired with one genuine image and one impostor image, which were randomly chosen from the rest of the same training set, to form one positive sample pair and one negative sample pair” teaches obtaining first data belonging to a first type and second data belonging to a second type different from the first type, i.e., positive and negative samples which are different types),
calculates a first prediction result by inputting the first data into a first prediction model (Tang, Fig. 7 teaches calculating a first prediction result from a teacher model, i.e., a first prediction model, by inputting the first data into the model),
calculates a second prediction result by inputting the first data into the second prediction model (Tang, Fig. 7, and Fig. 8 teaches calculating a second prediction result from a student model, i.e., a second prediction model, by inputting the first data into the model),
calculates a third prediction result by inputting the second data into the second prediction model (Tang, Fig. 7 and Fig. 8 teaches calculating a third prediction result by inputting the second data into the student model, i.e., the second prediction model),
calculates a first error between the first prediction result and the second prediction result (Tang, Fig. 7 teaches KD loss as the first error between the prediction results),
calculates a second error between the second prediction result and the third prediction result that have been outputted from the second prediction model (Tang, page 310, right column, paragraph 1, lines 2-6 “Referring to Fig. 7, first a guided layer is chosen from the ‘student model’ as the ‘dividing line’, and the part before it is defined as the bottom part of ‘student model’; meanwhile, the part behind it is defined as the top part of ‘student model’” and Fig. 7 teaches MC loss as the second error between the two prediction results that have been outputted from the same student model),
trains the second prediction model by machine learning, based on the first error and the second error (Tang, page 313, left column, first paragraph, “implementation of depthwise separable convolution in TensorFlow” teaches an implementation in TensorFlow, which is executed by a computer processor, and Fig. 7 and page 310, paragraph below equation (9), lines 1-2 “X1 and X2 refer to the finger vein images which are the inputs of the two branches of the network” and Fig. 7 and Fig. 8 teaches training the lightweight CNN student model based on optimization of the KD and MC loss, which are weighted losses in equation (9)),
wherein the training results in causing a behavior that differs between the first prediction model and the second prediction model to become closer to each other for a certain prediction target, the behavior being a respective output of the first prediction model and the second prediction model with respect to each of a plurality of inputs of the first prediction model and the second prediction model (Tang, abstract, lines 5-9 “they construct a Siamese structure combining with a modified contrastive loss function for training the above CNN, which effectively improves the network's performance. Finally, considering the feasibility of deploying the above CNN on embedded devices, they construct a lightweight CNN with depthwise separable convolution and adopt a knowledge distillation method to learn the knowledge from the pretrained-weights based CNN,” teaches that the training results cause a behavior that differs between the first prediction model and the second prediction model to become closer to each other for a certain prediction target with a contrastive loss for training, through an output of the first prediction model and the second prediction model),
the training includes: calculating a training parameter by which the first error … and the second error …(Tang, Fig. 7 and equation (9) teaches calculating a training parameter which uses the first and second error),
updating the second prediction model using the training parameter calculated (Tang, Fig. 7 and Fig. 8 teaches updating the student model using the calculated training parameters),
the processor is further configured to execute the program and control the information processing system to: control object detection, segmentation, or estimation of a distance from a camera to an object, the object detection, the segmentation, and the estimation being performed using the second prediction model after the training (Tang, Fig. 5 and page 307, left column, paragraph 2, lines 11-13 “we propose a novel method based on the Siamese CNN to better meet the requirements of finger vein verification” teaches testing the trained model and controlling predictions of finger vein verification, i.e., object detection).
Tang discloses calculating a training parameter with the first error and the second error, but does not explicitly teach decreasing the first error and increasing the second error.
Li teaches:
calculating … the first error decreases and the second error increases (Li, Fig. 8 teaches decreasing a first error in elements 808 and 810, and increasing a second error in element 812).
Tang and Li are analogous art because they are from the same field of endeavor, teacher-student learning.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Tang to include calculating a training parameter by which the first error decreases and the second error increases, based on the teachings of Li. The motivation for doing so would have been to improve the results under different conditions (Li, paragraph [0077], lines 2-3).
Regarding claim 10, Tang discloses:
An information processing device comprising: a memory configured to store a program; a processor configured to execute the program and control the information processing device to: (Tang, page 311, left column, paragraph 1, last 3 lines “All of the testings are conducted using a system with four cores each with a 4.00 GHz Intel I7-6850K central processing unit (CPU) and an NVIDIA GTX 1080 graphics processing unit.”),
obtains … data (Tang, page 313, left column, first paragraph, “implementation of depthwise separable convolution in TensorFlow” teaches an implementation in TensorFlow, which is executed by a computer, and Table 3 teaches obtaining data),
obtains a prediction result by inputting the … data into a second prediction model (Tang, Fig. 7 teaches obtaining prediction results by inputting data into the second student model),
outputs data based on the prediction result obtained (Tang, Fig. 7 and page 307, right column, paragraph 3, lines 24-25 “The final output dimension of the proposed CNN” teaches outputting data based on the prediction result obtained),
wherein the processor is further configured to execute the program and train the second prediction model by machine learning based on a first error and a second error (Tang, Fig. 7 and Fig. 8 teaches training the lightweight CNN student model based on optimization of the KD and MC loss),
the first error is an error between a first prediction result and a second prediction result (Tang, Fig. 7 teaches KD loss as the first error between the prediction results),
the second error is an error between the second prediction result and a third prediction result (Tang, Fig. 7 teaches MC loss as the second error between the prediction results),
the processor is further configured to execute the program and calculate the first prediction result by inputting first data into a first prediction model (Tang, Fig. 7 teaches calculating a first prediction result from a teacher model, i.e., a first prediction model, by inputting the first data into the model),
the processor is further configured to execute the program and calculate the second prediction result by inputting the first data into the second prediction model, the second prediction result being outputted from the second prediction model (Tang, page 310, right column, paragraph 1, lines 2-6 “Referring to Fig. 7, first a guided layer is chosen from the ‘student model’ as the ‘dividing line’, and the part before it is defined as the bottom part of ‘student model’; meanwhile, the part behind it is defined as the top part of ‘student model’” and Fig. 7, and Fig. 8 teaches calculating a second prediction result from a student model, i.e., a second prediction model, by inputting the first data into the model),
the processor is further configured to execute the program and calculate the third prediction result by inputting second data into the second prediction model (Tang, Fig. 7 and Fig. 8 teaches calculating a third prediction result by inputting the second data into the student model, i.e., the second prediction model),
the first data is data belonging to a first type, and the second data is data belonging to a second type different from the first type (Tang, Fig. 7 and page 310, paragraph below equation (9), lines 1-2 “X1 and X2 refer to the finger vein images which are the inputs of the two branches of the network” and page 312, paragraph below Table 5, lines 2-5 “Every image in each training set is paired with one genuine image and one impostor image, which were randomly chosen from the rest of the same training set, to form one positive sample pair and one negative sample pair” teaches obtaining first data belonging to a first type and second data belonging to a second type different from the first type, i.e., positive and negative samples which are different types),
wherein the training results in causing a behavior that differs between the first prediction model and the second prediction model to become closer to each other for a certain prediction target, the behavior being a respective output of the first prediction model and the second prediction model with respect to each of a plurality of inputs of the first prediction model and the second prediction model (Tang, abstract, lines 5-9 “they construct a Siamese structure combining with a modified contrastive loss function for training the above CNN, which effectively improves the network's performance. Finally, considering the feasibility of deploying the above CNN on embedded devices, they construct a lightweight CNN with depthwise separable convolution and adopt a knowledge distillation method to learn the knowledge from the pretrained-weights based CNN,” teaches that the training results cause a behavior that differs between the first prediction model and the second prediction model to become closer to each other for a certain prediction target with a contrastive loss for training, through an output of the first prediction model and the second prediction model),
the second prediction model is updated using a training parameter calculated to cause the first error … and the second error … (Tang, Fig. 7, Fig. 8, and equation (9) teaches calculating a training parameter which uses the first and second error, and Fig. 7 and Fig. 8 teaches updating the student model using the calculated training parameters),
the processor is further configured to control object detection, segmentation, or estimation of a distance from a camera to an object, the object detection, the segmentation, and the estimation being performed using the second prediction model after the training (Tang, Fig. 5 and page 307, left column, paragraph 2, lines 11-13 “we propose a novel method based on the Siamese CNN to better meet the requirements of finger vein verification” teaches testing the trained model and controlling predictions of finger vein verification, i.e., object detection).
Tang discloses the second prediction model is updated using a training parameter calculated to cause the first error … and the second error …, but does not explicitly teach decreasing the first error and increasing the second error.
Li teaches:
calculating … the first error decreases and the second error increases (Li, Fig. 8 teaches decreasing a first error in elements 808 and 810, and increasing a second error in element 812).
Tang and Li are analogous art because they are from the same field of endeavor, teacher-student learning.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Tang to include calculating a training parameter by which the first error decreases and the second error increases, based on the teachings of Li. The motivation for doing so would have been to improve the results under different conditions (Li, paragraph [0077], lines 2-3).
Tang discloses obtaining data, but does not explicitly disclose that the data is obtained from a sensor.
Li discloses:
obtains sensing data (Li, Fig. 5 and Fig. 9 teaches obtaining sensing data from the sensor in element 921 on Fig. 9).
Tang discloses obtaining a prediction result by inputting the data into a second prediction model, but does not explicitly disclose that the data is obtained from a sensor.
Li discloses:
obtains a prediction result by inputting the sensing data into a second prediction model (Li, Fig. 5 and Fig. 9 teaches obtaining prediction results by inputting sensing data into a second prediction model).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Tang to include obtaining sensing data and obtaining a prediction result by inputting the sensing data into a second prediction model, based on the teachings of Li. The motivation for doing so would have been to improve model robustness (Li, paragraph [0017], lines 7-8).
Response to Arguments
Applicant's arguments filed 12/17/2025 with regard to the 35 U.S.C. 101 rejection have been fully considered, and they are persuasive. The rejection is withdrawn.
Applicant's arguments filed 12/17/2025 with regard to the 35 U.S.C. 102/103 rejection have been fully considered, but they are not persuasive.
The applicant asserts on pages 15-16 of the remarks: “Applicant respectfully submits that, for training using the structure as illustrated in Fig. 7 of Tang, a neural network including the bottom part and the top part shown in the middle portion of Fig. 7 is required, in addition to the trained teacher model in the upper portion of Fig. 7 and the lightweight CNN in the lower portion of Fig. 7. Specifically, a prediction result is output from each of the bottom part and the top part.”
The Examiner respectfully disagrees with this assertion, as Tang explicitly states that this middle model is the lightweight student model, with the dividing line defining the top and bottom part of the student model (page 310, right column, paragraph 1, lines 1-6 “…the lightweight CNN (the ‘student model’) in the training stage. Referring to Fig. 7, first a guided layer is chosen from the ‘student model’ as the ‘dividing line’, and the part before it is defined as the bottom part of ‘student model’; meanwhile, the part behind it is defined as the top part of ‘student model’”). Thus, the lightweight student model here refers to both the lightweight model and the middle model in Fig. 7.
Applicant further argues that the claims “do not require any prediction model other than the first prediction model and the second prediction model.” However, the claims are not so limited. There is no direct requirement that the second and third prediction results come only from the second model; the only requirement is that the results be calculated at least by inputting the first and second data into the second prediction model, and the claims do not exclude, for example, use of another model to also assist in the calculation.
Therefore, the examiner finds that (1) the art teaches the claims even under the narrow view argued by the applicant, and (2) the claims are broader than the applicant argues.
The applicant asserts on page 16 of the remarks “The Examiner's assertions regarding "attacking references individually where the rejections are based on combinations of references" ... it is entirely appropriate to refer to the Examiner's assertions regarding Li on its own in order to explain why Li does not cure the Examiner's stated deficiencies of Tang”.
The Examiner respectfully disagrees with the above assertion, as the applicant states in the response filed 08/19/2025 that “Applicant respectfully submits that paragraph [0126] and Fig. 8 of Li indeed describe minimizing and maximizing output errors between models. However, unlike the disclosure and claims of the present application, Li fails to describe decreasing a first error between a first prediction result obtained when first data is input to a first prediction model and a second prediction result obtained when the same first data is input into a second prediction model”. This is a clear piecemeal analysis of the rejection, as Tang explicitly discloses calculating a first prediction result by inputting the first data into a first prediction model (Fig. 7 teaches calculating a first prediction result from a teacher model, i.e., a first prediction model, by inputting the first data into the model) and calculating a second prediction result by inputting the first data into a second prediction model (Fig. 7 and Fig. 8 teaches calculating a second prediction result from a student model, i.e., a second prediction model, by inputting the first data into the model). These limitations need not be taught by Li, as they are already present in Tang. Furthermore, Tang discloses calculating a training parameter with the first error and the second error (Tang, Fig. 7 and equation (9) teaches calculating a training parameter which uses the first and second error), but does not explicitly teach decreasing the first error and increasing the second error. Li teaches this embodiment, and the applicant appears to agree with this interpretation in the response filed 08/19/2025.
Thus, the Examiner maintains that the applicant’s arguments attack references individually where the rejections are based on combinations of references, by highlighting features taught by Tang to attack Li.
Claims 9 and 10 are substantially similar to claim 1, and thus are rejected on the same basis.
Claims depending from the rejected independent claims do not overcome the deficiencies of those independent claims, and thus remain rejected as well.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUMAIRA ZAHIN MAUNI whose telephone number is (703)756-5654. The examiner can normally be reached Monday - Friday, 9 am - 5 pm (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MATT ELL can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.Z.M./Examiner, Art Unit 2141
/MATTHEW ELL/Supervisory Patent Examiner, Art Unit 2141