DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendments filed 11/25/2025 have been entered.
Claims 1-15 remain pending in the application.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
"an acquisition unit that acquires information indicating a dropout rate in training of a model" in claim 13. This element is interpreted under 35 U.S.C. 112(f) as a processor (Fig. 3 and ¶[0101] “The control unit 40 is implemented by, for example, a central processing unit (CPU), a micro processing unit (MPU), or the like executing various programs… the control unit 40 includes an acquisition unit 41, a determination unit 42, a reception unit 43, a generation unit 44, and a provision unit 45.”), with the algorithm described in the specification (¶[0102-0103] “The acquisition unit 41 acquires the learning data used for the training of the model. For example, when once various pieces of data to be used as the learning data and labels assigned to the various pieces of data are received from the terminal apparatus 3, the acquisition unit 41 registers the received data and labels in the learning data database 31 as the learning data... The acquisition unit 41 acquires information indicating the dropout rate.”).
“a determination unit that determines a unit size of a hidden layer…” in claim 13. This element is interpreted under 35 U.S.C. 112(f) as a processor (Fig. 3 and ¶[0101] “The control unit 40 is implemented by, for example, a central processing unit (CPU), a micro processing unit (MPU), or the like executing various programs… the control unit 40 includes an acquisition unit 41, a determination unit 42, a reception unit 43, a generation unit 44, and a provision unit 45.”), with the algorithm described in the specification (¶[0079] and ¶[0097] “determine the unit size of the embedding layer of the first-type partial model by using a function indicating a relationship between the dropout rate and the unit size of the embedding layer”).
“a training unit that trains the model having the hidden layer” in claim 13. See 35 U.S.C. 112 rejection below for further comments.
“a generation unit that generates the model having a size based on the dropout rate” in claim 13. This element is interpreted under 35 U.S.C. 112(f) as a processor (Fig. 3 and ¶[0101] “The control unit 40 is implemented by, for example, a central processing unit (CPU), a micro processing unit (MPU), or the like executing various programs… the control unit 40 includes an acquisition unit 41, a determination unit 42, a reception unit 43, a generation unit 44, and a provision unit 45.”), with the algorithm described in the specification (¶[0110] “The generation unit 44 generates the model by performing batch normalization after dropout based on the dropout rate.”).
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
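By way of illustration only of the algorithm relied on above (¶[0079] and ¶[0097]), determining a unit size from a dropout rate amounts to evaluating a function of the dropout rate. The following is a minimal sketch; it assumes a hypothetical inverse-keep-probability relationship and is not the function actually disclosed in the specification:

    # Minimal illustrative sketch; the specification's actual function
    # (¶[0097]) is not reproduced here. Assumes a hypothetical rule in which
    # the unit size grows with the dropout rate so that the expected number
    # of surviving units stays near base_units.
    def determine_unit_size(base_units: int, dropout_rate: float) -> int:
        if not 0.0 <= dropout_rate < 1.0:
            raise ValueError("dropout rate must be in [0, 1)")
        # Scale by the inverse keep probability: at rate p, a layer of
        # base_units / (1 - p) units keeps about base_units active units.
        return round(base_units / (1.0 - dropout_rate))

    # Example: 512 base units at a 0.5 dropout rate yields 1024 units.
    assert determine_unit_size(512, 0.5) == 1024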
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 13 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The claim limitation “a training unit that trains the model having the hidden layer” in independent claim 13 invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. There is no mention of a training unit that trains a model in the disclosure. The specification is devoid of adequate structure to perform the claimed function.
Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 13 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
As described above, the disclosure does not provide adequate structure to perform the claimed function of “a training unit that trains the model having the hidden layer” in claim 13. The specification does not demonstrate that applicant has made an invention that achieves the claimed function because the invention is not described with sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention. (FP 7.31.01.)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Goel et al. (Pub. No.: US 2016/0307098 A1), hereafter Goel, in view of Wang et al. (“Jumpout: Improved Dropout for Deep Neural Networks with ReLUs”), hereafter Wang.
Regarding claim 1, Goel discloses:
An information processing method executed by a computer, the information processing method comprising (Goel, Figs. 2, 3, and 5, and ¶[0043]),
acquiring information indicating a dropout rate in training of a model (Goel, Fig. 4, element 402 and ¶[0053] teaches selecting an initial annealing schedule with a dropout rate, which is input into the system, as acquiring information indicating a dropout rate in training of a model),
… a unit size of a hidden layer (Goel, Fig. 4, element 410, ¶[0019], and ¶[0090] teaches adjusting the percentage of nodes to be dropped in model hidden layers, i.e. adjusting the unit size of the remaining layer) based on a function expressing a correlation between the dropout rate and the unit size of the hidden layer (Goel, ¶[0065] and ¶[0067] teaches the unit size to be based on functions (4), (6), and (7), where a correlation between dropout rate and a “probability distribution over… the number of active units in a layer of unit” is given),
wherein using the function reduces time for … the unit size as compared to … the unit size without using the function (Goel, ¶[0008] teaches the time for adjusting the unit size for regular dropout to be suboptimal compared to using the function),
training the model having the hidden layer with the … unit size; and generating a trained model having the hidden layer with the … unit size (Goel, ¶[0054] teaches training and generating trained model(s) using the dropout training used to adjust unit size of layers),
wherein the trained model has improved generalization performance on unseen data as compared to a model trained without using the correlation (Goel, ¶[0058] teaches the trained model has improved generalization performance on unseen data during test time as compared to a model trained without using the correlation).
While Goel teaches adjusting a unit size of a hidden layer based on a function expressing a correlation between the dropout rate and the unit size of the hidden layer, wherein using the function reduces time for … the unit size as compared to … the unit size without using the function, training the model having the hidden layer with the determined unit size; and generating a trained model having the hidden layer with the … unit size, Goel does not explicitly recite determining the unit size through the adjustment.
Wang discloses:
determining a unit size of a hidden layer (Wang, page 5, section 3.2 “Modification II: Dropout Rate adapted to the number of Activated Neurons”, paragraph 2, lines 8-16 “the fraction of active neurons in layer j is [equation image: qj+, reproduced in Wang] … we normalize the dropout rate by qj+ and use an actual dropout rate of p’j = pj/qj+” explicitly teaches determining a unit size of a hidden layer by determining the active neurons in the layer),
training … model having the hidden layer with the determined unit size; and generating a trained model having the hidden layer with the determined unit size (Wang, Figure 2 and pages 4-5, section 3.2 teaches training and generating models having the determined unit size).
Goel and Wang are analogous art because they are from the same field of endeavor, dropout learning and neural networks.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Goel to include determining the unit size, based on the teachings of Wang. One of ordinary skill in the art would have been motivated to make this modification in order “to better control the behavior of dropout for different layers and across various training stages”, as suggested by Wang (page 5, section 3.2, paragraph 2, lines 13-14).
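For clarity of the Wang passage quoted above, the adaptation divides the nominal dropout rate pj by qj+, the fraction of neurons with positive (active) outputs. A minimal sketch of that computation follows; the threshold-at-zero test assumes ReLU activations, as in Wang:

    import numpy as np

    # Sketch of the quoted normalization: actual rate p'_j = p_j / q_j+,
    # where q_j+ is the fraction of active (positive) neurons in layer j.
    def adapted_dropout_rate(activations: np.ndarray, nominal_rate: float) -> float:
        q_plus = float(np.mean(activations > 0.0))  # fraction of active neurons
        if q_plus == 0.0:
            return 0.0  # edge-case assumption: nothing active, nothing to drop
        return min(nominal_rate / q_plus, 1.0)

    # Example: with 2 of 5 ReLU outputs active (qj+ = 0.4), a nominal
    # rate of 0.2 becomes an actual rate of 0.5 on the active units.
    h = np.array([0.7, 0.0, 1.2, 0.0, 0.0])
    print(adapted_dropout_rate(h, 0.2))  # 0.5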
Regarding claim 2, Goel, in view of Wang, discloses the information processing method according to claim 1. Goel further discloses:
generating the model including a hidden layer based on the dropout rate (Goel, Fig. 1A, Fig. 1B and ¶[0040-0042] teaches generating hidden nodes of hidden layers based on dropout rate during dropout training).
Regarding claim 3, Goel, in view of Wang, discloses the information processing method according to claim 2. Goel further discloses:
generating the model including a hidden layer having a size determined based on the dropout rate (Goel, Fig. 1B and ¶[0042] teaches generating hidden nodes of hidden layers, i.e. the size of hidden layers, based on dropout rate during dropout training).
Regarding claim 4, Goel, in view of Wang, discloses the information processing method according to claim 3. Goel further discloses:
generating the model including a hidden layer having a size determined based on a correlation between the dropout rate and the size of the hidden layer (Goel, Fig. 4, ¶[0005] and ¶[0024] teaches sizes of layers, i.e. number of layers/nodes, as network parameters and training to be based on a correlation between dropout rate and all parameters, which include the sizes of hidden layers).
Regarding claim 5, Goel, in view of Wang, discloses the information processing method according to claim 4. Goel further discloses:
generating the model based on a positive correlation between the dropout rate and the size of the hidden layer (Goel, ¶[0022] and ¶[0067] teaches maximizing the performance of the model as generating the model based on a positive correlation between the dropout rate and the size of the hidden layer).
Regarding claim 6, Goel, in view of Wang, discloses the information processing method according to claim 4. Goel further discloses:
generating the model including a hidden layer having a size determined using a function having the dropout rate and the size of the hidden layer as variables (Goel, Fig. 4, 408, and ¶[0065-0067] teaches a hidden layer having a size determined using a function having the dropout rate and the size of the hidden layer as variables).
Regarding claim 7, Goel, in view of Wang, discloses the information processing method according to claim 6. Goel further discloses:
generating the model based on a target size specified based on the function, the target size being a size of the hidden layer corresponding to the dropout rate (Goel, Fig. 4, element 410-412 and ¶[0057] teaches generating the model based on a target size, i.e. fixed percentage of outputs, specified based on the function, the target size being a size of the hidden layer corresponding to the dropout rate).
Regarding claim 8, Goel, in view of Wang, discloses the information processing method according to claim 7. Goel further discloses:
generating the model including a hidden layer having a size within a predetermined range from the target size (Goel, Fig. 4, element 410-412 and ¶[0057] teaches a hidden layer having a size within a predetermined range from the target size).
Regarding claim 9, Goel, in view of Wang, discloses the information processing method according to claim 8. Goel further discloses:
generating the model including a hidden layer having a size with a highest accuracy among a plurality of sizes within a predetermined range from the target size (Goel, ¶[0020-0021] teaches a maximized generalization performance for a percentage of input/output neurons as generating the model including a hidden layer having a size with a highest accuracy, i.e. measurement of how well a learning machine generalizes to unseen (nontraining) data, among a plurality of sizes within a predetermined range from the target size).
Regarding claim 10, Goel, in view of Wang, discloses the information processing method according to claim 9. Goel further discloses:
generating a plurality of models corresponding to a plurality of sizes within a predetermined range from the target size, respectively, training the plurality of models, and selecting one model having a highest accuracy among the plurality of models as the model (Goel, ¶[0042] teaches training an ensemble of models as a plurality of models corresponding to a plurality of sizes within a predetermined range from the target size, to improve generalization performance).
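By way of illustration of the mapping for claims 9 and 10 above, selecting a hidden-layer size with the highest accuracy among candidates near a target size is a small search loop. The sketch below is illustrative only; train_model and evaluate_accuracy are hypothetical placeholders, not functions disclosed by Goel or the application:

    # Illustrative sketch of claims 9-10: train one model per candidate size
    # within a predetermined range (margin) of the target size, then keep the
    # model with the highest accuracy.
    def select_best_model(target_size: int, margin: int, train_model, evaluate_accuracy):
        candidates = range(target_size - margin, target_size + margin + 1)
        best_model, best_acc = None, float("-inf")
        for size in candidates:
            model = train_model(hidden_units=size)  # hypothetical trainer
            acc = evaluate_accuracy(model)           # hypothetical evaluator
            if acc > best_acc:
                best_model, best_acc = model, acc
        return best_model, best_acc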
Regarding claim 11, Goel, in view of Wang, discloses the information processing method according to claim 1. Goel further discloses:
generating the model by performing batch normalization after dropout based on the dropout rate (Goel, ¶[0052] and ¶[0078] teaches dropout per minibatch as performing batch normalization after dropout based on the dropout rate).
Regarding claim 12, Goel, in view of Wang, discloses the information processing method according to claim 1. Goel further discloses:
the model includes an embedding layer in which an input is embedded (Goel, ¶[0040] teaches hidden layers that transform inputs from input layers as an embedding layer in which an input is embedded).
Regarding claim 13, Goel discloses:
An information processing apparatus comprising (Goel, Figs. 2, 3, and 5, and ¶[0043]),
an acquisition unit that acquires information indicating a dropout rate in training of a model (as per Claim interpretation of an acquisition unit that acquires information indicating a dropout rate in training of a model cited above: Goel, Fig. 4, element 402 and ¶[0053] teaches selecting an initial annealing schedule with a dropout rate, which is input into the system, as acquiring information indicating a dropout rate in training of a model),
a determination unit that … a unit size of a hidden layer based on a function expressing a correlation between the dropout rate and the unit size of the hidden layer (as per Claim interpretation of a determination unit that determines a unit size cited above: Goel, Fig. 4, element 410, ¶[0019], and ¶[0090] teaches adjusting the percentage of nodes to be dropped in model hidden layers, i.e. adjusting the unit size of the remaining layer, and ¶[0065] and ¶[0067] teaches the unit size to be based on functions (4), (6), and (7), where a correlation between dropout rate and a “probability distribution over… the number of active units in a layer of unit” is given),
wherein using the function reduces time for … the unit size as compared to … the unit size without using the function (Goel, ¶[0008] teaches the time for adjusting the unit size for regular dropout to be suboptimal compared to using the function),
a training unit that trains the model having the hidden layer with the … unit size; and a generation unit that generates a trained model having the hidden layer with the … unit size (as per Claim interpretation of a generation unit that generates a trained model cited above: Goel, Fig. 4, and ¶[0054] teaches training and generating trained model(s) using the dropout training used to adjust unit size of layers),
wherein the trained model has improved generalization performance on unseen data as compared to a model trained without using the correlation (Goel, ¶[0058] teaches the trained model has improved generalization performance on unseen data during test time as compared to a model trained without using the correlation).
While Goel teaches adjusting a unit size of a hidden layer based on a function expressing a correlation between the dropout rate and the unit size of the hidden layer, wherein using the function reduces time for … the unit size as compared to … the unit size without using the function, training the model having the hidden layer with the determined unit size; and generating a trained model having the hidden layer with the … unit size, Goel does not explicitly recite determining the unit size from this adjustment.
Wang discloses:
determining a unit size of a hidden layer (as per Claim interpretation of a determination unit that determines a unit size cited above: Wang, page 5, section 3.2 “Modification II: Dropout Rate adapted to the number of Activated Neurons”, paragraph 2, lines 8-16 “the fraction of active neurons in layer j is [equation image: qj+, reproduced in Wang] … we normalize the dropout rate by qj+ and use an actual dropout rate of p’j = pj/qj+” explicitly teaches determining a unit size of a hidden layer by determining the active neurons in the layer),
training … model having the hidden layer with the determined unit size; and generating a trained model having the hidden layer with the determined unit size (Wang, Figure 2 and pages 4-5, section 3.2 teaches training and generating models having the determined unit size).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Goel to include determining the unit size, based on the teachings of Wang. One of ordinary skill in the art would have been motivated to make this modification in order “to better control the behavior of dropout for different layers and across various training stages”, as suggested by Wang (page 5, section 3.2, paragraph 2, lines 13-14).
Claim 14 is substantially similar to claim 1, and thus is rejected on the same basis as claim 1.
Regarding claim 15, Goel, in view of Wang, discloses the information processing method according to claim 1. Goel further discloses:
wherein training the model comprises: performing forward propagation through the hidden layer (Goel, ¶[0066] teaches a forward pass of training as performing forward propagation through the hidden layer),
applying dropout to nodes of the hidden layer based on the dropout rate (Goel, ¶[0019] and ¶[0066] teaches applying dropout to nodes of the hidden layer based on the dropout rate),
updating weights through backpropagation (Goel, ¶[0067] and ¶[0077] teaches updating weights through backpropagation).
While Goel does not disclose performing batch normalization after applying the dropout, Wang discloses:
performing batch normalization after applying the dropout (Wang, Section 3.3 “Modification III: Rescale Outputs to work with Batch Normalization”, paragraph 2, lines 1-2 “We consider one possible setting of combining dropout layers with BN layers” teaches performing batch normalization after applying the dropout).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Goel to include performing batch normalization after applying the dropout, based on the teachings of Wang. One of ordinary skill in the art would have been motivated to make this modification in order “to better control the behavior of dropout for different layers and across various training stages”, as suggested by Wang (page 5, section 3.2, paragraph 2, lines 13-14).
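For reference on the ordering mapped above for claim 15 (forward propagation, dropout on the hidden layer, batch normalization after the dropout, then backpropagation), a minimal PyTorch sketch of one training step follows; the layer sizes, learning rate, and data are assumed for illustration and do not reflect the specific method of Goel or Wang:

    import torch
    import torch.nn as nn

    # Illustrative ordering only: hidden layer -> dropout at the given rate ->
    # batch normalization after dropout -> output, with weights updated via
    # backpropagation. All dimensions and hyperparameters are assumptions.
    model = nn.Sequential(
        nn.Linear(32, 64),      # hidden layer (unit size 64 assumed)
        nn.ReLU(),
        nn.Dropout(p=0.5),      # dropout applied to hidden-layer nodes
        nn.BatchNorm1d(64),     # batch normalization after dropout
        nn.Linear(64, 10),
    )
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = nn.CrossEntropyLoss()

    x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
    loss = criterion(model(x), y)   # forward propagation through the hidden layer
    optimizer.zero_grad()
    loss.backward()                 # backpropagation
    optimizer.step()                # weight update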
Response to Arguments
Applicant's arguments filed 11/25/2025 have been fully considered with regards to the 35 U.S.C. 112(f) interpretation, but they are not persuasive.
The applicant asserts on page 1 of the remarks that amended claim 13 “provide[s] sufficient structure and acts that perform the claimed functions, which under 35 U.S.C. § 112(f) is sufficient to indicate that Applicant does not wish the claim limitations to be interpreted under section 112(f)”. The Examiner respectfully disagrees, as the claim limitations still meet the three-prong analysis for determining whether a claim limitation invokes 35 U.S.C. 112(f) (see MPEP § 2181). The amended claim recites elements expressed as a means or step for performing a specified function (e.g., “an acquisition unit that acquires information”), without the recital of structure, material, or acts in support thereof (e.g., no mention of what the acquisition unit is or how the information is acquired by this unit), and thus the limitations are construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. As a result, the 35 U.S.C. 112(f) interpretation is maintained.
Applicant's arguments filed 11/25/2025 have been fully considered with regards to the 35 U.S.C. 101 rejection, and they are persuasive. The rejections are withdrawn.
Applicant's arguments filed 11/25/2025 have been fully considered with regards to the 35 U.S.C. 102/103 rejection.
Applicant’s arguments with respect to claims 1-15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Duyck et al. (“Modified Dropout for Training Neural Network”) teaches dropout training and neural networks.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUMAIRA ZAHIN MAUNI whose telephone number is (703)756-5654. The examiner can normally be reached Monday - Friday, 9 am - 5 pm (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MATTHEW ELL, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.Z.M./Examiner, Art Unit 2141
/MATTHEW ELL/Supervisory Patent Examiner, Art Unit 2141