DETAILED ACTION
This action is in response to the claims filed on October 6, 2022. A summary of this action:
Claims 1-16 have been presented for examination.
The benefit claim to the prior-filed application is not accorded for the present claims (see the Priority section below).
Claims 1, 5, 9, and 13 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
Claims 2, 6-7, 10, and 14-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite.
Claims 12 and 14-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite.
Claims 12 and 14-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
Claims 1-5, 8-13, and 16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Anil, Rohan, et al. "Large scale distributed neural network training through online distillation." arXiv preprint arXiv:1804.03235 (2018).
Claims 6-7 and 14-15 are not rejected under § 102/103, as doing so would require a highly speculative interpretation in view of the § 112(b) issues, and the instant disclosure sheds no light on how those issues might be remedied (i.e., a definite interpretation is not possible without considerable speculation on the scope of the claims). See the § 112(b) rejection below for clarification.
This action is non-final.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Consideration of Prior Art Cited in Parent Application
As this is a continuation of prior US application, see MPEP § 609.02(II)(A)(2): “The examiner will consider information which has been considered by the Office in a parent application (other than an international application; see subsection I., above) when examining: (A) a continuation application filed under 37 CFR 1.53(b), (B) a divisional application filed under 37 CFR 1.53(b), or (C) a continuation-in-part application filed under 37 CFR 1.53(b). A listing of the information need not be resubmitted in the continuing application unless the applicant desires the information to be printed on the patent.”
Also see MPEP § 2001.06(b): “If the application under examination is identified as a continuation, divisional, or continuation-in-part of an earlier application, the examiner will consider the prior art properly cited in the earlier application. See MPEP § 609 and MPEP § 719.05, subsection (II)(A), example J. The examiner must indicate in the first Office action whether the prior art in a related earlier application has been reviewed. Accordingly, no separate citation of the same prior art need be made in the later application, unless applicant wants a listing of the prior art printed on the face of the patent.”
See MPEP § 707.05(a): “Additionally, copies of references cited in continuation applications if they had been previously cited in the parent application are not furnished”
The prior art cited in the parent application has been reviewed in the prosecution of the instant application.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 120 as follows:
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
See MPEP 2163(II)(A): "For example, in Hyatt v. Dudas, 492 F.3d 1365, 1371, 83 USPQ2d 1373, 1376-1377 (Fed. Cir. 2007), the examiner made a prima facie case by clearly and specifically explaining why applicant’s specification did not support the particular claimed combination of elements, even though applicant’s specification listed each and every element in the claimed combination. The court found the "examiner was explicit that while each element may be individually described in the specification, the deficiency was lack of adequate description of their combination" and, thus, "[t]he burden was then properly shifted to [inventor] to cite to the examiner where adequate written description could be found or to make an amendment to address the deficiency.""
Also, see MPEP 2163(I) for Lockwood v. Amer. Airlines, Inc., 107 F.3d 1565, 1572, 41 USPQ2d 1961, 1966 (Fed. Cir. 1997).
The disclosure of the prior-filed application, Application No. 15/346,691, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application.
See the subject matter of the independent claims as expressly recited; this subject matter has no basis in the disclosure of the ’691 application.
To clarify, see the recitations of the “actual model” and “reference model” in the present claims, which, when given their BRI in view of the disclosure (e.g., ¶¶ 43, 48), encompass, but are not limited to, models such as neural networks and other AI models.
By contrast, see ¶¶ 58-60 of the ’691 application, wherein the term “model” is used in an entirely different context (e.g., “The internet model of communication”), and see the remaining portions of the ’691 disclosure, which neither describe nor contemplate the subject matter expressly recited in the present claims (e.g., compare the drawings of the two applications). Nor does the ’691 application even mention the use of neural networks or machine learning.
As such, the ’691 application as filed provides insufficient written description support for the present claims; therefore, the present claims have an effective filing date of the date on which they, and their corresponding disclosure, were filed.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 5, 9, and 13 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Some of the dependent claims inherit the deficiencies of the claims they depend upon, while others cure the deficiency due to the nature of the following rejection, as discussed below.
See MPEP § 2163.03(II): “Regents of the Univ. of Minnesota v. Gilead Scis., Inc., 61 F.4th 1350, 2023 USPQ2d 269 (Fed. Cir. 2023). The court noted that the Board evaluated whether the earlier- filed applications provided an "ipsis verbis" disclosure of the claimed subgenus. In reviewing this evaluation, the court agreed with the Board that the earlier-filed applications recited a compendium of common organic chemical functional groups, yielding a laundry list disclosure of different moieties for every possible side chain or functional group. Id. at 1357. Thus, it was unclear how many compounds actually fell within the described genera and subgenera. Id. The court also noted that the Board then evaluated whether the prior applications "provided sufficient blaze marks to provide written description support for the ‘830 patent claims." Id. at 1357.” And MPEP § 2163.03(IV): “Fields v. Conover, 443 F.2d 1386, 170 USPQ 276 (CCPA 1971) (A broad generic disclosure to a class of compounds was not a sufficient written description of a specific compound within the class.).”, i.e. MPEP § 2163.03(V): “In re Wertheim, 541 F.2d 257, 262, 191 USPQ 90, 96 (CCPA 1976), a question as to whether a specification provides an adequate written description may arise in the context of an original claim. An original claim may lack written description support when (1) the claim defines the invention in functional language specifying a desired result but the disclosure fails to sufficiently identify how the function is performed or the result is achieved or (2) a broad genus claim is presented but the disclosure only describes a narrow species with no evidence that the genus is contemplated. See Ariad Pharms., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1349-50 (Fed. Cir. 2010) (en banc). The written description requirement is not necessarily met when the claim language appears in ipsis verbis in the specification. "Even if a claim is supported by the specification, the language of the specification, to the extent possible, must describe the claimed invention so that one skilled in the art can recognize what is claimed. The appearance of mere indistinct words in a specification or a claim, even an original claim, does not necessarily satisfy that requirement." Enzo Biochem, Inc. v. Gen-Probe, Inc., 323 F.3d 956, 968, 63 USPQ2d 1609, 1616 (Fed. Cir. 2002).”
At issue for the present independent claims, and the dependent claims identified above, is the breadth of the term “model” as disclosed and claimed.
Far from describing merely generic machine learning models, the specification provides numerous alternative embodiments (i.e., the genus of the term “model” as recited in the claims is much broader than just AI models).
E.g., see ¶¶ 66-68, which convey control-system models (USPC 700, as compared to USPC 706).
¶ 70 conveys a generic model of some sort in healthcare. ¶ 71 uses the term “emulation” (as claimed) in connection with MySQL databases. See ¶ 74 as well.
However, the claim requires: “modifying, by the electronic device (100), the current state of the at least one actual model to emulate the target state to be achieved by the at least one actual model based on the at least one state of the reference model”, and the only methods sufficiently described to achieve this are those of ¶ 12: “the target state of the at least one actual model is emulated to the current state of the at least one actual model by copying the plurality of set of parameters of a neural network of the reference model.” and ¶ 46: “The target state of the at least one actual model (182a) is emulated to the current state of the at least one actual model (182a) by copying the plurality of set of parameters of a neural network of the reference model (184)”. For other similar methods for neural networks, see ¶¶ 97-99.
Thus, the claims identified above lack sufficient written description support, because the only disclosed embodiments with sufficient written description support are those in which the term “model” is limited to the genus of neural networks.
The dependent claims not rejected under § 112(a) are those that, under the BRI consistent with the disclosure, either explicitly limit the claim to neural networks or implicitly do so by requiring features disclosed only in connection with neural networks (e.g., “weight”, see ¶ 50: “The AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values”; e.g., the “reinforcement learning technique” of claim 6 and its parallel would have been readily recognized by POSITA as a neural network technique; for “training”, see ¶ 3 and ¶ 48, and then ¶¶ 50-51 and elsewhere, where training is described only in the context of neural networks).
The Examiner suggests amending the claims to narrow them to only those embodiments with sufficient written description support (i.e., to have the claims expressly recite that the models are neural networks), and the Examiner will fully examine the claims under this interpretation for compact prosecution.
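For clarity of record regarding the interpretation applied above, the only adequately described emulation mechanism (¶¶ 12, 46) amounts to copying the parameters of the reference neural network into the actual neural network. A minimal illustrative sketch of such an operation follows, assuming a PyTorch-style framework; the model architecture, tensor shapes, and names below are hypothetical and are not drawn from the instant disclosure or the claims.

import torch
import torch.nn as nn

def make_model():
    # Hypothetical architecture, chosen only for illustration.
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# Independently initialized "actual" and "reference" models of the same architecture.
actual_model, reference_model = make_model(), make_model()

# Emulating the target state in the manner of ¶ 46: copy the reference
# model's set of parameters (weights and biases) into the actual model.
actual_model.load_state_dict(reference_model.state_dict())

# After the copy, the actual model reproduces the reference model's outputs.
x = torch.randn(1, 8)
assert torch.allclose(actual_model(x), reference_model(x))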
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 6-7, 10, and 14-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The dependent claims inherit the deficiencies of the claims they depend upon.
MPEP § 2173.05(b)(IV): “A claim term that requires the exercise of subjective judgment without restriction may render the claim indefinite. In re Musgrave, 431 F.2d 882, 893, 167 USPQ 280, 289 (CCPA 1970). Claim scope cannot depend solely on the unrestrained, subjective opinion of a particular individual purported to be practicing the invention. Datamize LLC v. Plumtree Software, Inc., 417 F.3d 1342, 1350, 75 USPQ2d 1801, 1807 (Fed. Cir. 2005));”
The independent claims recite “imperfect emulation”; however, this phrase is not given patentable weight in the independent claims because it is considered an intended use and is not otherwise required by those claims, as further confirmed by its requirement in the active recitations of claims 2 and 10 (see the requirements of § 112(d), as dependent claims must further limit the claims they depend upon).
The phrase “imperfect emulation” is at issue, wherein the term “imperfect” is a subjective term that renders the claim indefinite because there is no standard provided in the instant disclosure (¶¶ 2, 93, 97, 99-100, 103) for POSITA to ascertain the scope of the present claims without relying on their own unrestrained, subjective opinion when practicing the invention.
Thus, claims 2 and 10 are indefinite.
Representative Claim 6 recites (also see claim 14, and the dependents of both of these claims), in part: “with an actual model of a plurality of actual models based on reward, wherein the actual model is an alternate model newly introduced and the plurality of models are existing models … emulating, by the electronic device (100), the at least one set of parameters of the plurality of set of parameters associated with remaining actual models of the plurality of actual models”. This recitation is wholly indefinite.
To clarify, this simultaneously requires the actual model (of claim 1; see ¶ 44) to be an “alternate model newly introduced” while further requiring it to also be part of “a plurality of actual models” wherein the entire plurality is required to be “existing models”, and then the claim further recites some element of the “remaining actual models of the plurality” without specifying what that element is (i.e., is it all of the existing models, or all but the newly introduced one, etc.).
In other words, this claim is wholly indefinite because it is not clear what the required relationships between all of these elements are, i.e., is the actual model an existing model, or is it newly introduced, etc. Additionally, this claim lacks clear antecedent basis back to the recitations of the actual model in the independent claim.
Because the instant disclosure provides no clarifying guidance that would inform a non-speculative interpretation of this claim (see ¶ 44; see also MPEP § 2143.03(I): “See also In re Wilson, 424 F.2d 1382, 165 USPQ 494 (CCPA 1970) (The Board erred because it ignored claim language that it considered to be indefinite, and reached a conclusion that the claim would have been obvious based only on the rest of the claim.). However, an examiner should not simply speculate about the meaning of the claim language and then enter an obviousness rejection in view of that speculative interpretation. In re Steele, 305 F.2d 859, 134 USPQ 292 (CCPA 1962) (The "considerable speculation" by the examiner and the Board as to the scope of the claims did not provide a proper basis for an obviousness rejection.). A claim should not be rejected over prior art just because it is indefinite. Ionescu, 222 USPQ at 540 (citing Steele).”), the Examiner suggests canceling this claim, as the Examiner sees no clear path to amend it with sufficient § 112(a) support from ¶ 44 that would remedy the § 112(b) issues.
Furthermore, the Examiner does not speculate as to what the meaning of the claim is for § 102/103 purposes, i.e. no rejection is set forth below because any rejection would be premised on considerable speculation which is not permitted (MPEP § 2143.03(I)).
Claim Interpretation – 112(f)
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
Claim 12: “wherein the emulation management controller (180) is configured to compare the current state of the at least one actual model with the at least one state of the reference model comprises:…” and claim 14: “The electronic device (100) as claimed in claim 9, wherein the emulation management controller (180) is further configured to:”
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 12 and 14-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The dependent claims inherit the deficiencies of the claims they depend upon.
The claim limitations noted above invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function.
The emulation management controller (180) lacks sufficient corresponding structure clearly linked to it in the written description.
See fig. 1, then see ¶ 47: “The emulation management controller (180) is implemented by processing circuitry such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits, or [noting the or in particular, i.e. it’s not an and/or] the like, and may optionally be driven by firmware. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like”
But then see ¶ 48: “At least one of the plurality of modules/ components of the emulation management controller (180) may be implemented through an AI model. A function associated with the AI model may be performed through memory (120) and the processor (140).”
At issue is that ¶ 47 permits an embodiment in which the structure is merely “passive electronic components” (e.g., resistors, capacitors, inductors, and the like), which is insufficient structure to perform the claimed functionality. In other words, the full scope of the claim is indefinite because sufficient structure is not disclosed for all embodiments of what is claimed.
¶ 48 leads to further ambiguity about what structure is required, because ¶ 48 indicates that the software of the controller is implemented through the processor and memory, which are expressly recited in present claim 12 as being neither the emulation management controller nor part of it.
The Examiner suggests amending the claims to remove this controller and to instead recite one or more processors coupled to the memory and configured to perform the claimed steps (see ¶ 36). A separate suggestion would be to expressly limit the controller, in the claim itself, to comprising the processor (¶ 47), as this would constitute sufficient structure in the claim itself, but only if the claim is expressly limited to this embodiment.
The Examiner also suggests making these amendments in claim 12 so that all of its dependent claims inherit the remedy (to clarify, only some of the dependent claims add a function to the nonce term of claim 12; hence, the rejection is limited to those dependent claims and the dependents thereof).
Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 12 and 14-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The dependent claims inherit the deficiencies of the claims they depend upon.
See the above § 112(f) invocation and corresponding § 112(b) rejection including the citations to the disclosure in the § 112(b) rejection. See MPEP §2181(IV): “When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 112(b) for failure to disclose sufficient corresponding structure (e.g., the computer and the algorithm) in the specification that performs the entire claimed function, it will also lack written description under section 112(a).” and MPEP 2181(II)(B): “When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 112(b) for failure to disclose sufficient corresponding structure (e.g., the computer and the algorithm) in the specification that performs the entire claimed function, it will also lack written description under 35 U.S.C. 112(a). See MPEP § 2163.03, subsection VI.”
As a further point of clarity, and as discussed above with respect to ¶ 47, the Examiner notes that this rejection is premised on the “passive electronic components” of ¶ 47 lacking sufficient structure in that particular embodiment of the present claims; i.e., should either of the Examiner’s suggestions above be adopted, doing so would address these § 112 rejections corresponding to the § 112(f) invocation.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
These claims are not directed towards an abstract idea, when construed in the manner discussed above in the § 112(a) rejection.
See https://www.uspto.gov/web/offices/pac/mpep/index.html for the Dec. 5th advance notice of change in view of Ex Parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision).
The present claims are analogous to those in Ex parte Desjardins. In particular, the last step amounts to an integration of the abstract idea into a practical application (the abstract idea being the mental process recited in the first four steps, but for the mere instructions to use a computer and commonplace software as a tool to perform those steps; the “model[s]”, when read in view of the disclosure, are AI/ML models, preferably neural networks, akin to Ex parte Desjardins), because it is akin to the step identified in Ex parte Desjardins (as discussed in the MPEP update) when given a fair reading in view of the disclosure under the BRI. See ¶¶ 104-105, which clarify the improvement to technology at the Prong Two consideration in view of Ex parte Desjardins; this claim is not directed to an abstract idea, but rather to the software technology of machine learning.
To clarify, compare instant claim 1, which recites “modifying, by the electronic device (100), the current state of the at least one actual model to emulate the target state to be achieved by the at least one actual model based on the at least one state of the reference model.” (the Examiner noting that “model” is construed as a machine learning model, preferably a neural network, in view of the disclosure), with the key limitation in Ex parte Desjardins: “training the machine learning model on the second training data to adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task”.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 102(a)(1), which forms the basis for the rejections under this section made in this Office action:
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 8-13, and 16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Anil, Rohan, et al. "Large scale distributed neural network training through online distillation." arXiv preprint arXiv:1804.03235 (2018).
Claims 6-7 and 14-15 are not rejected under § 102/103, as doing so would require a highly speculative interpretation in view of the § 112(b) issues, and the instant disclosure sheds no light on how those issues might be remedied (i.e., a definite interpretation is not possible without considerable speculation on the scope of the claims). See the § 112(b) rejection above for clarification.
Regarding Claim 1
Anil teaches:
A method for performing imperfect emulation of a state of a model in an electronic device (100), wherein the method comprising: (Anil, abstract: “…Two neural networks trained on disjoint subsets of the data can share knowledge by encouraging each model to agree with the predictions the other model would have made. These predictions can come from a stale version of the other model so they can be safely computed using weights that only rarely get transmitted. Our second claim is that online distillation is a cost-effective way to make the exact predictions of a model dramatically more reproducible…”)
determining, by an electronic device (100), a current state of at least one actual model of the electronic device (100); comparing, by the electronic device (100), the current state of the at least one actual model with at least one state of a reference model; determining, by the electronic device (100), a target state to be achieved by the at least one actual model based on the at least one state of the reference model; determining, by the electronic device (100), a deviation of the current state of the at least one actual model with respect to the target state to be achieved by the actual model; (Anil, abstract: “Two neural networks trained on disjoint subsets of the data can share knowledge by encouraging each model to agree with the predictions the other model would have made. These predictions can come from a stale version of the other model so they can be safely computed using weights that only rarely get transmitted…” – and see § 1 including page 2, ¶ 3: “In this work, we describe a simpler online variant of distillation we call co-distillation. Co-distillation trains n copies of a model in parallel by adding a term to the loss function of the ith model to match the average prediction of the other models”
to further clarify, see § 2 and Algorithm 1, including in § 2: “Algorithm 1 presents the codistillation algorithm. The distillation loss term can be the squared error between the logits of the models, the KL divergence between the predictive distributions, or some other measure of agreement between the model predictions. In this work we use the cross entropy error treating the teacher predictive distribution as soft targets [an example of a target state to be achieved by the actual model]. In the beginning of training, the distillation term in the loss is not very useful or may even be counterproductive, so to maintain model diversity longer and to avoid a complicated loss function schedule we only enable the distillation term in the loss function once training has gotten off the ground.”; i.e., the actual model [the student model of Anil] and the reference model [the teacher model of Anil] are compared as to their current states (specifically, their outputs/predictions), wherein both are initially trained alone, and the “distillation term” is then used after that initial training.)
and modifying, by the electronic device (100), the current state of the at least one actual model to emulate the target state to be achieved by the at least one actual model based on the at least one state of the reference model. (Anil, as cited above, and see § 2.1 of Anil: “As seen in Algorithm 1, to update the parameters of one network using codistillation one only needs the predictions of the other networks, which can be computed locally from copies of the other networks weights.” To clarify, in § 2.1: “1. Each worker trains an independent version of the model on a locally available subset of the training data. 2. Occasionally, workers checkpoint their parameters. 3. Once this happens, other workers can load the freshest available checkpoints into memory and perform codistillation… When using codistillation to distribute training each worker only needs to very rarely read parameter checkpoints from the other models” and footnote 1: “Although our implementation of codistillation exchanges model checkpoints, there are some cases where an alternative communication approach would be desirable. One obvious alternative would be to use a prediction server to communicate predictions instead of weights. Workers could read teacher predictions along with a minibatch of data and send their predictions back to the server after each update or separate, evaluation-only workers could read checkpoints and continuously update the predictions for each piece of training data. This strategy might be most appropriate in the presence of specialized forward-pass hardware. Another alternative to communicating checkpoints is to train all copies of the model in the same process which would make the most sense when the size of the model relative to the characteristics of the hardware make it almost free to to run both models.” In other words, Anil is copying the weights via the checkpoints (see item 3 in § 2.1; to clarify: “sufficiently out-of-sync copies of the weights will have completely arbitrary differences that change the meaning of individual directions in feature space that are not distinguishable by measuring the loss on the training set;”), wherein Anil further suggests (i.e., renders obvious, but does not anticipate) the alternatives listed in footnote 1. Should it be found that these alternatives teach what is claimed, the Examiner notes the statutory basis would change to § 103, wherein POSITA would have found it obvious to try the alternatives listed in footnote 1, and would have been motivated to do so because “This [the first alternative] strategy might be most appropriate in the presence of specialized forward-pass hardware.” for the first alternative, and because “Another alternative to communicating checkpoints is to train all copies of the model in the same process which would make the most sense when the size of the model relative to the characteristics of the hardware make it almost free to to run both models” for the second alternative.)
To clarify the BRI of this limitation, see ¶ 46: “The target state of the at least one actual model (182a) is emulated to the current state of the at least one actual model (182a) by copying the plurality of set of parameters of a neural network of the reference model (184).” See, e.g., Anil as cited above for the § 102 rationale.
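To further clarify the Examiner’s reading of Anil’s Algorithm 1 (§ 2) and the checkpoint-based parameter exchange (§ 2.1) relied upon above, a minimal illustrative sketch in a PyTorch-style framework follows. The sketch paraphrases the cited portions of Anil; the architecture, data, and schedule constants are hypothetical and are provided for illustration only, not as a characterization of the claims or of Anil’s actual implementation.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    # Hypothetical small classifier; the architecture is illustrative only.
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))

model_a, model_b = make_model(), make_model()  # two workers' models
opt_a = torch.optim.SGD(model_a.parameters(), lr=0.1)

# Stale copy of the other worker's weights, refreshed only occasionally
# (cf. Anil § 2.1: workers "load the freshest available checkpoints").
stale_b = copy.deepcopy(model_b)

BURN_IN, CHECKPOINT_EVERY = 100, 50  # hypothetical schedule constants

def codistillation_step_a(step, x, y):
    # One training step for worker A (worker B is symmetric).
    if step % CHECKPOINT_EVERY == 0:
        # Occasionally copy worker B's checkpointed parameters (the "stale
        # version of the other model" of Anil's abstract).
        stale_b.load_state_dict(model_b.state_dict())
    logits = model_a(x)
    loss = F.cross_entropy(logits, y)  # ordinary task loss on local data
    if step >= BURN_IN:
        # Distillation term enabled only once training has gotten off the
        # ground (Anil § 2): cross entropy against the stale teacher's
        # predictive distribution treated as soft targets.
        with torch.no_grad():
            soft_targets = F.softmax(stale_b(x), dim=-1)
        loss = loss + F.cross_entropy(logits, soft_targets)
    opt_a.zero_grad()
    loss.backward()
    opt_a.step()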
Regarding Claim 2
Anil teaches:
The method as claimed in claim 1, wherein the at least one actual model and the reference model comprises neural networks for performing imperfect emulations and wherein the at least one actual model and the reference model converge to perform the imperfect emulations. (Anil, as cited above for claim 1, noting in § 2: “3. using the distillation loss during training before any model has fully converged.”, and see Algorithm 1 for clarification, noting the “While not converged do” loop.)
Regarding Claim 3
Anil teaches:
The method as claimed in claim 1, wherein the reference model is trained using one of a sensory data set and a training data set. (Anil, as cited above for claim 1, wherein both models are trained using a training data set in the above citations, see § 3.1 for a description of example training datasets it was applied to)
Regarding Claim 4
Anil teaches:
The method as claimed in claim 1, wherein comparing, by the electronic device (100), the current state of the at least one actual model with the at least one state of the reference model comprises:
providing, by the electronic device (100), an input to the reference model, wherein the input to the reference model is same an input to the at least one actual model in the current state; (Anil, as cited above for claim 1, incl. § 2: “The idea of distillation is to first train a teacher model, which traditionally is an ensemble or another high-capacity model, and then, once this teacher model is trained, train a student model with an additional term in the loss function which encourages its predictions to be similar to the predictions of the teacher model… There are many variants of distillation, for different types of teacher model, different types of loss function, and different choices for what dataset the student model trains on. For example, the student model could be trained on a large unlabeled dataset, on a held-out data set, or even on the original training set… In this paper, we use codistillation to refer to distillation performed:
1. using the same architecture for all the models; 2. using the same dataset to train all the models; and 3. using the distillation loss during training before any model has fully converged”)
determining, by the electronic device (100), a weight of an output of the reference model for the provided input and a weight of an output of the at least one actual model in the current state; and comparing, by the electronic device (100), the current state of the at least one actual model with the at least one state of the reference model based on the weight of the output of the reference model and the weight of the output of the at least one actual model in the current state. (Anil, as cited above for claim 1, including § 2, including item 3 in § 2, then § 2.1: “As seen in Algorithm 1, to update the parameters of one network using codistillation one only needs the predictions of the other networks, which can be computed locally from copies of the other networks weights”; see the clarifying citations above, i.e., Anil compares the predictions, based on the weights of the models having been copied.)
Regarding Claim 5
Anil teaches:
The method as claimed in claim 1, wherein modifying the current state of the at least one actual model to emulate the target state to be achieved by the at least one actual model minimizes an error between the current state of the at least one actual model and the target state to be achieved by the at least one actual model. (Anil, as cited above for claim 1, wherein this is “using the distillation loss during training before any model has fully converged.”, i.e.: “The distillation loss term can be the squared error between the logits of the models, the KL divergence between the predictive distributions, or some other measure of agreement between the model predictions. In this work we use the cross entropy error treating the teacher predictive distribution as soft targets”, wherein the Examiner notes that by using terms such as “error” and “loss”, POSITA would have readily inferred this is an error/loss to be minimized.)
Should the cross-entropy error be found not to teach this limitation under § 102, then the Examiner notes it would have been obvious to try the other listed loss terms suggested expressly by Anil under § 103.
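To further clarify the alternative loss terms expressly listed in Anil § 2 and relied upon above (squared error between logits, KL divergence between the predictive distributions, and cross entropy treating the teacher’s predictive distribution as soft targets), a minimal illustrative sketch follows, again assuming a PyTorch-style framework and hypothetical tensors; it is provided only to clarify the Examiner’s reading of the reference.

import torch
import torch.nn.functional as F

# Hypothetical logits for the actual (student) and reference (teacher)
# models on the same batch of inputs.
student_logits = torch.randn(4, 10)
teacher_logits = torch.randn(4, 10)

# 1. Squared error between the logits of the models.
mse_term = F.mse_loss(student_logits, teacher_logits)

# 2. KL divergence between the predictive distributions.
kl_term = F.kl_div(F.log_softmax(student_logits, dim=-1),
                   F.softmax(teacher_logits, dim=-1),
                   reduction="batchmean")

# 3. Cross entropy treating the teacher's predictive distribution as soft
#    targets (the variant Anil reports using).
ce_term = F.cross_entropy(student_logits, F.softmax(teacher_logits, dim=-1))

# Each term is a measure of disagreement that gradient descent minimizes.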
Regarding Claim 8
Anil teaches:
The method as claimed in claim 1, wherein the target state of the at least one actual model is emulated to the current state of the at least one actual model by copying the plurality of set of parameters of a neural network of the reference model. (Anil, as cited above, for the checkpoints being the copies of the weights of the reference/teacher model as detailed in the citations to § 2.1)
Regarding Claim 9
Rejected under a rationale similar to that applied to claim 1 above; further, Anil teaches:
An electronic device (100) for performing imperfect emulation of a state of a model, wherein the electronic device (100) comprises: a memory (120); a processor (140) coupled to the memory (120); a communicator (160) coupled to the memory (120) and the processor (140); an emulation management controller (180) coupled to the memory (120), the processor (140) and the communicator (160), and configured to: (Anil, as was cited above, teaches a computer executing software – and last paragraph of § 1: “In general, we believe the quality gains of codistillation over well-tuned offline distillation will be minor in practice and the more interesting research direction is exploring codistillation as a distributed training algorithm that uses an additional form of communication that is far more delay tolerant.” And in § 2.1: “In order to scale beyond the limits of distributed stochastic gradient descent we will need an algorithm that is far more communication efficient.” And in § 3: “In order to study the scalability of distributed training using codistillation, we need a task that is representative of important large-scale neural network training problems” – and § 3.2 ¶ 2: “We tried asynchronous SGD with 32 and 128 workers, sharding the weights across increasing numbers of parameter servers as necessary to ensure that training speed was bottlenecked by GPU computation time” – also see § 3.3 ¶ 1, i.e. there was a communicator for communications between the servers and/or GPUs (¶ 36 of the disclosure to clarify on the BRI of processor including GPU); and in ¶ 47 GPUs are an example of at least integrated circuits, active electronic components, and hardwired circuits)
Regarding Claims 10-13 and 16
These claims are rejected under a rationale similar to that applied above to their parallel claims, i.e., claim 1 and the dependents thereof discussed above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Bodapati et al., US 11,531,846 – see the abstract, see fig. 1 and its accompanying description, then see fig. 5 and its accompanying description starting in col. 12
Gebre et al., US 2019/0156205, abstract, and see fig. 3-4 and their accompanying descriptions
Hall et al., US 2022/0344049, abstract, then see fig. 1B and its accompanying description, then see fig. 2 and its accompanying description, including see ¶¶ 10-12 as well and ¶ 124, and see table 3 for the algorithm.
Roth et al., US 2022/0366220, abstract and fig. 2-6, along with accompanying description, including see ¶ 92.
Chu et al., US 2021/0166117, abstract then fig. 5, and accompanying description.
Chen, Yiqiang, et al. "Fedhealth: A federated transfer learning framework for wearable healthcare." IEEE Intelligent Systems 35.4 (2020): 83-93. Abstract, and pages 85-88
Fawaz, Hassan Ismail, et al. "Transfer learning for time series classification." 2018 IEEE international conference on big data (Big Data). IEEE, 2018. Abstract, and fig. 2 along with accompanying description
Luo, Jun, and Shandong Wu. "Adapt to adaptation: Learning personalization for cross-silo federated learning." IJCAI: proceedings of the conference. Vol. 2022. July 2022. Abstract and § 3.2
Rokni, Seyed Ali, et al. "TransNet: Minimally supervised deep transfer learning for dynamic adaptation of wearable systems." ACM Transactions on Design Automation of Electronic Systems (TODAES) 26.1 (2020): 1-31. Abstract and fig. 1, along with accompanying description.
Shi, Yuan, and Xianze Xu. "Deep federated adaptation: An adaptative residential load forecasting approach with federated learning." Sensors 22.9 (2022): 3264. Abstract and fig. 1-3, along with accompanying description.
Wu, Qiong, Kaiwen He, and Xu Chen. "Personalized federated learning for intelligent IoT applications: A cloud-edge based framework." IEEE Open Journal of the Computer Society 1 (2020): 35-44. Abstract and §§ III-IV including subsections
Zhang, Ran, et al. "Transfer learning with neural networks for bearing fault diagnosis in changing working conditions." IEEE Access 5 (2017): 14347-14357. Abstract and § II including subsections
Zhang, Chen, et al. "A survey on federated learning." Knowledge-Based Systems 216 (2021): 106775. Abstract and fig. 1, including accompanying description.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID A. HOPKINS whose telephone number is (571) 272-0537. The examiner can normally be reached Monday to Friday, 10 AM to 7 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/David A Hopkins/Primary Examiner, Art Unit 2188