DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 09/21/2023, 07/23/2024, and 09/25/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “apparatus configured to: ... perform ... receiving ... performing ... comparing ... finding ... generating ... outputting” in claim 1.
The claim term “apparatus” in the above claim(s) is a generic placeholder because it is not preceded by any structural modifier.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may:
(1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or
(2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
7. Claims 1-6, 13-16, and 18-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention. Specifically:
7.1. Claim 1 recites the element “apparatus” and is indefinite because the specification, claims, and/or drawings fail to recite sufficiently definite structure, material, or acts to perform the functions associated with “configured to” ... “perform ... receiving ... performing ... comparing ... finding ... generating ... outputting ... result data,” respectively. Furthermore, the specification simply recites, on pages 3-4, that “the apparatus including: a first training data input unit, ... a first measurement data input unit, ... first data learning unit, ... a first verification unit, ... a model transferring unit, ... a second training data input unit, etc.” and, on page 27, figure 6, a “data collection unit.” However, no structure for this specific limitation/element “apparatus,” or for the respective “units,” is disclosed in the applicant’s specification or figures. It is not clear from the claim 1 limitation whether the “apparatus” is a system. Moreover, the specification discloses neither a structural element that could perform the above functions nor any language connecting said apparatus/units with a structure capable of performing said functions. Additionally, Fig. 6 simply depicts boxes with labels of said units, which have no structure to perform said functions. Therefore, the claim limitation “apparatus” is indefinite.
The examiner has interpreted the claim element “apparatus” as a battery management system (BMS) performing the measurements.
7.2. Furthermore, claims 2-6, 13-16, and 18-19 are also rejected because they depend on and further limit claim 1.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 1-6, 13-16, and 18-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention. The aforesaid claims introduce new subject matter that is not described in the specification. Specifically:
Regarding claim 1, Applicant's recitation does not comply with the written description requirement for the limitation “apparatus configured to”. Particularly, claim 1 recites “apparatus configured to” without a structure to perform the said functions “perform ... receiving ... performing ... comparing ... finding ... generating ... outputting ... result data,” respectively.
Applicant's specification describes the “apparatus,” on pages 3-4, as “the apparatus including: a first training data input unit, ... a first measurement data input unit, ... first data learning unit, ... a first verification unit, ... a model transferring unit, ... a second training data input unit, etc.” and, on page 27, figure 6, a “data collection unit.” However, no structure for these specific limitations/element “apparatus,” or for the respective “units,” is disclosed in the applicant’s specification or figures. Therefore, claim 1 is rejected for lack of written description in the specification.
Dependent claims 2-6 and 18-19 fail to cure this deficiency of independent claim 1 (set forth directly above) and are rejected accordingly.
Claim Rejections - 35 USC § 101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6, 13-16, and 18-19 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
The following analysis is based on claim 1.
Regarding claim 1,
An apparatus for predicting a low-voltage failure of a secondary battery, the apparatus configured to perform:
receiving first training data of a first group of a plurality of secondary batteries measured during a first specific time period of charging, discharging, and resting processes, wherein the first group of the plurality of the secondary batteries are selected as a first training targets;
receiving first measurement data of a second group of the plurality of the secondary batteries selected during a second specific time period of charging, discharging, and resting processes, wherein the second group of the plurality of the secondary batteries are selected as a first prediction targets;
performing machine learning on the first training data of the first group of the plurality of the secondary batteries and selecting a main factor among the first training data;
comparing a second low-voltage determination prediction result of the second group of the plurality of the secondary batteries in which the first measurement data is applied to a first low-voltage prediction model of the first group of the plurality of the secondary batteries generated using the first training data with first low-voltage determination prediction result of the second group of the plurality of the secondary batteries based on the first measurement data and
finding an optimal value of a weighting factor k that maximizes a performance of the first low-voltage prediction model of the first group of the plurality of the secondary batteries to verify and optimize the low-voltage prediction model of the first group of the plurality of the secondary batteries;
receiving the optimized low voltage prediction model of the first group of the plurality of the secondary batteries and the optimal value of the weighting factor k.
receiving second training data of a third group of the plurality of the secondary batteries measured during a third specific time period of charging, discharging, and resting processes, wherein the third group of the plurality of the secondary batteries are selected as a second training targets;
receiving second measurement data of a fourth group of the plurality of the secondary batteries selected during a fourth specific period of charging, discharging, and resting processes, wherein the fourth group of the plurality of the secondary batteries are selected as a second prediction targets;
generating a second low-voltage prediction model of the third group of the plurality of the secondary batteries by performing machine learning on the optimized low-voltage prediction model of the first group of the plurality of the secondary batteries and the second training data of the third group of the plurality of the secondary batteries; and
outputting a third low-voltage determination prediction result of the third group of the plurality of the secondary batteries in which the second measurement data is applied to the second low-voltage prediction model of the third group of the plurality of the secondary batteries,
wherein process conditions of the first and second group of the plurality of the secondary batteries selected as the first training target and the first prediction target are different from process conditions of the third and fourth group of the plurality of the secondary batteries selected as the second training target and the second prediction target.
The claim limitations underlined above are an abstract idea (a process). The remaining limitations are “additional elements.”
Step 1 (Statutory Category): Yes. We determine whether the claims are directed to a statutory category by considering whether the claimed subject matter falls within the four statutory categories of patentable subject matter identified by 35 U.S.C. 101: process, machine, manufacture, or composition of matter. The above claim is considered to fall within a statutory category (process). Therefore, it is directed to a statutory category, i.e., a process.
Step 2 A, Prong-1 (the claim is evaluated to determine whether it is directed to a judicial-exception/abstract-idea): Yes.
In the above claim, the underlined portion constitutes an abstract idea because, under a broadest reasonable interpretation, it recites limitations that fall into an abstract idea exception. Specifically, under the 2019 Revised Patent Subject Matter Eligibility Guidance, it falls into the grouping of mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations).
For example, steps of “predicting a low-voltage failure of a secondary battery”, “performing machine learning on the first training data of the first group of the plurality of the secondary batteries and selecting a main factor among the first training data”;
“comparing a second low-voltage determination prediction result of the second group of the plurality of the secondary batteries in which the first measurement data is applied to a first low-voltage prediction model of the first group of the plurality of the secondary batteries generated using the first training data with first low-voltage determination prediction result of the second group of the plurality of the secondary batteries based on the first measurement data” and
“ finding an optimal value of a weighting factor k that maximizes a performance of the first low-voltage prediction model of the first group of the plurality of the secondary batteries to verify and optimize the low-voltage prediction model of the first group of the plurality of the secondary batteries”;
“generating a second low-voltage prediction model of the third group of the plurality of the secondary batteries by performing machine learning on the optimized low-voltage prediction model of the first group of the plurality of the secondary batteries and the second training data of the third group of the plurality of the secondary batteries”; and
represent a mathematical concept. The above limitations represent data processing using a machine learning algorithm (Specification, pages 13-14): generating/developing a model and validating the model with measurement data (Specification, pages 27-29: “The computing unit receives the stored data values for machine learning, and evaluates and selects main factors for predicting low-voltage failures, and then generates a model.”). These steps represent a process that, under its broadest reasonable interpretation, encompasses a computing unit/computer implementing the abstract idea and making an evaluation/judgment based on the output data.
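Purely as a non-limiting illustration of the kind of generic data processing the above limitations encompass (this is not applicant's disclosed implementation; all variable names, data shapes, and the toy "learning" step are hypothetical), the claimed sequence of training a first model, searching for a weighting factor k that maximizes its performance on measurement data, and transferring the optimized model to a second training set can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the claimed data sets (shapes and labels are
# illustrative assumptions only, not taken from the specification).
X1 = rng.normal(size=(200, 3))           # first training data (group 1)
y1 = (X1[:, 0] > 0.2).astype(int)        # low-voltage failure labels
X2 = rng.normal(size=(100, 3))           # first measurement data (group 2)
y2 = (X2[:, 0] > 0.2).astype(int)

def fit_weights(X, y):
    """Toy 'machine learning' step: per-feature correlation with the label."""
    return np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

def predict(w, X, k):
    """Here k acts as a decision threshold on the model score."""
    return (X @ w > k).astype(int)

# "Performing machine learning on the first training data."
w1 = fit_weights(X1, y1)

# "Finding an optimal value of a weighting factor k": a grid search that
# maximizes prediction accuracy on the group-2 measurement data.
candidates = np.linspace(-1.0, 1.0, 41)
best_k = max(candidates, key=lambda k: (predict(w1, X2, k) == y2).mean())

# Transfer step: the second model starts from the optimized first model and
# is refined with group-3 training data (reused synthetic data for brevity).
X3, y3 = X2, y2
w2 = 0.5 * w1 + 0.5 * fit_weights(X3, y3)
result = predict(w2, X3, best_k)         # prediction result to be output
print("best k:", round(float(best_k), 2),
      "accuracy:", float((result == y3).mean()))
```

As the sketch shows, each step reduces to generic mathematical calculation on collected data, consistent with the examiner's characterization above.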
Step 2A, Prong-2 (the claim is evaluated to determine whether the judicial exception/abstract-idea is integrated into a Practical Application): No.
Claim 1 recites the following additional elements:
“receiving first training data of a first group of a plurality of secondary batteries measured during a first specific time period of charging, discharging, and resting processes, wherein the first group of the plurality of the secondary batteries are selected as a first training targets;
receiving first measurement data of a second group of the plurality of the secondary batteries selected during a second specific time period of charging, discharging, and resting processes, wherein the second group of the plurality of the secondary batteries are selected as a first prediction targets
receiving the optimized low voltage prediction model of the first group of the plurality of the secondary batteries and the optimal value of the weighting factor k.
receiving second training data of a third group of the plurality of the secondary batteries measured during a third specific time period of charging, discharging, and resting processes, wherein the third group of the plurality of the secondary batteries are selected as a second training targets;
receiving second measurement data of a fourth group of the plurality of the secondary batteries selected during a fourth specific period of charging, discharging, and resting processes, wherein the fourth group of the plurality of the secondary batteries are selected as a second prediction targets
outputting a third low-voltage determination prediction result of the third group of the plurality of the secondary batteries in which the second measurement data is applied to the second low-voltage prediction model of the third group of the plurality of the secondary batteries,
wherein process conditions of the first and second group of the plurality of the secondary batteries selected as the first training target and the first prediction target are different from process conditions of the third and fourth group of the plurality of the secondary batteries selected as the second training target and the second prediction target.”;
are data gathering steps for the particular technological environment or field of use. Receiving the first and second training data of the first and third groups, receiving the first and second measurement data of the second and fourth groups, and receiving the optimized low-voltage prediction model represent mere data gathering steps, and data gathering merely adds insignificant extra-solution activity to the judicial exception; the outputting step merely represents insignificant post-solution activity. Furthermore, nothing in the claim reasonably indicates that anything other than a generic computer (i.e., a “computing unit”) needs to be used to carry out the abstract idea. The above additional elements, considered individually and in combination with the other claim elements, do not reflect an improvement to another technology or technical field, and do not integrate the judicial exception into a practical application, such as normal/defect determination and updating new models to improve the accuracy of prediction of low-voltage failure of the secondary battery as disclosed in the specification, page 28. Therefore, the claims are directed to a judicial exception and require further analysis under Step 2B.
Step 2B (the claim is evaluated to determine whether it recites additional elements that amount to an inventive concept, i.e., whether the additional elements are significantly more than the recited judicial exception/abstract idea): No. The additional elements are merely insignificant extra-solution and post-solution activity, which are routine and conventional steps previously known to the pertinent industry. Therefore, the claim does not include additional elements amounting to significantly more than the judicial exception/abstract idea itself, and the claim is not patent eligible.
Claims 2-6, 13-16, and 18-19 are rejected under 35 U.S.C. 101 because they depend on claim 1 and therefore recite the abstract idea of claim 1, as well as the routine and conventional structure of claim 1 identified above. In addition, claims 2-6, 13-16, and 18-19 further recite elements that are simply additional standard computational/mathematical calculations for gathering data and/or generating data or a model. Furthermore, claims 2-6, 13-16, and 18-19 do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding Claim 7,
A method of predicting a low-voltage failure of a secondary battery, the method comprising:
inputting first training data of a first group of a plurality of secondary batteries measured during a first specific time period of charging, discharging, and resting processes, wherein the first group of the plurality of the secondary batteries are selected as a first training targets;
generating a first low-voltage prediction model of the first group of the plurality of the secondary batteries by performing machine learning on the first training data and selecting a main factor among the first training data;
inputting first measurement data of a second group of the plurality of the secondary batteries selected during a second specific time period of charging, discharging, and resting processes, wherein the second group of the plurality of the secondary batteries are selected as a first prediction targets;
comparing a low-voltage determination prediction result of the second group of the plurality of the secondary batteries in which the first measurement data is applied to the first low- voltage prediction model with a first low-voltage determination prediction result of the secondary battery of the first measurement data and finding an optimal value of a weighting factor k that maximizes a performance of the first low-voltage prediction model to verify and optimize the first low-voltage prediction model;
transferring the optimized first low voltage prediction model and the optimal value of the weighting factor k;
inputting second training data of a third group of the plurality of the secondary batteries measured during a third specific time period of charging, discharging, and resting processes, wherein the third group of the plurality of the
generating a second low-voltage prediction model of the third group of the plurality of the secondary batteries by performing machine learning on the transferred optimized low voltage prediction model of the first group of the plurality of the secondary batteries and the second training data;
inputting second measurement data of a fourth group of the plurality of the secondary batteries selected during a fourth specific period of charging, discharging, and resting processes, wherein the fourth group of the plurality of the secondary batteries are selected as a second prediction targets; and
outputting a second low-voltage determination prediction result of the third group of the plurality of the secondary batteries in which the second measurement data is applied to the second low-voltage prediction model of the third group of the plurality of the secondary batteries,
wherein process conditions of the first and second group of the plurality of the secondary batteries selected as the first training target and the first prediction target are different from process conditions of the third and fourth group of the plurality of the secondary batteries selected as the second training target and the second prediction target.
The claim limitations underlined above are an abstract idea (a process). The remaining limitations are “additional elements.”
Step 1 (Statutory Category): Yes. We determine whether the claims are directed to a statutory category by considering whether the claimed subject matter falls within the four statutory categories of patentable subject matter identified by 35 U.S.C. 101: process, machine, manufacture, or composition of matter. The above claim is considered to fall within a statutory category (process). Therefore, it is directed to a statutory category, i.e., a process.
Step 2 A, Prong-1 (the claim is evaluated to determine whether it is directed to a judicial-exception/abstract-idea): Yes.
In the above claim, the underlined portion constitutes an abstract idea because, under a broadest reasonable interpretation, it recites limitations that fall into an abstract idea exception. Specifically, under the 2019 Revised Patent Subject Matter Eligibility Guidance, it falls into the grouping of mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations).
For example, steps of
“A method of predicting a low-voltage failure of a secondary battery,
“generating a first low-voltage prediction model of the first group of the plurality of the secondary batteries by performing machine learning on the first training data and selecting a main factor among the first training data”;
“comparing a low-voltage determination prediction result of the second group of the plurality of the secondary batteries in which the first measurement data is applied to the first low- voltage prediction model with a first low-voltage determination prediction result of the secondary battery of the first measurement data and finding an optimal value of a weighting factor k that maximizes a performance of the first low-voltage prediction model to verify and optimize the first low-voltage prediction model”
“generating a second low-voltage prediction model of the third group of the plurality of the secondary batteries by performing machine learning on the transferred optimized low voltage prediction model of the first group of the plurality of the secondary batteries and the second training data”;
represent a mathematical concept. The above limitations represent data processing using a machine learning algorithm (Specification, pages 13-14): generating/developing a model and validating the model with measurement data (Specification, pages 27-29: “The computing unit receives the stored data values for machine learning, and evaluates and selects main factors for predicting low-voltage failures, and then generates a model.”). These steps represent a process that, under its broadest reasonable interpretation, encompasses a computing unit/computer implementing the abstract idea and making an evaluation/judgment based on the output data.
Step 2A, Prong-2 (the claim is evaluated to determine whether the judicial exception/abstract-idea is integrated into a Practical Application): No.
Claim 7 recites the following additional elements:
“inputting first training data of a first group of a plurality of secondary batteries measured during a first specific time period of charging, discharging, and resting processes, wherein the first group of the plurality of the secondary batteries are selected as a first training targets”
“inputting first measurement data of a second group of the plurality of the secondary batteries selected during a second specific time period of charging, discharging, and resting processes, wherein the second group of the plurality of the secondary batteries are selected as a first prediction targets”;
“transferring the optimized first low voltage prediction model and the optimal value of the weighting factor k”;
“inputting second training data of a third group of the plurality of the secondary batteries measured during a third specific time period of charging, discharging, and resting processes, wherein the third group of the plurality of the secondary batteries are selected as a second training targets”;
“outputting a second low-voltage determination prediction result of the third group of the plurality of the secondary batteries in which the second measurement data is applied to the second low-voltage prediction model of the third group of the plurality of the secondary batteries”,
“wherein process conditions of the first and second group of the plurality of the secondary batteries selected as the first training target and the first prediction target are different from process conditions of the third and fourth group of the plurality of the secondary batteries selected as the second training target and the second prediction target.”
are data gathering steps for the particular technological environment or field of use. Inputting the first and second training data of the first and third groups, inputting the first and second measurement data of the second and fourth groups, and transferring the optimized low-voltage prediction model represent mere data gathering steps, and data gathering merely adds insignificant extra-solution activity to the judicial exception; the outputting step merely represents insignificant post-solution activity. Furthermore, nothing in the claim reasonably indicates that anything other than a generic computer (i.e., a “computing unit”) needs to be used to carry out the abstract idea. The above additional elements, considered individually and in combination with the other claim elements, do not reflect an improvement to another technology or technical field, and do not integrate the judicial exception into a practical application, such as normal/defect determination and updating new models to improve the accuracy of prediction of low-voltage failure of the secondary battery as disclosed in the specification, page 28. Therefore, the claims are directed to a judicial exception and require further analysis under Step 2B.
Step 2B (the claim is evaluated to determine whether it recites additional elements that amount to an inventive concept, i.e., whether the additional elements are significantly more than the recited judicial exception/abstract idea): No. The additional elements are merely insignificant extra-solution and post-solution activity, which are routine and conventional steps previously known to the pertinent industry. Therefore, the claim does not include additional elements amounting to significantly more than the judicial exception/abstract idea itself, and the claim is not patent eligible.
Claims 8-12 are rejected under 35 U.S.C. 101 because they depend on claim 7 and therefore recite the abstract idea of claim 7, as well as the routine and conventional structure of claim 7 identified above. In addition, claims 8-12 further recite elements that are simply additional standard computational/mathematical calculations for gathering data and/or generating data or a model. Furthermore, claims 8-12 do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Regarding Claim 17,
A non-transitory machine-readable medium comprising machine-readable instructions encoded thereon for performing a method of predicting the low-voltage failure of the secondary battery, the method comprising:
inputting first training data of a first group of a plurality of secondary batteries measured during a first specific time period of charging, discharging, and resting processes, wherein the first group of the plurality of the secondary batteries are selected as first training targets;
generating a first low-voltage prediction model of the first group of the plurality of the secondary batteries by performing machine learning on the first training data and selecting a main factor among the first training data;
inputting first measurement data of a second group of the plurality of the secondary batteries selected during a second specific time period of charging, discharging, and resting processes, wherein the second group of the plurality of the secondary batteries are selected as first prediction targets;
comparing a low-voltage determination prediction result of the second group of the plurality of the secondary batteries in which the first measurement data is applied to the first low-voltage prediction model with a first low-voltage determination prediction result of the secondary battery of the first measurement data and finding an optimal value of a weighting factor k that maximizes a performance of the first low-voltage prediction model to verify and optimize the first low-voltage prediction model;
transferring the optimized first low voltage prediction model and the optimal value of the weighting factor k;
inputting second training data of a third group of the plurality of the secondary batteries measured during a third specific time period of charging, discharging, and resting processes, wherein the third group of the plurality of the secondary batteries are selected as second training targets;
generating a second low-voltage prediction model of the third group of the plurality of the secondary batteries by performing machine learning on the transferred optimized low-voltage prediction model of the first group of the plurality of the secondary batteries and the second training data;
inputting second measurement data of a fourth group of the plurality of the secondary batteries selected during a fourth specific period of charging, discharging, and resting processes, wherein the fourth group of the plurality of the secondary batteries are selected as second prediction targets; and
outputting a second low-voltage determination prediction result of the third group of the plurality of the secondary batteries in which the second measurement data is applied to the second low-voltage prediction model of the third group of the plurality of the secondary batteries,
wherein process conditions of the first and second group of the plurality of the secondary batteries selected as the first training target and the first prediction target are different from process conditions of the third and fourth group of the plurality of the secondary batteries selected as the second training target and the second prediction target.
The claim limitations underlined above constitute an abstract idea (a process). The remaining limitations are “additional elements”.
Step 1 (Statutory Category): Yes. We determine whether the claims are to a statutory category by considering whether the claimed subject matter falls within the four statutory categories of patentable subject matter identified by 35 U.S.C. 101: process, machine, manufacture, or composition of matter. The above claim falls within a statutory category (process). Therefore, it is directed to a statutory category, i.e., a process.
Step 2 A, Prong-1 (the claim is evaluated to determine whether it is directed to a judicial-exception/abstract-idea): Yes.
In the above claim, the underlined portion constitutes an abstract idea because, under a broadest reasonable interpretation, it recites limitations that fall into an abstract-idea exception. Specifically, under the 2019 Revised Patent Subject Matter Eligibility Guidance, it falls into the grouping of mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations) when recited as such in a claim limitation.
For example, steps of
“predicting a low-voltage failure of a secondary battery”, “performing machine learning on the first training data of the first group of the plurality of the secondary batteries and selecting a main factor among the first training data”;
“generating a first low-voltage prediction model of the first group of the plurality of the secondary batteries by performing machine learning on the first training data and selecting a main factor among the first training data;”
“comparing a low-voltage determination prediction result of the second group of the plurality of the secondary batteries in which the first measurement data is applied to the first low-voltage prediction model with a first low-voltage determination prediction result of the secondary battery of the first measurement data and finding an optimal value of a weighting factor k that maximizes a performance of the first low-voltage prediction model to verify and optimize the first low-voltage prediction model.”
“generating a second low-voltage prediction model of the third group of the plurality of the secondary batteries by performing machine learning on the transferred optimized low-voltage prediction model of the first group of the plurality of the secondary batteries and the second training data;”
represent mathematical concepts. The above limitations represent data processing using a machine learning algorithm (Specification, pages 13-14), generating/developing a model and validating the model with measurement data (Specification, pages 27-29, “The computing unit receives the stored data values for machine learning, and evaluates and selects main factors for predicting low-voltage failures, and then generates a model.”). These steps represent a process that, under its broadest reasonable interpretation, encompasses a computing unit/computer implementing the abstract idea and making an evaluation/judgment based on the output data.
Step 2A, Prong-2 (the claim is evaluated to determine whether the judicial exception/abstract-idea is integrated into a Practical Application): No.
Claim 17 recites additional elements
“A non-transitory machine-readable medium comprising machine-readable instructions encoded thereon for performing,” which recites a memory, a processor, and means for processing data signals, i.e., conventional components of a computer used for data acquisition and processing. The limitation is merely an insignificant component of generic data acquisition and fails to amount to significantly more than the judicial exception.
“inputting first training data of a first group of a plurality of secondary batteries measured during a first specific time period of charging, discharging, and resting processes, wherein the first group of the plurality of the secondary batteries are selected as first training targets”;
“inputting first measurement data of a second group of the plurality of the secondary batteries selected during a second specific time period of charging, discharging, and resting processes, wherein the second group of the plurality of the secondary batteries are selected as first prediction targets”
“transferring the optimized first low voltage prediction model and the optimal value of the weighting factor k”;
“inputting second training data of a third group of the plurality of the secondary batteries measured during a third specific time period of charging, discharging, and resting processes, wherein the third group of the plurality of the secondary batteries are selected as second training targets”;
“inputting second measurement data of a fourth group of the plurality of the secondary batteries selected during a fourth specific period of charging, discharging, and resting processes, wherein the fourth group of the plurality of the secondary batteries are selected as second prediction targets”; and
“outputting a second low-voltage determination prediction result of the third group of the plurality of the secondary batteries in which the second measurement data is applied to the second low-voltage prediction model of the third group of the plurality of the secondary batteries”,
“wherein process conditions of the first and second group of the plurality of the secondary batteries selected as the first training target and the first prediction target are different from process conditions of the third and fourth group of the plurality of the secondary batteries selected as the second training target and the second prediction target.”
are data gathering steps for the particular technological environment or field of use. Receiving the first and second training data of the first and third groups, receiving the first and second measurement data of the second and fourth groups, and receiving the optimized low-voltage prediction model represent mere data gathering steps, and the data gathering conditions only add insignificant extra-solution activity to the judicial exception. The outputting step merely represents insignificant post-solution activity. Furthermore, nothing in the claim reasonably indicates that anything other than a generic computer (i.e., a "computing unit") needs to be used to carry out the abstract idea. The above additional elements, considered individually and in combination with the other claim elements, do not reflect an improvement to other technology or a technical field, and do not integrate the judicial exception into a practical application, such as the normal/defect determination and updating of new models to improve the accuracy of the prediction of low-voltage failure of the secondary battery disclosed in the specification at page 28. Therefore, the claims are directed to a judicial exception and require further analysis under Step 2B.
Step 2B (the claim is evaluated to determine whether it recites additional elements that amount to an inventive concept, i.e., whether the additional elements are significantly more than the recited judicial exception/abstract idea): No. The additional element(s) are merely insignificant extra-solution and post-solution activity comprising routine and conventional steps previously known in the pertinent industry. Therefore, the claim does not include additional element(s) amounting to significantly more than the judicial exception/abstract idea itself, and the claim is not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Obeid et al. (hereinafter “Obeid”; IDS ref.), “Supervised learning for early and accurate battery terminal voltage collapse detection,” IET Circuits, Devices & Systems, IET Journals, 27 February 2020, in view of Tan et al. (hereinafter “Tan”), “Transfer Learning with Long Short-Term Memory Network for State-of-Health Prediction of Lithium-Ion Batteries,” IEEE Transactions on Industrial Electronics, Vol. 67, No. 10, October 2020.
Regarding Claim 1, Obeid teaches,
An apparatus (Obeid, Page 352, left column, “actual runtime operation of the proposed algorithm may happen on a BMS”) for predicting a low-voltage failure of a secondary battery (Obeid, Figure 10, Page 348, left col., lower middle paragraph, “we approach the problem of battery failure prediction from a pattern recognition perspective. Also, for this paper, by ‘battery failure’ we mean ‘battery terminal voltage collapse.’ The battery's terminal voltage patterns are monitored,” using an algorithm by the BMS; see Page 350, right col., middle paragraph, “the supervised learning-based battery terminal voltage collapse detection methodology Algorithm 1 (see below)”; Figure 6. Rechargeable batteries, particularly lithium-ion (Li-ion) batteries, read on “secondary battery” (Obeid, Page 347, left col., top paragraph, introduction: “Rechargeable batteries, particularly the lithium-ion (Li-ion) batteries”). NOTE: it is well known in the art that for electric vehicles (EVs), rechargeable/lithium-ion batteries are used as “secondary batteries.”), the apparatus configured to perform:
receiving first training data (Obeid, Figure 6, Page 350, right col., Algorithm 1, Step 1: “Obtain the simulated training data”) of a first group of a plurality of secondary batteries (Obeid, Figure 6, Page 348, right col., top paragraph, “the training is conducted based on data simulated using a mathematical model for a 4 V, 850 mAh Li-ion battery.” NOTE: the “4 V, 850 mAh Li-ion battery” represents a first group of a plurality of batteries used for training data. See Page 352, right column, top paragraph, “Sixteen different batteries were discharged through varying loads, and their terminal voltage was observed throughout the process”; this algorithm can be applied to a plurality of batteries of the same group.) measured during a first specific time period of charging, discharging, and resting processes (Obeid, Page 353, Table 3, “sampling period (SPS)” reads on “time period of charging, discharging”; Table 3 discloses different times for different batteries. Obeid, Page 352, right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14).”), wherein the first group of the plurality of the secondary batteries are selected as first training targets (NOTE: battery number 1 is the first-group training target trained by the NN; see Obeid, Page 352, bottom paragraph, “we used transfer learning; i.e. after training the NN on the simulated data, we trained it an additional time on battery number 1”);
receiving first measurement data of a second group of the plurality of the secondary batteries (Obeid, Figure 6, Page 350, right col., Algorithm 1, Steps 3-5: “3: Capture segments of data with overlapping windows: obtain N windows; 4: for i = [1:N] do; 5: dataset: raw data”) selected during a second specific time period of charging, discharging, and resting processes (Obeid, Page 353, Table 3, “sampling period (SPS)” reads on “time period of charging, discharging”; Table 3 discloses different times for different batteries),
wherein the second group of the plurality of the secondary batteries are selected as first prediction targets (Obeid, Page 352, section 4.2, right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14) ... we used transfer learning; i.e. after training the NN on the simulated data, we trained it an additional time on battery number 1 (training target). Then we used the trained NN to test the seven batteries mentioned above.” NOTE: each of batteries 2, 4, 5, 6, 8, 9, 14 is an individual “prediction target”; see Table 3);
generating a first low-voltage prediction model of the first group of the plurality of the secondary batteries by performing machine learning on the first training data (Obeid, Figure 6, Page 350, right col., Algorithm 1, Steps 9-11: “Step 10: Train model”) and selecting a main factor among the first training data (Obeid, Figure 6, Page 352, F1 score, equation (9), Table 2; Page 351, right col., bottom paragraph, “All three metrics are shown in Table 2. The recall and precision are defined in (7) and (8), respectively. Moreover, the F1 score, defined in (9), is included in Table 2 as well.”);
comparing a second low-voltage determination prediction result of the second group of the plurality of the secondary batteries (Obeid, Figure 6, Page 350, right col., Algorithm 1, Steps 11-13: “11: Test model on simulated data; 12: Pre-process real data; 13: Test/use model on real data”) in which the first measurement data is applied to a first low-voltage prediction model of the first group of the plurality of the secondary batteries generated using the first training data (Algorithm 1, Steps 1-3) with a first low-voltage determination prediction result of the second group of the plurality of the secondary batteries based on the first measurement data (Algorithm 1, Steps 9-10: “9: Split each dataset_j, j ∈ {1, 2, 3}, into TR (training set) and TE (testing set); 10: Train model”; Algorithm 1, Step 13); and
receiving second training data of a third group of the plurality of the secondary batteries (Algorithm 1, Steps 1-4, “Algorithm 1: Supervised learning-based battery terminal voltage collapse detection methodology. 1: Obtain the simulated training data by solving (1)–(4); 2: Label the data based on SOC level; 3: Capture segments of data with overlapping windows: obtain N windows.” Step 4, “for i = [1:N] do,” of Algorithm 1 reads on a “third group”: the algorithm can be implemented on any ith battery, where i = [1, N] indexes different batteries, and the training data can be generated using any different third group of batteries under different charging and discharging conditions. This is an algorithm design choice.) measured during a third specific time period of charging, discharging, and resting processes, wherein the third group of the plurality of the secondary batteries are selected as second training targets (Obeid, Page 352, right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14).” Each battery has a different time);
receiving second measurement data of a fourth group of the plurality of the secondary batteries selected during a fourth specific period of charging, discharging, and resting processes (Obeid, Figure 6, Page 350, right col., Algorithm 1, Steps 4-7: input raw data; Step 4, i = [1:N]; Page 350, left col., bottom paragraph, “the raw voltage values after normalization are used as features.” NOTE: Step 4, “for i = [1:N] do,” of Algorithm 1 reads on a “fourth group,” where N can vary with any nth number of batteries; each battery group has a different time period, discharge/charge cycle, and rest process. See Table 3: each battery has different conditions and parameters),
wherein the fourth group of the plurality of the secondary batteries are selected as second prediction targets (Obeid, Page 352, section 4.2, right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14). To adapt our classification model to the real scenarios, we used transfer learning; i.e. after training the NN on the simulated data, we trained it an additional time on battery number 1 (training target). Then we used the trained NN to test the seven batteries mentioned above.” NOTE: each of batteries 2, 4, 5, 6, 8, 9, 14 is an individual “prediction target”; battery 4 could be the fourth group and a second prediction target. See Table 3);
generating a second low-voltage prediction model of the third group of the plurality of the secondary batteries by performing machine learning on the optimized low-voltage prediction model (Obeid, Figure 6) of the first group of the plurality of the secondary batteries and the second training data of the third group of the plurality of the secondary batteries (Obeid, Figure 6, Page 350, right col., Algorithm 1, Steps 9-10: “9: Split each dataset_j, j ∈ {1, 2, 3}, into TR (training set) and TE (testing set); 10: Train model”); and
outputting a third low-voltage determination prediction result of the third group of the plurality of the secondary batteries (Obeid, Figure 6 (see below), Page 350, right col., Algorithm 1, Steps 11-13: “11: Test model on simulated data; 12: Pre-process real data; 13: Test/use model on real data”), in which the second measurement data is applied to the second low-voltage prediction model of the third group of the plurality of the secondary batteries (Obeid, Figure 6, Page 350, right col., Algorithm 1, Steps 9-10: “9: Split each dataset_j, j ∈ {1, 2, 3}, into TR (training set) and TE (testing set); 10: Train model”);
wherein process conditions of the first and second group of the plurality of the secondary batteries selected as the first training target and the first prediction target are different from process conditions of the third and fourth group of the plurality of the secondary batteries selected as the second training target and the second prediction target (Obeid, Page 350, left col., middle paragraph, “each battery may take a different route to battery failure. The overlapping windows that document the different routes to failure are then labelled as either coming from the safe regions or the failure regions, depending on the SOC information as described earlier. The extracted windows are then split into two groups: a training set (TR) and a testing set (TE). The TR constitutes 70% of all windows. We ensure that windows from the same battery/load combination do not appear simultaneously in training or testing sets. We extract data both from simulated battery and load models, and also from tests on real batteries. Table I explains the different constructed datasets.” NOTE: each battery's process condition is different; for example, see Obeid, Page 353, Table 3, and Page 352, section 4.2, right col., top paragraph, “Sixteen different batteries were discharged through varying loads, and their terminal voltage was observed throughout the process. In this procedure, the batteries were drained at 16 different current levels and current profiles”).
[Image: media_image1.png (greyscale)]
[Image: media_image2.png (greyscale)]
Obeid, Page 350, Algorithm 1.
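For reference only, the precision, recall, and F1 metrics reported in Obeid's Table 2 (equations (7)–(9) of Obeid) are understood to follow the conventional definitions; assuming the usual true-positive (TP), false-positive (FP), and false-negative (FN) counts, those forms may be sketched as:

```latex
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
```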
Obeid teaches an F1 score for the precision of the prediction; Obeid is silent on:
finding an optimal value of a weighting factor k that maximizes a performance of the first low-voltage prediction model of the first group of the plurality of the secondary batteries to verify and optimize the low-voltage prediction model of the first group of the plurality of the secondary batteries;
receiving the optimized low voltage prediction model of the first group of the plurality of the secondary batteries and the optimal value of the weighting factor k.
However, Tan teaches finding an optimal value of a weighting factor k that maximizes a performance of the first low-voltage prediction model of the first group of the plurality of the secondary batteries to verify and optimize the low-voltage prediction model of the first group of the plurality of the secondary batteries (Tan, Figure 5, Page 8723, abstract, “we select the task with the highest FES score to obtain the base model with superior generalization performance.” A high feature expression scoring (FES) value reads on the “optimal value of a weighting factor k”); and
receiving the optimized low-voltage prediction model of the first group of the plurality of the secondary batteries and the optimal value of the weighting factor k (Tan, Figure 5, see below, block “training base model” with the highest FES score, and transfer learning).
[Image: media_image3.png (greyscale)]
Tan, Figure 5
It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify Obeid's transfer learning method for predicting an optimized model to incorporate Tan's transfer-learning machine learning method with the feature expression scoring (FES) as taught by Tan, to obtain an accurately trained model and generate output results with optimal precision (Tan, conclusion). It would have been obvious to a person of ordinary skill to include the well-known transfer-learning model optimization along with the other machine learning network, in order to yield the predictable result of generating accurate battery performance predictions, yet with higher accuracy (KSR).
Regarding Claim 2, the combination of Obeid and Tan teaches the apparatus of claim 1,
Obeid further teaches wherein each of the first training data, the first measurement data, the second training data, and the second measurement data refer to one or more measurement values selected from a voltage measurement value (Obeid, Figures 1-2, Page 348, left col., middle paragraph, “The battery's terminal voltage patterns are monitored”; also see equation (4), “The term y(t) represents the battery terminal voltage.” One of the measurements is a voltage value.), a current measurement value, an impedance measurement value, a temperature measurement value, a capacity measurement value, and a power measurement value that are measured in the charging, discharging, and resting processes of the plurality of the secondary batteries independently (Obeid, Page 353, Table 3; Page 352, right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14)”; Page 352, section 4.2, right col., top paragraph, “Sixteen different batteries were discharged through varying loads, and their terminal voltage was observed throughout the process. In this procedure, the batteries were drained at 16 different current levels and current profiles”).
Regarding Claim 3, the combination of Obeid and Tan teaches the apparatus of claim 1,
Obeid further teaches wherein the machine learning independently applies one or more methods selected from decision tree, random forest, neural network, deep neural network, support vector machine, and gradient boosting machine (Obeid, Figure 6, neural network; Page 348, left col., bottom paragraph, “a fully connected artificial neural network is used as a classifier”).
Regarding Claim 4, the combination of Obeid and Tan teaches the apparatus of claim 1,
Obeid is silent on wherein the optimal value of the weighting factor k means a value that minimizes a Misclassification Error Rate (MER).
However, Tan teaches wherein the optimal value of the weighting factor k means a value that minimizes a Misclassification Error Rate (MER) (Tan, Table VII, Figures 7-8, Page 8729, right col., bottom paragraph, and Page 8730, left col., top paragraph: “the RMSE (root mean square error) of the transfer learning is significantly positive correlated with the FES score. It could be observed that for CS35, the FES_CS35 = 18 is the highest and same as that of the B7, and the RMSE_CS35 = 0.0052 is the lowest. The experimental results demonstrate the validity of the FES rule for CACLE datasets. Compared to other neural network methods (LSTM-FC, DNN, and GMDH), the LSTM-FC-TL achieves optimal stability with the lowest SDE (SD error).” The highest FES score and the lowest RMSE or SDE read on the “optimal value of the weighting factor k” that “means a value that minimizes a Misclassification Error Rate (MER)”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify Obeid's transfer learning method for predicting an optimized model to incorporate Tan's transfer-learning machine learning method with the feature expression scoring (FES) as taught by Tan, to obtain an accurately trained model and generate output results with optimal precision (Tan, conclusion). It would have been obvious to a person of ordinary skill to include the well-known transfer-learning model optimization along with the other machine learning network, in order to yield the predictable result of generating accurate battery performance predictions, yet with higher accuracy (KSR).
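For reference only, the Misclassification Error Rate (MER) recited in claim 4 and the RMSE statistic reported by Tan are understood to take the conventional forms below (a sketch assuming N test samples, a misclassification count N_err, predictions ŷ_i, and targets y_i):

```latex
\text{MER} = \frac{N_{\text{err}}}{N}, \qquad
\text{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left(\hat{y}_i - y_i\right)^2}
```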
Regarding Claim 5, the combination of Obeid and Tan teaches the apparatus of claim 1,
Obeid further teaches that the apparatus is further configured to perform outputting the first low-voltage determination prediction result (Obeid, Figures 6-9; Page 351, left col., bottom paragraph, “the performance of the corresponding feature set is displayed in subfigures titled Dataset 1, 2, 3, respectively. The results presented are for a randomly selected test group of windows, with and without noise. The legend in Fig. 9 explains the pattern coding of the figures, and shows the four different types of outcomes in the NN prediction”) in which the first measurement data is applied to the first low-voltage prediction model of the first group of the plurality of the secondary batteries (Obeid, Figure 6, Page 350, right col., Algorithm 1, Steps 9-10: “9: Split each dataset_j, j ∈ {1, 2, 3}, into TR (training set) and TE (testing set); 10: Train model”).
Regarding Claim 6, the combination of Obeid and Tan teaches the apparatus of claim 1,
Obeid further teaches that the apparatus is further configured to perform: verifying the second low-voltage prediction model by comparing the second low-voltage determination prediction result (Obeid, Figure 6, Page 350, right col., Algorithm 1, Step 11: Test model on simulated data; Step 12: Pre-process real data; Step 13: Test/use model on real data) in which the second measurement data is applied to the second low-voltage prediction model generated based on the second training data and the second low-voltage determination result based on the second measurement data (Obeid, Figure 6, Page 350, right col., Algorithm 1, Steps 11-13; Page 352, right col., bottom paragraph, “To adapt our classification model to the real scenarios, we used transfer learning; i.e. after training the NN on the simulated data, we trained it an additional time on battery number 1. Then we used the trained NN to test the seven batteries mentioned above”).
Regarding Claim 7, Obeid teaches,
A method of predicting a low-voltage failure (Obeid, Figure 10, Page 350, right col., middle paragraph, “the supervised learning-based battery terminal voltage collapse detection methodology Algorithm 1”; Figure 6) of a secondary battery, the method comprising:
inputting first training data (Obeid, Figure 6, Page 350, right col., Algorithm 1, Step 1: “Obtain the simulated training data”) of a first group of a plurality of secondary batteries (Obeid, Figure 6, Page 348, right col., top paragraph, “the training is conducted based on data simulated using a mathematical model for a 4 V, 850 mAh Li-ion battery.” NOTE: the “4 V, 850 mAh Li-ion battery” represents a first group of a plurality of batteries used to obtain training data. Rechargeable batteries, particularly lithium-ion (Li-ion) batteries, read on “secondary battery” (Obeid, Page 347, left col., top paragraph, introduction: “Rechargeable batteries, particularly the lithium-ion (Li-ion) batteries”). NOTE: it is well known in the art that for electric vehicles (EVs), rechargeable/lithium-ion batteries are used as “secondary batteries.”) measured during a first specific time period of charging, discharging, and resting processes (Obeid, Page 353, Table 3, “sampling period (SPS)” reads on “time period of charging, discharging”; Table 3 discloses different times for different batteries. Obeid, Page 352, right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14).”), wherein the first group of the plurality of the secondary batteries are selected as first training targets (NOTE: battery number 1 is the first-group training target trained by the NN; see Obeid, Page 352, bottom paragraph, “we used transfer learning; i.e. after training the NN on the simulated data, we trained it an additional time on battery number 1”);
generating a first low-voltage prediction model of the first group of the plurality of the secondary batteries by performing machine learning on the first training data and selecting a main factor among the first training data (Obeid, Figure 6, Page 350, right col., Algorithm 1, Steps 9-11: “9: Split each dataset_j, j ∈ {1, 2, 3}, into TR (training set) and TE (testing set); 10: Train model”; Page 348, left col., bottom paragraph, “raw values, first-order derivatives, and Fourier transform over sliding time windows of batteries' terminal voltage values are used as features. Then, a fully connected artificial neural network is used as a classifier”);
inputting first measurement data of a second group of the plurality of the secondary batteries (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Steps 3–5: “3: Capture segments of data with overlapping windows: obtain N windows; 4: for i = [1:N] do; 5: dataset ← raw data”) selected during a second specific time period of charging, discharging, and resting processes (Obeid, page 353, Table 3: “sampling period (SPS)” reads on “time period of charging, discharging”. Table 3 discloses a different time for each battery),
wherein the second group of the plurality of the secondary batteries are selected as a first prediction targets (Obeid, Page 352, 4.2, Right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14). … we used transfer learning; i.e. after training the NN on the simulated data, we trained it an additional time on battery number 1 (training target). Then we used the trained NN to test the seven batteries mentioned above”. NOTE: batteries 2, 4, 5, 6, 8, 9, and 14 are each an individual “prediction target”; see Table 3);
comparing a second low-voltage determination prediction result of the second group of the plurality of the secondary batteries (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Steps 11–13: “11: Test model on simulated data; 12: Pre-process real data; 13: Test/use model on real data”) in which the first measurement data is applied to a first low-voltage prediction model of the first group of the plurality of the secondary batteries generated using the first training data (Algorithm 1, Steps 1–3) with a first low-voltage determination prediction result of the second group of the plurality of the secondary batteries based on the first measurement data (Algorithm 1, Steps 9–10: “9: Split each dataset j, j ∈ {1, 2, 3} into TR (training set) and TE (testing set); 10: Train model”; Algorithm 1, Step 13); and
receiving second training data of a third group of the plurality of the secondary batteries (Algorithm 1, Steps 1–4: “Algorithm 1: Supervised learning-based battery terminal voltage collapse detection methodology. 1: Obtain the simulated training data by solving (1)–(4); 2: Label the data based on SOC level; 3: Capture segments of data with overlapping windows: obtain N windows”. Step 4, “i = [1:N] do”, of Algorithm 1 reads on “third group”: the algorithm can be implemented on any ith battery where i = [1, N] over N different batteries, and the training data can be generated using any different third group of batteries under different charging and discharging conditions. This is an algorithm design choice.) measured during a third specific time period of charging, discharging, and resting processes, wherein the third group of the plurality of the secondary batteries are selected as a second training targets (Obeid, Page 352, Right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14).” Each battery has a different time);
generating a second low-voltage prediction model of the third group of the plurality of the secondary batteries by performing machine learning on the optimized low-voltage prediction model (Obeid, Figure 6) of the first group of the plurality of the secondary batteries and the second training data of the third group of the plurality of the secondary batteries (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Steps 9–10: “9: Split each dataset j, j ∈ {1, 2, 3} into TR (training set) and TE (testing set); 10: Train model”);
receiving second measurement data of a fourth group of the plurality of the secondary batteries selected during a fourth specific period of charging, discharging, and resting processes (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Steps 4–7: input raw data, Step 4: i = [1:N]; Page 350, left col., bottom paragraph, “the raw voltage values after normalization are used as features”. NOTE: Step 4, “i = [1:N] do”, of Algorithm 1 reads on “fourth group”, where N can be any number of batteries; each battery group has a different time period, charge/discharge cycle, and rest process. See Table 3: each battery has different conditions and parameters),
wherein the fourth group of the plurality of the secondary batteries are selected as a second prediction targets (Obeid, Page 352, 4.2, Right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14). To adapt our classification model to the real scenarios, we used transfer learning; i.e. after training the NN on the simulated data, we trained it an additional time on battery number 1 (training target). Then we used the trained NN to test the seven batteries mentioned above”. NOTE: batteries 2, 4, 5, 6, 8, 9, and 14 are each an individual “prediction target”; battery 4 could be the fourth group and a second prediction target. See Table 3); and
outputting a second low-voltage determination prediction result of the third group of the plurality of the secondary batteries (Obeid, Figure 6 (see below), Page 350, Right Col., Algorithm 1, Steps 11–13: “11: Test model on simulated data; 12: Pre-process real data; 13: Test/use model on real data”), in which the second measurement data is applied to the second low-voltage prediction model of the third group of the plurality of the secondary batteries (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Steps 9–10: “9: Split each dataset j, j ∈ {1, 2, 3} into TR (training set) and TE (testing set); 10: Train model”);
wherein process conditions of the first and second group of the plurality of the secondary batteries selected as the first training target and the first prediction target are different from process conditions of the third and fourth group of the plurality of the secondary batteries selected as the second training target and the second prediction target (Obeid, Page 350, left col., middle paragraph, “each battery may take a different route to battery failure. The overlapping windows that document the different routes to failure are then labelled as either coming from the safe regions or the failure regions, depending on the SOC information as described earlier. The extracted windows are then split into two groups: a training set (TR) and a testing set (TE). The TR constitutes 70% of all windows. We ensure that windows from the same battery/load combination do not appear simultaneously in training or testing sets. We extract data both from simulated battery and load models, and also from tests on real batteries. Table I explains the different constructed datasets”. NOTE: each battery's process condition is different; for example, see Obeid, Page 353, Table 3, and Page 352, 4.2, Right col., top paragraph, “Sixteen different batteries were discharged through varying loads, and their terminal voltage was observed throughout the process. In this procedure, the batteries were drained at 16 different current levels and current profiles”).
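NOTE (illustrative only): the train-on-simulated-data, fine-tune-on-one-battery, test-on-the-rest workflow of Obeid's Algorithm 1, as mapped above, can be sketched as follows. This is the examiner's own minimal sketch, not code from Obeid; all data, feature shapes, and the `windows` helper are hypothetical placeholders, and a scikit-learn `MLPClassifier` stands in for Obeid's fully connected neural network.

```python
# Sketch of the Algorithm 1 workflow: train a base classifier on simulated
# windows, transfer-learn on "battery number 1", test on remaining batteries.
# All data here is synthetic; names are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def windows(n, collapse_shift):
    """Synthetic stand-in for windowed terminal-voltage features and labels."""
    X = rng.normal(size=(n, 8))
    # Label 1 = "voltage-collapse region", by analogy to Obeid's SOC labelling.
    y = (X[:, 0] + collapse_shift * X[:, 1] > 0).astype(int)
    return X, y

# Steps 1-10: simulated training data -> train base model.
X_sim, y_sim = windows(400, 1.0)
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
nn.fit(X_sim, y_sim)

# Transfer learning: an additional training pass on "battery number 1".
X_b1, y_b1 = windows(80, 1.1)
nn.partial_fit(X_b1, y_b1)

# Steps 11-13: test the adapted model on held-out battery windows.
X_test, y_test = windows(100, 1.1)
acc = nn.score(X_test, y_test)
print(f"accuracy on held-out battery windows: {acc:.2f}")
```

The split into a training set (TR) and testing set (TE) and the window extraction are compressed into the synthetic `windows` helper for brevity.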
Obeid teaches an F1-score for the precision of the prediction; however, Obeid is silent on:
finding an optimal value of a weighting factor k that maximizes a performance of the first low-voltage prediction model to verify and optimize the first low-voltage prediction model;
transferring the optimized first low voltage prediction model and the optimal value of the weighting factor k;
However, Tan teaches finding an optimal value of a weighting factor k that maximizes a performance of the first low-voltage prediction model of the first group of the plurality of the secondary batteries to verify and optimize the low-voltage prediction model of the first group of the plurality of the secondary batteries (Tan, Figure 5, page 8723, abstract: “we select the task with the highest FES score to obtain the base model with superior generalization performance”. A high feature expression scoring (FES) reads on “optimal value of a weighting factor k”);
receiving the optimized low voltage prediction model of the first group of the plurality of the secondary batteries and the optimal value of the weighting factor k (Tan, Figure 5, see below, block “training base model” with the highest FES score, and transfer learning).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify Obeid's transfer-learning method for predicting an optimized model to incorporate Tan's transfer-learning machine learning method with the feature expression scoring (FES) as taught by Tan, to obtain an accurately trained model and generate output results with optimal precision (Tan, conclusion). It would have been obvious to a person of ordinary skill to include the well-known transfer-learning model optimization along with the other machine learning network in order to yield the predictable result of generating accurate battery performance predictions, yet with higher accuracy (KSR).
Regarding Claim 8, combination of Obeid and Tan teaches the method of claim 7,
Obeid further teaches wherein each of the first training data, the first measurement data, the second training data, and the second measurement data refer to one or more measurement values selected from a voltage measurement value (Obeid, Figures 1–2, Page 348, left col., middle paragraph, “The battery's terminal voltage patterns are monitored”; also see equation 4, “The term y(t) represents the battery terminal voltage”. One of the measurements is a voltage value.), a current measurement value, an impedance measurement value, a temperature measurement value, a capacity measurement value, and a power measurement value that are measured in the charging, discharging, and resting processes of the plurality of the secondary batteries independently (Obeid, Page 353, Table 3; Page 352, Right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14).”; Page 352, 4.2, Right col., top paragraph, “Sixteen different batteries were discharged through varying loads, and their terminal voltage was observed throughout the process. In this procedure, the batteries were drained at 16 different current levels and current profiles”).
Regarding Claim 9, combination of Obeid and Tan teaches the method of claim 7,
Obeid further teaches wherein the machine learning independently applies one or more methods selected from decision tree, random forest, neural network, deep neural network, support vector machine, and gradient boosting machine (Obeid, Figure 6, Neural Network; page 348, left col., bottom paragraph, “a fully connected artificial neural network is used as a classifier”).
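NOTE (illustrative only): the claim's recitation that the listed learners may be applied independently reflects the fact that such classifiers are interchangeable behind a common training interface. The sketch below is the examiner's own illustration using scikit-learn's shared `fit`/`score` estimator API on synthetic data; it is not code from Obeid or Tan.

```python
# Interchangeable classifiers for the same windowed-feature task
# (synthetic data; any one learner can be substituted for another).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)  # placeholder "low-voltage failure" labels

learners = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=50, random_state=0),
    "support vector machine": SVC(),
    "gradient boosting machine": GradientBoostingClassifier(random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(8,), max_iter=500,
                                    random_state=0),
}
for name, model in learners.items():
    model.fit(X, y)                      # identical call for every method
    print(name, round(model.score(X, y), 2))
```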
Regarding Claim 10, combination of Obeid and Tan teaches the method of claim 7,
Obeid is silent on wherein the optimal value of the weighting factor k means a value that minimizes a Misclassification Error Rate (MER).
However, Tan teaches wherein the optimal value of the weighting factor k means a value that minimizes a Misclassification Error Rate (MER) (Tan, Table VII, Figures 7–8, page 8729, right col., bottom paragraph, and page 8730, left col., top paragraph: “the RMSE (Root mean square error) of the transfer learning is significantly positive correlated with the FES score. It could be observed that for CS35, the FEScs35 = 18 is the highest and same as that of the B7, and the RMSEcs35 = 0.0052 is the lowest. The experimental results demonstrate the validity of the FES rule for CACLE datasets. Compared to other neural network methods (LSTM-FC, DNN, and GMDH), the LSTM-FC-TL achieves optimal stability with the lowest SDE (SD error)”. The highest FES score and lowest RMSE or SDE read on “the optimal value of the weighting factor k means a value that minimizes a Misclassification Error Rate (MER)”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify Obeid's transfer-learning method for predicting an optimized model to incorporate Tan's transfer-learning machine learning method with the feature expression scoring (FES) as taught by Tan, to obtain an accurately trained model and generate output results with optimal precision (Tan, conclusion). It would have been obvious to a person of ordinary skill to include the well-known transfer-learning model optimization along with the other machine learning network in order to yield the predictable result of generating accurate battery performance predictions, yet with higher accuracy (KSR).
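NOTE (illustrative only): the claimed selection of a weighting factor k that minimizes the Misclassification Error Rate can be sketched as a simple grid search over k on validation data. This is the examiner's own sketch of the claimed mathematical operation on synthetic data, not code from Obeid or Tan; the `scores` variable is a hypothetical model output.

```python
# Grid search for a weighting factor k minimizing the Misclassification
# Error Rate (MER) of a fixed prediction model. All quantities synthetic.
import numpy as np

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=500)
# Hypothetical model scores: higher means "low-voltage failure more likely".
scores = y_true + rng.normal(scale=0.8, size=500)

def mer(k):
    """MER when class 1 ("failure") is declared for score > k."""
    y_pred = (scores > k).astype(int)
    return float(np.mean(y_pred != y_true))

ks = np.linspace(-1.0, 2.0, 61)          # candidate values of k
errors = [mer(k) for k in ks]
k_opt = ks[int(np.argmin(errors))]        # k minimizing the MER
print(f"optimal k = {k_opt:.2f}, MER = {min(errors):.3f}")
```

By construction, the k returned is at least as good (in MER) as any other candidate on the grid, which is the sense in which the claim's "optimal value" is read.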
Regarding Claim 11, combination of Obeid and Tan teaches the method of claim 7,
Obeid further teaches being further configured to perform outputting the first low-voltage determination prediction result (Obeid, Figure 6, Figures 7–9, decision; page 351, left col., bottom paragraph, “the performance of the corresponding feature set is displayed in subfigures titled Dataset 1, 2, 3, respectively. The results presented are for a randomly selected test group of windows, with and without noise. The legend in Fig. 9 explains the pattern coding of the figures, and shows the four different types of outcomes in the NN prediction”), in which the first measurement data is applied to the first low-voltage prediction model of the first group of the plurality of the secondary batteries (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Steps 9–10: “9: Split each dataset j, j ∈ {1, 2, 3} into TR (training set) and TE (testing set); 10: Train model”).
Regarding Claim 12, combination of Obeid and Tan teaches the method of claim 7,
Obeid further teaches being further configured to perform: verifying the second low voltage prediction model by comparing the second low voltage determination prediction result (Obeid, Figure 6 (see below), Page 350, Right Col., Algorithm 1, Steps 11–13: “11: Test model on simulated data; 12: Pre-process real data; 13: Test/use model on real data”), in which the second measurement data is applied to the second low voltage prediction model generated based on the second training data, with the second low voltage determination result based on the second measurement data (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Steps 11–13; page 352, right col., bottom paragraph, “To adapt our classification model to the real scenarios, we used transfer learning; i.e. after training the NN on the simulated data, we trained it an additional time on battery number 1. Then we used the trained NN to test the seven batteries mentioned above”).
Regarding Claim 13, combination of Obeid and Tan teaches the apparatus of claim 1,
Obeid further teaches a Battery Management System (BMS) apparatus including the apparatus for predicting the low-voltage failure of the secondary battery of claim 1 (Obeid, Page 352, left column, “actual runtime operation of the proposed algorithm may happen on a BMS”).
Regarding Claim 14, combination of Obeid and Tan teaches the apparatus of claim 13,
Obeid further teaches wherein the BMS apparatus is remotely controlled (Obeid, page 352, left col., bottom paragraph, “While actual runtime operation of the proposed algorithm may happen on a BMS, training can be done on any computer. It is also worth noting that there are several powerful single board computers/microcontrollers available now, which can be suited for such operations”. NOTE: any computer, wired or remote, can be used in the BMS system).
Regarding Claim 15, combination of Obeid and Tan teaches the apparatus of claim 14,
Obeid further teaches a mobile device including the BMS (Obeid, page 352, left col. Bottom paragraph “While actual runtime operation of the proposed algorithm may happen on a BMS, training can be done on any computer. It is also worth noting that there are several powerful single board computers /microcontrollers available now, which can be suited for such operations”. NOTE: any computer or a mobile device can perform as a BMS system. It is a design choice).
Regarding Claim 16, combination of Obeid and Tan teaches the apparatus of claim 1,
Obeid further teaches wherein the BMS apparatus is embedded in the mobile device (Obeid, page 352, left col., bottom paragraph, “While actual runtime operation of the proposed algorithm may happen on a BMS, training can be done on any computer. It is also worth noting that there are several powerful single board computers/microcontrollers available now, which can be suited for such operations”. NOTE: any of the several powerful single-board computers/microcontrollers available can be embedded into a mobile device and perform as a BMS system. It is a design choice).
Regarding Claim 17, Obeid teaches,
A non-transitory machine-readable medium comprising machine-readable instructions encoded thereon for performing (Obeid, page 352, left col., bottom paragraph, “While actual runtime operation of the proposed algorithm may happen on a BMS, training can be done on any computer”) a method of predicting the low-voltage failure of the secondary battery (Obeid, Figure 10; Page 350, Right col., middle paragraph, “the supervised learning-based battery terminal voltage collapse detection methodology Algorithm 1”; Figure 6), the method comprising:
inputting first training data (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Step 1: “Obtain the simulated training data”) of a first group of a plurality of secondary batteries (Obeid, Figure 6, Page 348, Right Col., top paragraph, “the training is conducted based on data simulated using a mathematical model for a 4 V, 850 mAh Li-ion battery”. NOTE: the “4 V, 850 mAh Li-ion battery” represents a first group of a plurality of batteries used to obtain training data. Rechargeable batteries, particularly lithium-ion (Li-ion) batteries, read on “secondary battery” (Obeid, Page 347, left col., top paragraph, introduction: “Rechargeable batteries, particularly the lithium-ion (Li-ion) batteries”). NOTE: it is well known in the art that rechargeable/lithium-ion batteries are used as “secondary batteries” in Electric Vehicles (EVs).) measured during a first specific time period of charging, discharging, and resting processes (Obeid, page 353, Table 3: “sampling period (SPS)” reads on “time period of charging, discharging”. Table 3 discloses a different time for each battery (Obeid, Page 352, Right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14).”)), wherein the first group of the plurality of the secondary batteries are selected as a first training targets (NOTE: battery number 1 is the first-group training target trained by the NN; see Obeid, Page 352, bottom paragraph, “we used transfer learning; i.e. after training the NN on the simulated data, we trained it an additional time on battery number 1”);
generating a first low-voltage prediction model of the first group of the plurality of the secondary batteries by performing machine learning on the first training data and selecting a main factor among the first training data (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Steps 9–10: “9: Split each dataset j, j ∈ {1, 2, 3} into TR (training set) and TE (testing set); 10: Train model”; Page 348, left col., bottom paragraph, “raw values, first-order derivatives, and Fourier transform over sliding time windows of batteries' terminal voltage values are used as features. Then, a fully connected artificial neural network is used as a classifier”);
inputting first measurement data of a second group of the plurality of the secondary batteries (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Steps 3–5: “3: Capture segments of data with overlapping windows: obtain N windows; 4: for i = [1:N] do; 5: dataset ← raw data”) selected during a second specific time period of charging, discharging, and resting processes (Obeid, page 353, Table 3: “sampling period (SPS)” reads on “time period of charging, discharging”. Table 3 discloses a different time for each battery),
wherein the second group of the plurality of the secondary batteries are selected as a first prediction targets (Obeid, Page 352, 4.2, Right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14). … we used transfer learning; i.e. after training the NN on the simulated data, we trained it an additional time on battery number 1 (training target). Then we used the trained NN to test the seven batteries mentioned above”. NOTE: batteries 2, 4, 5, 6, 8, 9, and 14 are each an individual “prediction target”; see Table 3);
comparing a second low-voltage determination prediction result of the second group of the plurality of the secondary batteries (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Steps 11–13: “11: Test model on simulated data; 12: Pre-process real data; 13: Test/use model on real data”) in which the first measurement data is applied to a first low-voltage prediction model of the first group of the plurality of the secondary batteries generated using the first training data (Algorithm 1, Steps 1–3) with a first low-voltage determination prediction result of the second group of the plurality of the secondary batteries based on the first measurement data (Algorithm 1, Steps 9–10: “9: Split each dataset j, j ∈ {1, 2, 3} into TR (training set) and TE (testing set); 10: Train model”; Algorithm 1, Step 13); and
inputting second training data of a third group of the plurality of the secondary batteries measured during a third specific time period of charging, discharging, and resting processes, wherein the third group of the plurality of the secondary batteries are selected as second training targets (Algorithm 1, Steps 1–4: “Algorithm 1: Supervised learning-based battery terminal voltage collapse detection methodology. 1: Obtain the simulated training data by solving (1)–(4); 2: Label the data based on SOC level; 3: Capture segments of data with overlapping windows: obtain N windows”. Step 4, “i = [1:N] do”, of Algorithm 1 reads on “third group”: the algorithm can be implemented on any ith battery where i = [1, N] over N different batteries, and the training data can be generated using any different third group of batteries under different charging and discharging conditions. This is an algorithm design choice. Obeid, Page 352, Right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14).” Each battery has a different time);
generating a second low-voltage prediction model (Obeid, Figure 6) of the third group of the plurality of the secondary batteries by performing machine learning on the transferred optimized low-voltage prediction model (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Steps 9–10: “9: Split each dataset j, j ∈ {1, 2, 3} into TR (training set) and TE (testing set); 10: Train model”);
inputting second measurement data of a fourth group of the plurality of the secondary batteries selected during a fourth specific period of charging, discharging, and resting processes (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Steps 4–7: input raw data, Step 4: i = [1:N]; Page 350, left col., bottom paragraph, “the raw voltage values after normalization are used as features”. NOTE: Step 4, “i = [1:N] do”, of Algorithm 1 reads on “fourth group”, where N can be any number of batteries; each battery group has a different time period, charge/discharge cycle, and rest process. See Table 3: each battery has different conditions and parameters),
wherein the fourth group of the plurality of the secondary batteries are selected as a second prediction targets (Obeid, Page 352, 4.2, Right col., bottom paragraph, “Table 3 gives a comprehensive summary of the performance of the NN with seven batteries (battery numbers 2, 4, 5, 6, 8, 9, 14). To adapt our classification model to the real scenarios, we used transfer learning; i.e. after training the NN on the simulated data, we trained it an additional time on battery number 1 (training target). Then we used the trained NN to test the seven batteries mentioned above”. NOTE: batteries 2, 4, 5, 6, 8, 9, and 14 are each an individual “prediction target”; battery 4 could be the fourth group and a second prediction target. See Table 3);
outputting a second low-voltage determination prediction result of the third group of the plurality of the secondary batteries (Obeid, Figure 6 (see below), Page 350, Right Col., Algorithm 1, Steps 11–13: “11: Test model on simulated data; 12: Pre-process real data; 13: Test/use model on real data”), in which the second measurement data is applied to the second low-voltage prediction model of the third group of the plurality of the secondary batteries (Obeid, Figure 6, Page 350, Right Col., Algorithm 1, Steps 9–10: “9: Split each dataset j, j ∈ {1, 2, 3} into TR (training set) and TE (testing set); 10: Train model”);
wherein process conditions of the first and second group of the plurality of the secondary batteries selected as the first training target and the first prediction target are different from process conditions of the third and fourth group of the plurality of the secondary batteries selected as the second training target and the second prediction target (Obeid, Page 350, left col., middle paragraph, “each battery may take a different route to battery failure. The overlapping windows that document the different routes to failure are then labelled as either coming from the safe regions or the failure regions, depending on the SOC information as described earlier. The extracted windows are then split into two groups: a training set (TR) and a testing set (TE). The TR constitutes 70% of all windows. We ensure that windows from the same battery/load combination do not appear simultaneously in training or testing sets. We extract data both from simulated battery and load models, and also from tests on real batteries. Table I explains the different constructed datasets”. NOTE: each battery's process condition is different; for example, see Obeid, Page 353, Table 3, and Page 352, 4.2, Right col., top paragraph, “Sixteen different batteries were discharged through varying loads, and their terminal voltage was observed throughout the process. In this procedure, the batteries were drained at 16 different current levels and current profiles”).
Obeid teaches an F1-score for the precision of the prediction; however, Obeid is silent on:
finding an optimal value of a weighting factor k that maximizes a performance of the first low-voltage prediction model to verify and optimize the first low-voltage prediction model;
transferring the optimized first low voltage prediction model and the optimal value of the weighting factor k;
However, Tan teaches finding an optimal value of a weighting factor k that maximizes a performance of the first low-voltage prediction model of the first group of the plurality of the secondary batteries to verify and optimize the low-voltage prediction model of the first group of the plurality of the secondary batteries (Tan, Figure 5, page 8723, abstract: “we select the task with the highest FES score to obtain the base model with superior generalization performance”. A high feature expression scoring (FES) reads on “optimal value of a weighting factor k”);
receiving the optimized low voltage prediction model of the first group of the plurality of the secondary batteries and the optimal value of the weighting factor k (Tan, Figure 5, see below, block “training base model” with the highest FES score, and transfer learning).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify Obeid's transfer-learning method for predicting an optimized model to incorporate Tan's transfer-learning machine learning method with the feature expression scoring (FES) as taught by Tan, to obtain an accurately trained model and generate output results with optimal precision (Tan, conclusion). It would have been obvious to a person of ordinary skill to include the well-known transfer-learning model optimization along with the other machine learning network in order to yield the predictable result of generating accurate battery performance predictions, yet with higher accuracy (KSR).
Regarding Claim 18, combination of Obeid and Tan teaches the apparatus of claim 1,
Obeid further teaches a server including the apparatus for predicting the low-voltage failure of the secondary battery of claim 1 (Obeid, page 352, left col., bottom paragraph, “While actual runtime operation of the proposed algorithm may happen on a BMS, training can be done on any computer”. NOTE: a remote computer or server can be used).
Regarding Claim 19, combination of Obeid and Tan teaches the apparatus of claim 1,
Obeid further teaches a computing device including the apparatus for predicting the low-voltage failure of the secondary battery of claim 1 (Obeid, page 352, left col., bottom paragraph, “While actual runtime operation of the proposed algorithm may happen on a BMS, training can be done on any computer”).
Conclusion
Citation of Pertinent Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Shu et al., “Stage of Charge Estimation of Lithium-Ion Battery Packs Based on Improved Cubature Kalman Filter with Long Short-Term Memory Model”, IEEE Transactions on Transportation Electrification, print publication November 30, 2020.
Abstract: “lithium-ion battery packs remain challenging due to inconsistencies among battery cells. To achieve precise SOC estimation of battery packs, first, a long short-term memory (LSTM) recurrent neural network (RNN)-based model is constructed to characterize the battery electrical performance, and a rolling learning method is proposed to update the model parameters for improving the model accuracy. Then, an improved square root-cubature Kalman filter (SRCKF) is designed together with the multi-innovation technique to estimate the battery cell's SOC. Next, to cope with inconsistencies among battery cells, the SOC estimation values from the maximum and minimum cells are combined with a smoothing method to estimate the pack SOC. The robustness and accuracy of the proposed battery model and the cell SOC estimation method are verified by exerting the experimental validation under time-varying temperature conditions. Finally, real operation data are collected from an electric-scooter (ES) monitoring platform to further validate the generalization of the designed pack SOC estimation algorithm. The experimental results manifest that the SOC estimation error can be limited to 2% after convergence.”
L. Chen et al., “A Novel State-of-Charge Estimation Method of Lithium-Ion Batteries Combining the Grey Model and Genetic Algorithms,” IEEE Transactions on Power Electronics, vol. 33, no. 10, pp. 8797–8807, Oct. 2018.
Abstract: “Lithium-ion battery remaining useful life (RUL) prediction is critical for battery health management. Machine learning-based method is often used to predict battery RUL, an accurate prediction is dependent on a large amount of labeled data, which is difficult and expensive to obtain. This paper proposes a new method to train the data-model of battery RUL prediction with constraint derived from prior physical knowledge. The constraint specifies the nonlinear function between the battery RUL and energy-throughput. Box-Cox transformation (BCT) is utilized to optimize the constraint, and transform the nonlinear function into a linear one. Then the physical knowledge function between the energy-throughput and RUL is constructed to generate new labeled data, which are used to train Artificial Neural Network (ANN) to achieve the label-free supervision battery RUL prediction data-model. The experimental results demonstrate that proposed method effectively reduces the labeled data under the premise of ensuring the accuracy of the prediction result.”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DILARA SULTANA whose telephone number is (571)272-3861. The examiner can normally be reached Mon-Fri, 9 AM-5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, EMAN ALKAFAWI, can be reached at (571) 272-4448. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DILARA SULTANA/Examiner, Art Unit 2858
/EMAN A ALKAFAWI/Supervisory Patent Examiner, Art Unit 2858 1/28/2026