DETAILED ACTION
This FINAL OFFICE action is responsive to the claims filed on October 30, 2025. Claims 1-2, 4-5, 7-9, 11-22, and 24-25 are under examination.
Claims 1-2, 4-5, 7-9, 11-22, and 24-25 are rejected under 35 USC 101.
Claims 9, 11-14, and 18-20 are rejected under 35 USC 103 over Devine and Yook.
Claims 1-2, 4-5, 7-8, 15-17, 21-22, and 24-25 are rejected under 35 USC 103 over Devine, Yook, and (Webb or Magiera).
Response to Amendments/Arguments
NOTE: The responses to amendments/arguments from the prior two actions are presented again below for reference. The Applicant did not substantively amend the claims, so many of the positions taken in the prior actions are maintained. This includes recommendations for amendment to move the claims toward allowance that the Applicant has elected to forgo.
35 USC 101 Rejections: The arguments presented by the Applicant will be addressed in the order presented in the response.
A. Alleged Unconventional Acceleration of Multiprocessor – All of the arguments reproduced below still apply to the claims. Additionally, in direct response to the assertions of the Applicant’s most recent response, there is no link in the claim between the recited suppression and the estimation by the machine learning model. Also, suppressing transmission of data in a computer is merely not transmitting data. The processor takes time to process data, so it is not always transmitting data, which itself suppresses data transfer. The Applicant has failed to claim the inventive advantage, and the means by which to accomplish it, as compared with basic multiprocessor processing, which is a LONGSTANDING PRACTICE and is well-understood, routine, and conventional (WURC) based on the evidence provided below. The claim also fails to represent any improvement in computing because, as drafted, it recites nothing more than generic multiprocessor processing. This is not unconventional. The claim makes the processor conventionally fast by using parallel processing with multiprocessors, which, under the broadest reasonable interpretation (BRI), is just standard multiprocessor processing.
B. Practical Application is Significantly More Than Abstraction – The Applicant states that the mere recitation of generic computing components, without any specific improvements thereto, represents an acceleration of those components. As demonstrated, under MPEP 2106.05(f), an Applicant must do more than rely on the abstract idea itself for any inventive concept. Any improvement must be based on the additional limitations, not on the abstract idea alone. If the alleged improvement is attributable exclusively to the abstract idea, it is not an inventive concept; it is an abstract idea that provides nothing more than the abstract idea itself.
C. The Claims Allegedly Recite Suppression And It Allegedly Means More Than Withholding Processed Data From Transfer: It does not. The BRI of suppression in the context of a computer is merely allowing time to transpire without transmitting processed data. The Applicant was invited to identify a specific mechanism expressly withholding data already processed, but the Applicant has failed to do so. The Applicant would thereby attempt to monopolize all processes in which data is exchanged between multiprocessor cores during parallel processing for FEM or other mesh systems. Further, the Applicant asserts that the term “suppression” is mischaracterized, but the Applicant has not forwarded an alternative definition, demonstrating not only that it is the Applicant who has failed to make a prima facie case, but also that the Applicant is unclear as to what the definition of suppression is. Should a definition be forthcoming from the specification, the Applicant is invited to include it in the claim to limit the scope of suppression to exclude the mere withholding of the transfer of some processed data (e.g., determined intermediates that are not final quantities ready for transfer). Prima facie ineligibility is established. The Applicant has failed to make a prima facie showing that the Applicant’s position on the term “suppression” has any foundation.
D. Claim 9 – The Applicant relies on the assertions made with respect to claim 1 to support the alleged eligibility of claim 9, and that position is rebutted by the same reasoning applied in this Office Action to claim 1, mutatis mutandis. Extrapolating is a LONGSTANDING PRACTICE and is WURC, as demonstrated in the body of the rejections below.
35 USC 103 Rejections: The arguments and amendments presented by the Applicant have been considered but are not persuasive. First, the Applicant’s amendment does more than merely incorporate claim 6. Claim 6 recited, “wherein said estimating current flux data comprises using machine learning.” However, this does not mean that the recited “using machine learning” qualifies the manner in which the estimating step estimates current flux data. The reason this awkward wording is not typically used for dependent claims is that it is susceptible to the following reasonable interpretation when incorporating claim 6 into claim 1: “a) for each time step of the at least the portion of the time steps and each element of the at least the portion of the plurality of elements, estimating current flux data from past flux data obtained from a state of a neighboring element of the plurality of elements in a time step immediately preceding said time step, wherein the estimating current flux data further comprises using a machine learning model [for any purpose that is an element of the estimation].” The recited machine learning model can be used for something else as an additional step; that is the breadth of the term “comprising.” Accordingly, the amendment presented in the claim does not merely incorporate the features of claim 6, which, in its broadest reasonable sense, generically uses the machine learning model in the course of estimating current flux, but goes beyond them to use the machine learning model to estimate the current flux data from the past flux data. The current rejections are maintained; however, should it be found that the rejections are deficient in any way, the Applicant’s current amendment has not been addressed, so the Office Action can introduce new art while still maintaining finality. Accordingly, a new alternative rejection has been presented, should it be necessary.
The arguments and amendments presented by the Applicant are not persuasive. The rejections are maintained. Also, because the amendment of claims 1 and 21 did not merely incorporate the features of claims 6 and 26, but recites more specific matter, as described before, an additional/alternative rejection is added over Devine, Yook, and Magiera.
The following are the responses to the Applicant’s arguments, in the order presented in the Applicant’s response:
Webb Is Allegedly Mischaracterized For Allegedly Merged Dependent Claim: As discussed, the claim amendment did not merely incorporate the features of claim 6. This is explained in the body of the rejection. The response states, “Here, “cause estimate current flux data comprises cause using a machine learning model. (Webb mischaracterizes Webb that lacks flux and lacks learned flux estimation.” This statement is incoherent and does not appear to have a cogent purpose. It appears the Applicant has continued to argue that Webb does not teach flux. As asserted in the prior rejections, the rejection does not rely on Webb to teach flux. That said, the Applicant’s statement is insufficiently clear to discern the Applicant’s point. The Applicant’s assertions amount to nothing more than attacking the references individually.
Webb Is Allegedly Technologically Incompatible With New Reference Devine: The Applicant states that the use of Webb to teach processing of data to be exchanged, such as flux (as taught in Devine), using a machine learning model as an obvious replacement for traditional extrapolation, is incompatible. Again, Webb is not being used to teach flux. Whether the type of data processed in Webb differs and is processed differently from the flux in Devine is immaterial, because Webb is not being used to teach the processing of flux data. To this end, none of the Applicant’s arguments about compatibility are relevant. The Applicant concludes with the assertion that a proposed modification cannot render the prior art inoperative and that the Webb reference allegedly frustrates Devine’s purpose. However, the simple substitution of a neural network that mimics extrapolation for a pure extrapolation method is proper and correctly supported. No purpose is frustrated. The Applicant’s assertions amount to nothing more than attacking the references individually.
The Yook Reference Is Allegedly Incompatible With Devine: The Applicant states that “Yook requires all nodes have all of the data, and new reference Devine instead requires no node ha all of data.” The Applicant again misses the point. Yook is not being relied on to determine what data is exchanged or available; this is taught by Devine independently of Yook. Yook is merely used to teach that some of the data is estimated rather than transmitted, thus teaching suppression of data transfer. The Applicant’s arguments amount to nothing more than attacking the references individually.
For Independent Claim 9: The Applicant incorrectly reads the claim as limiting the distribution of steps to one type of processing or another. However, as drafted, the time steps are not limited to one or two steps. For example, something that includes two steps, from a patent perspective, is also broad enough to teach three steps. Contrary to the Applicant’s assertion, the claim recites alternating between one or more time steps for estimation/processing and one or more time steps for data transfer. This would apply to any type of processing in which multiple processors process mesh data: some of the time is spent processing, and some is spent transmitting the processed data in a final form made for transmission. Contrary to the Applicant’s assertion, claim 9, as drafted, is not limited to alternating in any particular pattern. In its broadest reasonable sense, claim 9 merely recites processing some of the time and transmitting the processed data at other times. Also, the rejection does not rely on Devine to teach the suppression of data, but on Yook. Again, this amounts to the Applicant attacking each reference individually rather than addressing the motivated combination of references presented in the rejection.
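For illustration only, the breadth attributed to claim 9 above can be sketched as follows. The schedule, function names, and linear-extrapolation rule in this sketch are hypothetical constructions for purposes of illustrating the broadest reasonable interpretation; they are not drawn from the Applicant’s disclosure or from any cited reference.

```python
# Hypothetical sketch of claim 9 under BRI: some time steps estimate
# the neighbor's flux locally (no data transfer), while other time
# steps receive the neighbor's flux over inter-core communication.
def receive_from_neighbor(step):
    # Stand-in for inter-core communication of the neighbor's current flux.
    return 1.0 + 0.2 * step

def run(num_steps, exchange_period=3):
    history = [1.0, 1.2]  # past flux values (illustrative numbers only)
    for step in range(num_steps):
        if step % exchange_period == 0:
            # Data-transfer time step: the flux is communicated.
            flux = receive_from_neighbor(step)
        else:
            # Estimation time step: linear extrapolation from past flux,
            # so the transfer of the neighbor's data is suppressed.
            flux = history[-1] + (history[-1] - history[-2])
        history.append(flux)
    return history
```

Any schedule mixing the two branches (periodic or not) falls within the alternation described above, which is the point of the breadth analysis.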
Conclusion: For at least these reasons, the rejections are maintained.
Response to Amendment/Arguments From Prior Non-Final Action After RCE
NOTE: The responses to amendments from the prior two actions are presented again below for reference. The Applicant did not substantively amend the claims, so many of the positions taken in the prior action are maintained. This includes recommendations for amendment to move the claims toward allowance.
Claim Objections: The Applicant’s arguments and amendments have been considered and are persuasive. The claim objections are withdrawn.
35 USC 112 Rejections: The Applicant’s arguments and amendments have been considered and are persuasive. The claim rejections are withdrawn.
35 USC 101 Rejections: The arguments presented by the Applicant will be addressed in the order presented in the response.
A. Unconventional Acceleration of Multiprocessor – The Applicant claims that alternating between estimating edge values and communicating edge values to reduce communication overhead is a benefit of the claims. However, the claims are recited at such a high level that they do not reflect the Applicant’s interpretation. The independent claims, when construed under the broadest reasonable interpretation cover any multiprocessor system that is processing a mesh, with a qualifier that some flux data is calculated based on past flux data. Claim 1:
A method implemented on a computer system comprising:
executing a model of a physical system including a plurality of elements for a plurality of time steps such that, for at least a portion of the plurality of time steps, exchange of data between at least a portion of the plurality of elements is suppressed, wherein said executing comprises: (BRI: A model of something is executed over elements over a period of time on a computer. Some of that time, there is no communication. Suppression means nothing other than the system has not received a command to transfer data, which happens in any processing scenario at some point. This would be common in a generic multiprocessor system because the data is first processed and then transmitted.)
[…] for each time step of the at least the portion of the time steps and each element of the at least the portion of the plurality of elements, estimating current flux data from past flux data obtained from a state of a neighboring element of the plurality of elements in a time step immediately preceding said time step; and (BRI: For some of the time, take data from a neighboring element from past time steps, which is done between neighboring elements in any distributed FEM setup. Flux is estimated based on past flux data obtained from a state of a neighboring element.)
for each time step of a second portion of the time steps and each element of the at least the portion of the plurality of elements, receiving by a first processing core of a multicore processor, from a second processing core of the multicore processor, current flux data of the neighboring element of the plurality of elements. (BRI: For some of the time that isn’t the time in a), communicate data, which is done between processing elements in any distributed FEM setup.)
As it stands, the claim largely represents any multiprocessor that processes flux data using more than one core. This is well-understood, conventional, and routine.
For example, the 1995 Devine reference, “PARALLEL PARTITIONING STRATEGIES FOR THE ADAPTIVE SOLUTION OF CONSERVATION LAWS”, of record, states,
“The discontinuous Galerkin method is well suited to parallelization on massively parallel computers. The computational stencil involves only nearest-neighbor communication regardless of the degree of the piecewise polynomial approximation and the spatial dimension. Additional storage is needed for only one row of ‘ghost’ elements along each edge of a processor's subdomain. Thus, the size of the problem scales easily with the number of processors. Indeed, for two-dimensional problems on rectangular domains with periodic boundary conditions, scaled parallel efficiencies in excess of 97% are achieved [11].” (Devine Page 3, First Paragraph)
“Therefore, the flux may be calculated by computing each component separately, exchanging ff and ff between elements, and summing. This splitting approximately halves the computational and parallel communications effort relative to other flux evaluation schemes.” (Devine Page 5, Last Paragraph)
“The adaptive p-refinement method produced a solution with global error 6.32205 x 10-2 , comparable to the fixed-order solution. With balancing, the maximum computation time (not including communication or balancing time) was reduced by 49.8% (see Table 4.1). The irregular subdomain boundaries created by the tiling algorithm increased the average communication time by 2.5%. Despite the extra communication time and the load balancing time, however, we see a 36.3% improvement in the total execution time.” (Devine Page 13, Last Paragraph – Page 15, First Paragraph)
“In three dimensions, we have demonstrated the effectiveness of a tree-based mesh partitioning algorithm for reducing parallel communication costs. This algorithm performs almost as well as recursive spectral bisection, but requires much less work to compute a partitioning.”
As illustrated in the mesh scheme of Devine, it is contemplated that, some of the time, there is communication of fluxes; at other times, there are exclusively local flux calculations based on neighboring cells.
The only claim 1 element not taught by Devine is “estimating current flux data from past flux data obtained from a state of a neighboring element of the plurality of elements in a time step immediately preceding said time step.” This is an operation practically performable in the mind or with the aid of pen, paper, and/or a calculator; it is a mental process, an abstract idea. Because the only feature that could potentially be unconventional is an element of the abstract idea, it is not an additional limitation that can confer eligibility. Also, an example of this can be found in the 2000 Yook reference, “Trading Computation for Bandwidth Reducing Communication in Distributed Control Systems Using State Estimators,” made of record (refer to the art rejection presented below).
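To illustrate why the estimation step is characterized as practically performable in the mind or with pen and paper, a first-order extrapolation of current flux from past values is a single arithmetic operation. The function name and numbers below are hypothetical and serve only to show the entire calculation.

```python
# Hypothetical illustration: first-order (linear) extrapolation of the
# current flux value from the two most recent past values. The whole
# operation is one subtraction and one addition, performable by hand.
def extrapolate_flux(prev_flux, older_flux):
    # Continue the most recent trend forward one time step.
    return prev_flux + (prev_flux - older_flux)

# For example, past fluxes of 3.0 and then 3.5 extrapolate to 4.0.
```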
B. Practical Application is Significantly More Than Abstraction – The Applicant states that the claimed system is unconventionally fast because of the parallel efficiency. Again, under BRI, the claim describes nothing other than a generic multiprocessor system processing flux data. The Applicant is correct that the generic concepts of distributed computing and parallel processing confer efficiency benefits, as they have for three decades since the first multicore processors (see the Wikipedia reference for Multi-core Processor, of record). The claim, as written, is so broad that it encompasses any multiprocessor ever used to process flux. The Applicant can attempt to claim the invention of multi-core processing, but the attempt will fall flat. The current claims are in the prior art (not inventive) and are well-understood, routine, and conventional. Until the claims are amended to specifically confine the metes and bounds of the claim to the interpretation asserted by the Applicant, the claims will fail to provide any additional limitations that integrate the abstract idea into a practical application at Step 2A, Prong 2 and Step 2B. The addition of extrapolation from claim 9 does nothing to cure this. Estimation by extrapolation is in the prior art and is also conventional. Further, it is a mathematical concept, an element of the abstract idea, so it cannot provide any additional limitations that would confer eligibility at Step 2A, Prong 2 and Step 2B.
C. ALL CLAIMS ARE DIRECTED TO ELIGIBLE SUBJECT MATTER? – The claims are not directed to eligible subject matter. The Applicant states that the claims:
“accelerate internal operation of a multiprocessor by unconventional avoidance of transmission of flux data. In that way, the claimed multiprocessor is technologically improved, which is eligible subject matter per se. MPEP 2106.04(d) summarizes, ‘in a growing body of decisions, the Federal Circuit has distinguished… claims that improve the functioning of a computer… Enfish, LLC v. Microsoft Corp. 822 F.3d 1327, 118 USPQ2d 1684 (Fed. Cir. 2016).’”
The Applicant correctly asserts that conventional multiprocessor processing is an efficient way to process, but the Applicant misconstrues the scope of the independent claims. The claimed subject matter is not eligible per se, at least because the claim is insufficiently limited to confine its scope to the interpretation the Applicant asserts. Until the Applicant narrows the claim to reflect that interpretation, it is moot to speculate whether the additional limitations would sufficiently integrate the claim into a practical application at Step 2A, Prong 2, or whether the rest of the claim’s terms would combine with its additional limitations to provide significantly more than the abstract idea, conferring an inventive concept at Step 2B. The Applicant is directed to the comments made in the Response to Amendments/Arguments section from the last action for suggestions to improve the eligibility of the claims (reproduced below). As it stands, the claims are not limited to the interpretation forwarded by the Applicant, so the rejections are maintained. Claims 1-18 and 20-21 are ineligible.
Response to Amendment/Arguments From Prior Final Action
35 USC 101 Rejections: The Applicant attempts to demonstrate that the current claims are intended to cover a system in which, in a distributed computing system running a simulation over multiple memory partitions controlled by different processors, data is only exchanged intermittently and is otherwise estimated in order to expedite cell flux calculation on border cells of a partition. This is not conveyed by the current claims for a number of reasons. As the claim is written now, it could cover any basic multiprocessor system with shared memory where communication between processors to determine edge fluxes is conducted some of the time and not at other times. The boundaries are not described as physical memory boundaries, so they could be virtualized with shared memory and no real adjacency in physical memory. As currently drafted, the claim includes mere communication between two cores in a standard multicore processor some of the time and not at other times. Consider a situation where a processor ceases communication of data loaded from shared memory after receiving an appropriate acknowledgement; this would constitute the cessation/suppression of the communication (which would otherwise continue without the acknowledgement). Computer systems also do not necessarily conduct all calculations on transmitted data while the communications are being conducted. This would mean that the communication would be suppressed and then the data would be processed to estimate this or that parameter; in this case, the parameter is flux in a mesh model. It would be easy to provide an example of this generic system in the prior art cited (see Lakhotia et al.), but this and the prior Office Action mapped prior art elements based on an interpretation of both what the claims actually say AND what the Applicant intended, not just what the Applicant has actually conveyed in the claim language, in order to advance prosecution.
Because the language of the independent claims is so broad and fails to connect essential elements that would appropriately convey what the Applicant describes as the inventive concept, there is no real integrated application or even invention (as shown, the claim language merely covers generic multiprocessing for flux calculation in a mesh). To this end, there are things the Applicant can do to increase the likelihood of a finding of eligibility of independent claims 1 and 21. Claim 9 could also benefit from inclusion of these features in parallel with claims 1 and 21. NOTE: The examiner has not checked for support in the Applicant’s specification for these types of amendments, so the Applicant will be responsible for finding the support.
Connect the suppression of data exchange recited in the executing step with step a) and contrast it with step b).
Maintain antecedence for the neighboring element so that it is clear that it is the same neighboring element.
Establish mutually exclusive control over one memory partition for one core relative to another core over which the communication is conducted in step b), such that the communication must be via processors that permanently control different memory (rather than shared memory). Specifying that the partitions are different physical partitions could also help.
Rather than using generic association terms, such as “corresponding to,” the Applicant should explicitly state what that means in the context of the inventive concept the Applicant has attempted to assert.
This is a non-exhaustive list with no guarantee that remedying each of the issues will confer eligibility, but it will take the Applicant in the right direction to either integrate the recited abstract idea into a practical application at Step 2A, Prong 2, or have the additional limitations combine with the abstract idea to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 4-5, 7-9, 11-22, and 24-25 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea without significantly more.
Independent Claims
Claim 21 (Statutory Category – Machine)
Step 2A – Prong 1: Judicial Exception Recited?
Yes, the claims recite a mental process and a mathematical operation, which are abstract ideas.
MPEP 2106.04(a)(2)(III): “Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions. […] The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation.”
MPEP 2106.04(a)(2)(I): “When determining whether a claim recites a mathematical concept (i.e., mathematical relationships, mathematical formulas or equations, and mathematical calculations), examiners should consider whether the claim recites a mathematical concept or merely limitations that are based on or involve a mathematical concept […] a mathematical concept need not be expressed in mathematical symbols, because "[w]ords used in a claim operating on data to solve a problem can serve the same purpose as a formula." In re Grams, 888 F.2d 835, 837 and n.1, 12 USPQ2d 1824, 1826 and n.1 (Fed. Cir. 1989). See, e.g., SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1163, 127 USPQ2d 1597, 1599 (Fed. Cir. 2018) (holding that claims to a ‘‘series of mathematical calculations based on selected information’’ are directed to abstract ideas); Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1350, 111 USPQ2d 1717, 1721 (Fed. Cir. 2014) (holding that claims to a ‘‘process of organizing information through mathematical correlations’’ are directed to an abstract idea).”
MPEP 2106.04(a)(2)(I)(A): “Mathematical Relationships. A mathematical relationship is a relationship between variables or numbers. A mathematical relationship may be expressed in words or using mathematical symbols.”
Claim 21 recites (claim features in italics, paragraph references are to the Applicant’s specification):
execute a model of a physical system including a plurality of elements for a plurality of time steps such that, for at least a portion of the plurality of time steps, exchange of data between at least a portion of the plurality of elements is suppressed, wherein said executing comprises: […] a) for each time step of the at least the portion of the time steps and each element of the at least the portion of the plurality of elements, estimate current flux data corresponding to past flux data obtained from a state of a neighboring element of the plurality of elements in a time step immediately preceding said time step; and […].
(Note On Interpretation - The execution of a model of a physical system with an exchange of data between elements is a common operation for finite element analysis, Navier-Stokes equation systems, Maxwell’s equation systems, and acoustic wave equation systems. (Background [0001]) In these models, data is shared to affect neighboring elements, but few or no remote ones. As written, the broadest reasonable interpretation includes instances in which data is never shared between remote elements, which is a standard execution in the aforementioned systems.)
Regarding the question of eligibility, the execution of a model of a physical system is an evaluation, which is a mental process, and represents mathematical operations on mathematical relationships, which are mathematical concepts. ([0028]-[0033] PDEs are solved to update state variables. Calculations are conducted to determine whether to calculate based on transmitted information or based on mathematical extrapolation). Mental processes and mathematical concepts are abstract ideas.
Claim 21 recites an abstract idea.
Step 2A – Prong 2: Integrated into a Practical Application?
No.
MPEP 2106.04(d): “[A]fter determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. Whether or not a claim integrates a judicial exception into a practical application is evaluated using the considerations set forth in subsection I below, in accordance with the procedure described below in subsection II.”
MPEP 2106.05(f) Mere Instructions To Apply An Exception: “Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to […] more than a recitation of the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer, examiners should explain why they do not meaningfully limit the claim in an eligibility rejection. For example, an examiner could explain that implementing an abstract idea on a generic computer, does not integrate the abstract idea into a practical application in Step 2A Prong Two or add significantly more in Step 2B.”
The additional elements:
one or more processing devices;
implemented on a computer system
a model of a physical system
one or more memory devices operably coupled to the one or more processing devices, the one or more memory devices storing executable code that, when executed by the one or more processing devices, causes the one or more processing devices to:
processing core of a multiprocessor
by a machine learning model
The execution of code on generic processor and memory devices is a recitation of a general purpose computer with no specific configurations to execute the claimed method. This applies equally to the recited machine learning model, for which no improvements are recited in the claim. As such, under MPEP 2106.05(f), the computer implementation merely implements the recited abstract idea on a generic computer and does not integrate the abstract idea into a practical application at Step 2A, Prong Two.
b) for each time step of a second portion of the time steps and each element of the at least the portion of the plurality of elements, receive by a first processing core of a multiprocessor, from a second processing core of the multiprocessor, current flux data of a neighboring element of the plurality of elements.
Receiving computer data from a core of a processor is mere information gathering similar to the MPEP 2106.05(g) examples: “ iv. Obtaining information about transactions using the Internet to verify credit card transactions, CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011)” “v. Consulting and updating an activity log, Ultramercial, 772 F.3d at 715, 112 USPQ2d at 1754;” “ii. Taking food orders from only table-based customers or drive-through customers, Ameranth, 842 F.3d at 1241-43, 120 USPQ2d at 1854-55;” “iii. Selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display.” Because this is mere data gathering, it is insignificant extra-solution activity and, under MPEP 2106.05(g), fails to integrate the abstract idea into a practical application at Step 2A, Prong 2.
Claim 21 does not integrate the abstract idea into a practical application and is directed to the abstract idea.
Step 2B: Claim provides an Inventive Concept?
No.
MPEP 2106.05(I) “An inventive concept "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself. […] Instead, an "inventive concept" is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception, and is sufficient to ensure that the claim, as a whole, amounts to significantly more than the judicial exception itself.”
MPEP 2106.05(f) Mere Instructions To Apply An Exception: “[I]mplementing an abstract idea on a generic computer, does not integrate the abstract idea into a practical application in Step 2A Prong Two or add significantly more in Step 2B.”
The additional elements:
one or more processing devices;
implemented on a computer system
a model of a physical system
one or more memory devices operably coupled to the one or more processing devices, the one or more memory devices storing executable code that, when executed by the one or more processing devices, causes the one or more processing devices to:
processing core of a multiprocessor
by a machine learning model
The execution of code on generic processor and memory devices is a recitation of a general purpose computer with no specific configurations to execute the claimed method. As such, under MPEP 2106.05(f), the computer implementation implements the recited abstract idea on a generic computer, and does not combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.
b) for each time step of a second portion of the time steps and each element of the at least the portion of the plurality of elements, receive by a first processing core of a multiprocessor, from a second processing core of the multiprocessor, current flux data of a neighboring element of the plurality of elements.
This step, as discussed, is insignificant extra-solution activity under MPEP 2106.05(g). This step is also well-understood, routine, and conventional (WURC) activity similar to the MPEP 2106.05(d) examples: “i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)” “iii. Electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining "shadow accounts"); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log);” “iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93;” “v. Electronically scanning or extracting data from a physical document, Content Extraction and Transmission, LLC v. Wells Fargo Bank, 776 F.3d 1343, 1348, 113 USPQ2d 1354, 1358 (Fed. Cir. 2014) (optical character recognition);” “vi. Arranging a hierarchy of groups, sorting information, eliminating less restrictive pricing information and determining the price, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1331, 115 USPQ2d 1681, 1699 (Fed. Cir. 2015).” Because this is WURC and insignificant extra-solution activity, under MPEP 2106.05(d) and 2106.05(g), it fails to combine with the other elements of the claim to provide significantly more that would confer an inventive concept at Step 2B.
Therefore, there are no additional limitations in claim 21 that furnish claim 21 with an inventive concept to ensure that claim 21, as a whole, amounts to significantly more than the bolded abstract idea.
Claim 21 is ineligible.
Claim 1 (Statutory Category – Process)
Claim 1 recites the method performed by the machine in claim 21 and is ineligible for at least the same reasons as claim 21. Claim 1 is ineligible.
Claim 9 (Statutory Category – Process)
Step 2A – Prong 1: Judicial Exception Recited?
Yes, the claims recite a mental process and a mathematical concept, which are abstract ideas.
MPEP 2106.04(a)(2)(III): “Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions. […] The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation.”
MPEP 2106.04(a)(2)(I): “When determining whether a claim recites a mathematical concept (i.e., mathematical relationships, mathematical formulas or equations, and mathematical calculations), examiners should consider whether the claim recites a mathematical concept or merely limitations that are based on or involve a mathematical concept […] a mathematical concept need not be expressed in mathematical symbols, because "[w]ords used in a claim operating on data to solve a problem can serve the same purpose as a formula." In re Grams, 888 F.2d 835, 837 and n.1, 12 USPQ2d 1824, 1826 and n.1 (Fed. Cir. 1989). See, e.g., SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1163, 127 USPQ2d 1597, 1599 (Fed. Cir. 2018) (holding that claims to a ‘‘series of mathematical calculations based on selected information’’ are directed to abstract ideas); Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1350, 111 USPQ2d 1717, 1721 (Fed. Cir. 2014) (holding that claims to a ‘‘process of organizing information through mathematical correlations’’ are directed to an abstract idea).”
MPEP 2106.04(a)(2)(I)(A): “Mathematical Relationships. A mathematical relationship is a relationship between variables or numbers. A mathematical relationship may be expressed in words or using mathematical symbols.”
Claim 9 recites (claim features in italics, paragraph references are to the Applicant’s specification):
A method, comprising:
(evaluation, mathematical concept: [0028] “[T]he method 300 is preceded by initializing state variables of elements of the model for all partitions. In particular, the state variables of one or more elements may be set to initial conditions of the physical system being modeled. The state variables of elements of the boundary conditions where it is part of the physical system being modeled.”)
[…]
repeatedly performing (a) for each time step of one or two first time steps:
[…]
processing […] the one or more state variables of the first element and the first flux data according to the model; and
(evaluation, mathematical concept: [0030] “Performing 302 local calculations includes calculating flux between non-edge elements of the local partition.” The model is evaluated as a set of mathematical relationships. Repetition of these steps does not change their nature.)
updating the one or more state variables according to the processing of the one or more state variables of the first element and the first flux data;
(evaluation, mathematical operation/relationship: [0030] “and updating the state variables of the non-edge elements using the flux and current values of the state variables. The manner in which the flux is calculated and the state variables are updated is according to the physical system being modeled and is performed using any modeling approach known in the art.”)
for a second time step subsequent to the one or two first time steps:
extrapolating […] from the first flux data from the one or two first time steps, second flux data for the first element;
(evaluation, mathematical operation/relationship: [0030] “Performing 302 local calculations includes calculating flux between non-edge elements of the local partition” Extrapolation, as described in [0031], is a mathematical operation and/or evaluation.)
processing […] the one or more state variables of the first element and the second flux data according to the model; and
(evaluation, mathematical operation/relationship: [0030] “and updating the state variables of the non-edge elements using the flux and current values of the state variables. The manner in which the flux is calculated and the state variables are updated is according to the physical system being modeled and is performed using any modeling approach known in the art.”)
updating […] the one or more state variables of the first element according to the processing of the one or more state variables of the first element and the second flux data.
(evaluation, mathematical operation/relationship: [0030] “and updating the state variables of the non-edge elements using the flux and current values of the state variables. The manner in which the flux is calculated and the state variables are updated is according to the physical system being modeled and is performed using any modeling approach known in the art.” Any approach used in the art could include extrapolation as described in [0031].)
The recited method steps of claim 9 are elements of an evaluation, a mental process, which can be practically performed in the mind of a person or with pen and paper. ([0030]-[0031]). Further, the method steps of claim 9, as described in the claim and specification, include and/or are expressed as mathematical calculations or mathematical relationships, which are mathematical concepts. ([0030]-[0031]). By reciting mental processes and mathematical concepts, the method steps recite an abstract idea.
Claim 9 recites an abstract idea.
Step 2A – Prong 2: Integrated into a Practical Application?
No.
MPEP 2106.04(d): “[A]fter determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. Whether or not a claim integrates a judicial exception into a practical application is evaluated using the considerations set forth in subsection I below, in accordance with the procedure described below in subsection II.”
MPEP 2106.05(f) Mere Instructions To Apply An Exception: “Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to […] more than a recitation of the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer. […] [E]xaminers should explain why they do not meaningfully limit the claim in an eligibility rejection. For example, an examiner could explain that implementing an abstract idea on a generic computer, does not integrate the abstract idea into a practical application in Step 2A Prong Two or add significantly more in Step 2B.”
MPEP 2106.05(g): “Another consideration when determining whether a claim integrates the judicial exception into a practical application in Step 2A Prong Two or recites significantly more in Step 2B is whether the additional elements add more than insignificant extra-solution activity to the judicial exception. The term "extra-solution activity" can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim. Extra-solution activity includes both pre-solution and post-solution activity. An example of pre-solution activity is a step of gathering data for use in a claimed process, e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent.”
The additional elements (in italics):
by a first processing core of a multicore processor, from a second processing core of the multicore processor
The computer implementation is a recitation of a general-purpose computer with no specific configurations to execute the claimed method. As such, the computer implementation implements the recited abstract idea on a generic computer, and, under 2106.05(f), does not integrate the abstract idea into a practical application in Step 2A Prong Two.
providing […] a model of a physical system including a plurality of elements each having one or more state variables;
obtaining […] first flux data for a first element of the plurality of elements, the first flux data corresponding to the one or more state variables of one or more second elements of the plurality of elements;
The providing and obtaining steps merely gather existing information for evaluation. Mere data gathering is insignificant extra-solution activity under MPEP 2106.05(g). Under Mere Data Gathering, an analogous example is provided: “iv. Obtaining information about transactions using the Internet to verify credit card transactions, CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011).” Receiving data for evaluation does not meaningfully limit the invention, and the receiving of the data is necessary to the evaluations and mathematical operations of the claim. Under MPEP 2106.05(g), the providing and obtaining steps add nothing more than insignificant extra-solution activity, so they do not integrate the abstract idea into a practical application in Step 2A Prong Two.
Claim 9 does not integrate the abstract idea into a practical application and is directed to the abstract idea.
Step 2B: Claim provides an Inventive Concept?
No.
MPEP 2106.05(I) “An inventive concept "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself. […] Instead, an "inventive concept" is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception, and is sufficient to ensure that the claim, as a whole, amounts to significantly more than the judicial exception itself.”
MPEP 2106.05(f) Mere Instructions To Apply An Exception: “[I]mplementing an abstract idea on a generic computer, does not integrate the abstract idea into a practical application in Step 2A Prong Two or add significantly more in Step 2B.”
MPEP 2106.05(d)(II)(i): “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. […] i. Receiving or transmitting data over a network, e.g., using the Internet to gather data […] iv. Storing and retrieving information in memory”
MPEP 2106.05(g): “As explained by the Supreme Court, the addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional. Parker v. Flook, 437 U.S. 584, 588-89, 198 USPQ 193, 196 (1978).”
The additional limitations (in italics):
by a first processing core of a multicore processor, from a second processing core of the multicore processor
The computer implementation is a recitation of a general-purpose computer with no specific configurations to execute the claimed method. As such, the computer implementation implements the recited abstract idea on a generic computer, and, under 2106.05(f), fails to combine with other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.
providing […] a model of a physical system including a plurality of elements each having one or more state variables;
obtaining […] first flux data for a first element of the plurality of elements, the first flux data corresponding to the one or more state variables of one or more second elements of the plurality of elements;
The providing and obtaining steps amount to “receiving or transmitting data” and also “storing and retrieving information from memory,” so they are analogous to the examples cited in MPEP 2106.05(d)(II)(i) representing well-understood, routine, and conventional (WURC) functions. Also, as demonstrated above, the providing and obtaining steps are insignificant extra-solution activity. Because the providing and obtaining steps are WURC and insignificant extra-solution activity, under MPEP 2106.05(d) and 2106.05(g), they fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.
The additional limitations fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept, under MPEP 2106.05(f), MPEP 2106.05(d), and MPEP 2106.05(g), at Step 2B.
Claim 9 is ineligible.
Dependent Claims
Dependent claims 2-8, 10-20, and 22-26 are also ineligible for at least the following reasons:
Claims 2 and 22
Claim 2 (and similarly claim 22) recites:
wherein the plurality of elements is grouped into a plurality of partitions; and
wherein the at least the portion of the plurality of elements are on edges of the plurality of partitions.
The claim 2 limitations merely specify the elements on which the evaluations and mathematical operations of the abstract idea are performed, and, therefore, are elements of the abstract idea. Elements of the abstract idea do not provide additional limitations to integrate the abstract idea into a practical application or combine with the abstract idea to contribute significantly more than the abstract idea to render the combination an inventive concept. Claim 22 recites limitations similar to those of claim 2, but for generic CRM elements that do not confer eligibility under MPEP 2106.05(f), so claim 22 is treated similarly to claim 2 under this analysis.
Claims 2 and 22 are ineligible.
Claims 4 and 24
Claim 4 (and similarly claim 24) recites:
further comprising, for each time step of the at least the portion of the time steps and each element of the at least the portion of the plurality of elements:
estimating current flux data corresponding to past flux data obtained from states of a neighboring element of the plurality of elements in multiple time steps immediately preceding said each time step.
The estimating step is an evaluation, which is a mental process, and a mathematical operation, which is a mathematical concept. Mental processes and mathematical concepts are abstract ideas, so the limitations of claim 4 are elements of the abstract idea and do not provide additional limitations to integrate the abstract idea into a practical application or combine with the abstract idea to contribute significantly more than the abstract idea to render the combination an inventive concept. Claim 24 recites limitations similar to those of claim 4, but for generic CRM elements that do not confer eligibility under MPEP 2106.05(f), so claim 24 is treated similarly to claim 4 under this analysis.
Claims 4 and 24 are ineligible.
Claims 5 and 25
Claim 5 (and similarly claim 25) recites:
further comprising, for each time step of the at least the portion of the time steps and each element of the at least the portion of the plurality of elements:
estimating current flux data by extrapolating past flux data obtained from states of a neighboring element of the plurality of elements in multiple time steps immediately preceding said each time step.
The estimating step is an evaluation, which is a mental process, and a mathematical operation, which is a mathematical concept. Mental processes and mathematical concepts are abstract ideas, so the limitations of claim 5 are elements of the abstract idea and do not provide additional limitations to integrate the abstract idea into a practical application or combine with the abstract idea to contribute significantly more than the abstract idea to render the combination an inventive concept. Claim 25 recites limitations similar to those of claim 5, but for generic CRM elements that do not confer eligibility under MPEP 2106.05(f), so claim 25 is treated similarly to claim 5 under this analysis.
Claims 5 and 25 are ineligible.
Claims 6 and 26
Claim 6 recites:
wherein said estimating current flux data comprises using a machine learning model.
Claim 26 recites:
said cause estimate current flux data comprises cause using a machine learning model.
The estimating step is an evaluation, which is a mental process, and a mathematical operation, which is a mathematical concept. Mental processes and mathematical concepts are abstract ideas, so the limitations of claim 6 are elements of the abstract idea and do not provide additional limitations to integrate the abstract idea into a practical application or combine with the abstract idea to contribute significantly more than the abstract idea to render the combination an inventive concept. Claim 26 recites limitations similar to those of claim 6, but for generic CRM elements that do not confer eligibility under MPEP 2106.05(f), so claim 26 is treated similarly to claim 6 under this analysis.
Claims 6 and 26 are ineligible.
Claims 7 and 16
Claim 7 (and similarly claim 16) recites:
wherein the machine learning model is a neural network.
This merely recites a mathematical relationship (a neural network) used for the evaluations and mathematical operations of the abstract idea and is, therefore, an element of the abstract idea. Therefore, the limitations of claim 7 are elements of the abstract idea and do not provide additional limitations to integrate the abstract idea into a practical application or combine with the abstract idea to contribute significantly more than the abstract idea to render the combination an inventive concept. Claim 16 recites limitations similar to those of claim 7 and is treated similarly under this analysis.
Claims 7 and 16 are ineligible.
Claims 8 and 17
Claim 8 recites:
wherein the neural network is a deep neural network including one or more hidden layers.
Claim 17 recites:
wherein the neural network is a deep neural network including one or more hidden layers.
This merely recites a mathematical relationship (a deep neural network) used for the evaluations and mathematical operations of the abstract idea and is, therefore, an element of the abstract idea. Therefore, the limitations of claim 8 are elements of the abstract idea and do not provide additional limitations to integrate the abstract idea into a practical application or combine with the abstract idea to contribute significantly more than the abstract idea to render the combination an inventive concept. Claim 17 recites limitations similar to those of claim 8 and is treated similarly under this analysis.
Claims 8 and 17 are ineligible.
Claim 11
Claim 11 recites:
wherein calculating the second flux data comprises linearly extrapolating the second flux data from the first flux data.
The linearly extrapolating step, recited as an element of one of the calculating steps, is itself an evaluation, which is a mental process, and a mathematical operation, which is a mathematical concept ([0030]-[0031]). Mental processes and mathematical concepts are abstract ideas, so the limitations of claim 11 are elements of the abstract idea and do not provide additional limitations to integrate the abstract idea into a practical application or combine with the abstract idea to contribute significantly more than the abstract idea to render the combination an inventive concept.
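For illustration only (the notation below is the examiner's, not the claim's), a linear extrapolation of this kind from the flux values at the two immediately preceding time steps may be expressed as:

```latex
f_{n+1} \approx f_n + \left( f_n - f_{n-1} \right) = 2 f_n - f_{n-1}
```

where f_n denotes the flux data at time step n. Such an extrapolation amounts to a single subtraction and addition per value and can be practically performed in the mind or with pen and paper, consistent with the mental process and mathematical concept groupings discussed above.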
Claim 11 is ineligible.
Claim 12
Claim 12 recites:
wherein the one or two first time steps are two time steps.
The number of time steps merely represents the number of evaluations and mathematical operations that occur or are used for a further evaluation or mathematical operation, and is therefore an element of the abstract idea. Therefore, the limitations of claim 12 are elements of the abstract idea and do not provide additional limitations to integrate the abstract idea into a practical application or combine with the abstract idea to contribute significantly more than the abstract idea to render the combination an inventive concept.
Claim 12 is ineligible.
Claim 13
Claim 13 recites:
calculating the second flux data comprises calculating the second flux data based on the first flux data and the one or more state variables of the first element.
This merely qualifies the basis for calculation in the calculation step that is an element of the abstract idea, and is, therefore, an element of the abstract idea. Therefore, the limitations of claim 13 are elements of the abstract idea and do not provide additional limitations to integrate the abstract idea into a practical application or combine with the abstract idea to contribute significantly more than the abstract idea to render the combination an inventive concept.
Claim 13 is ineligible.
Claim 14
Claim 14 recites:
wherein the one or more state variables include first variables and second variables that are partial derivatives of the first variables.
Claim 14 merely specifies the variables that are evaluated and mathematically calculated by the abstract idea, and are, therefore, elements of the abstract idea. Therefore, the limitations of claim 14 are elements of the abstract idea and do not provide additional limitations to integrate the abstract idea into a practical application or combine with the abstract idea to contribute significantly more than the abstract idea to render the combination an inventive concept.
Claim 14 is ineligible.
Claim 15
Claim 15 recites:
wherein calculating the second flux data comprises processing the first flux data and the one or more state variables of the first element using a machine learning model.
Processing data using a machine learning model (a mathematical relationship) is an evaluation and a mathematical operation, and merges with the abstract idea that includes the calculating step that the processing step qualifies. Therefore, the limitations of claim 15 are elements of the abstract idea and do not provide additional limitations to integrate the abstract idea into a practical application or combine with the abstract idea to contribute significantly more than the abstract idea to render the combination an inventive concept.
Claim 15 is ineligible.
Claim 18
Claim 18 recites:
wherein the plurality of elements are divided into a plurality of partitions, each partition including a group of contiguous elements of the plurality of elements.
Under the broadest reasonable interpretation, this is merely parsing data into different groups (it need not involve any specific hardware allocation). This merely qualifies how the data is stored and is therefore insignificant extra-solution activity akin to the example under MPEP 2106.05(g): “An example of pre-solution activity is a step of gathering data for use in a claimed process, e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent.” Insignificant extra-solution activity does not integrate the abstract idea into a practical application under Step 2A, Prong 2.
Also, the features of claim 18 are analogous to the examples provided in MPEP 2106.05(d)(II)(i), “i. Receiving or transmitting data over a network, e.g., using the Internet to gather data […] iv. Storing and retrieving information in memory,” which are well-understood, routine, and conventional functions. Because claim 18 recites well-understood, routine, and conventional functions and insignificant extra-solution activity, the features of claim 18 do not combine with the abstract idea to provide significantly more to yield an inventive concept.
Claim 18 is ineligible.
Claim 19
Claim 19 recites:
wherein the first element is on an edge of a partition of the plurality of partitions.
Under the broadest reasonable interpretation, this is merely parsing data into different groups (it need not involve any specific hardware allocation) and specifying an element within the partitions. This merely qualifies how the data is stored and is therefore insignificant extra-solution activity akin to the example under MPEP 2106.05(g): “An example of pre-solution activity is a step of gathering data for use in a claimed process, e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent.” Insignificant extra-solution activity does not integrate the abstract idea into a practical application under Step 2A, Prong 2.
Also, the features of claim 19 are analogous to the examples provided in MPEP 2106.05(d)(II)(i), “i. Receiving or transmitting data over a network, e.g., using the Internet to gather data […] iv. Storing and retrieving information in memory,” which are well-understood, routine, and conventional functions. Because claim 19 recites well-understood, routine, and conventional functions and insignificant extra-solution activity, the features of claim 19 do not combine with the abstract idea to provide significantly more to yield an inventive concept.
Claim 19 is ineligible.
Claim 20
Claim 20 recites:
obtaining third flux data for a third element of the plurality of elements corresponding to the one or more state variables of one or more fourth elements of the plurality of elements, the third element and fourth element being in a same partition of the plurality of partitions and the third element not being an edge element of the same partition;
The obtaining step merely gathers data and is therefore insignificant extra-solution activity akin to the example under MPEP 2106.05(g): “An example of pre-solution activity is a step of gathering data for use in a claimed process, e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent.” Insignificant extra-solution activity does not integrate the abstract idea into a practical application under Step 2A, Prong 2.
Also, the obtaining step is analogous to the examples provided in MPEP 2106.05(d)(II)(i), “i. Receiving or transmitting data over a network, e.g., using the Internet to gather data […] iv. Storing and retrieving information in memory,” which are well-understood, routine, and conventional functions. Because the obtaining step recites well-understood, routine, and conventional functions and insignificant extra-solution activity, the obtaining step does not combine with the abstract idea to provide significantly more to yield an inventive concept.
processing the one or more state variables of the third element and the third flux data according to the model; and
updating the one or more state variables of the third element according to the processing of the one or more state variables of the third element and the third flux data.
The processing and updating steps are further evaluations and mathematical operations, which are elements of an abstract idea that merge with the abstract idea of the claims from which claim 20 depends. Therefore, the processing and updating steps are elements of the abstract idea and do not provide additional limitations to integrate the abstract idea into a practical application or combine with the abstract idea to contribute significantly more than the abstract idea to render the combination an inventive concept.
Claim 20 is ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The factual inquiries for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).
Claims 1-2, 4-5, 7-8, 15-17, 21-22, and 24-25: Devine, Yook, and (Webb or Magiera)
Claims 1-2, 4-5, 7-8, 15-17, 21-22, and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over NPL to Devine et al. (Devine) in view of NPL “Trading Computation for Bandwidth: Reducing Communication in Distributed Control Systems Using State Estimators” by Yook et al. (Yook), and further in view of NPL “Learning Representations that Support Extrapolation” by Webb et al. (Webb).
Should the Webb reference be found inadequate for any reason, Claims 1-2, 4-5, 7-8, 15-17, 21-22, and 24-25 are alternatively rejected under 35 U.S.C. 103 as being unpatentable over NPL to Devine et al. (Devine) in view of NPL “Trading Computation for Bandwidth: Reducing Communication in Distributed Control Systems Using State Estimators” by Yook et al. (Yook), and further in view of NPL “Constraint-aware neural networks for Riemann problems” by Magiera et al. (Magiera).
Claims 1 and 21
Regarding Claim 21, Devine teaches:
An apparatus comprising: one or more processing devices; one or more memory devices operably coupled to the one or more processing devices, the one or more memory devices storing executable code that, when executed by the one or more processing devices, causes the one or more processing devices to: (Devine Abstract “We describe and examine the performance of adaptive methods for solving hyperbolic systems of conservation laws on massively parallel computers.” Page 2, First Paragraph “When solving partial differential equations (PDEs) on MIMD computers, spatial data must be distributed across the processors' memory while minimizing the amount of data that must be exchanged between processors.” – Devine teaches massively parallel computers with multiple processors and their associated memories.)
execute a model of a physical system including a plurality of elements for a plurality of time steps such that, for at least a portion of the plurality of time steps, exchange of data between at least a portion of the plurality of elements is suppressed, wherein said execute comprises: (Devine Abstract “We describe and examine the performance of adaptive methods for solving hyperbolic systems of conservation laws on massively parallel computers. The differential system is approximated by a discontinuous Galerkin finite element method with a hierarchical Legendre piecewise polynomial basis for the spatial discretization. Fluxes at element boundaries are computed by solving an approximate Riemann problem; a projection limiter is applied to keep the average solution monotone; time discretization is performed by Runge-Kutta integration; and a p-refinement-based error estimate is used as an enrichment indicator.” – Devine uses massively parallel computers to model a physical system for a series of time steps for a plurality of elements. Page 13 “With balancing, the maximum computation time (not including communication or balancing time) was reduced by 49.8% (see Table 4.1).” – For some of the time steps, communication is suppressed because some of the calculation is performed exclusive of communication.)
estimate, for each time step of the at least the portion of the time steps and each element of the at least the portion of the plurality of elements, current flux data (Devine Abstract “Fluxes at element boundaries are computed by solving an approximate Riemann problem;” Page 13 “With balancing, the maximum computation time (not including communication or balancing time) was reduced by 49.8% (see Table 4.1).” – Flux is estimated during calculation stages.)
for each time step of a second portion of the time steps and each element of the at least the portion of the plurality of elements, receive by a first processing core of a multicore processor, from a second processing core of the multicore processor, current flux data of the neighboring element of the plurality of elements. (Devine Page 2, Last Paragraph “Fluxes at element boundaries are computed by solving an approximate Riemann problem with a projection limiter applied to keep the average solution monotone near discontinuities [11,14,36]. Time discretization is performed by an explicit Runge-Kutta method.” Page 3, First Paragraph “The discontinuous Galerkin method is well suited to parallelization on massively parallel computers. The computational stencil involves only nearest-neighbor communication regardless of the degree of the piecewise polynomial approximation and the spatial dimension. Additional storage is needed for only one row of "ghost" elements along each edge of a processor's subdomain. Thus, the size of the problem scales easily with the number of processors. Indeed, for two-dimensional problems on rectangular domains with periodic boundary conditions, scaled parallel efficiencies in excess of 97% are achieved [11].” Page 5, Last Paragraph “Therefore, the flux may be calculated by computing each component separately, exchanging
f₁⁺ and f₁⁻ between elements, and summing. This splitting approximately halves the computational and parallel communications effort relative to other flux evaluation schemes.” – Devine spends some time steps communicating flux data between edge/boundary elements.)
Devine teaches estimating fluxes at element boundaries (Devine Abstract). Devine does not appear to explicitly teach, but Devine in view of Yook teaches:
for each time step of the at least the portion of the time steps and each element of the at least the portion of the plurality of elements, estimating current flux data […] from past flux data obtained from a state of a neighboring element of the plurality of elements in a time step immediately preceding said time step; and (Yook Abstract “This paper describes a new framework for distributed control systems in which estimators are used at each node to estimate the values of the outputs at the other nodes. The estimated values are then used to compute the control algorithms at each node. When the estimated value deviates from the true value by more than a pre-specified tolerance, the actual value is broadcast to the rest of the system; all of the estimators are then updated to the current value. By using the estimated values instead of true value at every node, a significant savings in the required bandwidth is achieved, allowing large-scale distributed control systems to be implemented effectively. The stability, performance, and expected communication frequency of the reduced communication system are analyzed in detail. Simulation and experimental results validating the effectiveness and communication savings of the framework are also presented.” Page 4, 3.2 State Estimator Framework “The main idea of the proposed estimator framework is that all nodes have identical estimators and thus identical estimator states. The estimated values of the remote outputs are used in the feedback control. Every sample time, the controller at the ith node compares the estimate of the ith output to its true values. If the difference is greater than a predefined threshold, the true value is communicated to the other nodes. Therefore, the error between the estimated data used in the control algorithm and the actual value is always bounded by a threshold value that can be chosen by the control designer. 
This bound between the estimated and actual data influences the behavior of the entire system; more details are given in Section 4.” Page 5, Second-Third Paragraphs “When there is a communication from ith node, the estimators in all nodes update their states to reflect the current actual value of the system outputs or states.1 In order to update the estimator states, we need to be able to recover the ith node states from the output
Yᵢ*. If the output of a MIMO system is also a state variable, we can update the corresponding estimator state. For a system which is observable from only the ith output, all estimator states could be reconstructed and updated (although the actual controller outputs from all other nodes would be required). If all the ith node states are measured, we can simply use the measurements to update the estimator states. Detailed analysis of different estimator state updating methods will be presented in Section 4.2. In a distributed control system with the proposed state estimator, the computation load at each node is increased due to the additional computation related to the estimator; however, the required communication is decreased since the communication only occurs when the difference between the measured actual output and the estimated output of the system is above the threshold value. The required communication within the system with the proposed estimator will be discussed in more detail in the following section.” Page 2, 2.1 Dead Reckoning “Dead reckoning is an approach to data replication in which every host models the state (position, velocity, and acceleration) of each remote entity by applying predictive extrapolation based on update information sent from each entity’s local host. Dead reckoning typically reduces bandwidth because update packets can be transmitted at lower than frame rate frequencies. When remote sites are updated at a slower rate, receivers use extrapolation to provide seamlessly integrated remote and local entities on display. Dead reckoning protocols are implemented in distributed simulation games such as Amaze [3], and virtual battlefield environments such as U.S. Army’s Simulation Networking system [13].” – Yook teaches using past communicated state data (e.g., flux) from neighbors to extrapolate/estimate newly communicated state data as an element of calculation to minimize communication time.)
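NOTE: For illustration only and not part of the record, the Yook-style estimator framework quoted above (estimate a remote output every sample time; broadcast the true value only when the estimate drifts past a tolerance) can be sketched as follows. All names (extrapolate, simulate, threshold) are hypothetical and the extrapolator is a simple dead-reckoning rule, consistent with Yook's Section 2.1.

```python
def extrapolate(history):
    """Dead-reckoning estimate: linear extrapolation from the last two samples."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def simulate(true_values, threshold):
    """Track one remote output; return (estimates, broadcast_count).

    Sender and receiver keep identical estimator state (per Yook), so a
    message is sent only when the shared estimate exceeds the tolerance.
    """
    history = [true_values[0]]        # both nodes start in sync
    estimates = [true_values[0]]
    broadcasts = 0
    for actual in true_values[1:]:
        est = extrapolate(history)
        if abs(est - actual) > threshold:
            history.append(actual)    # true value is broadcast; estimators update
            estimates.append(actual)
            broadcasts += 1
        else:
            history.append(est)       # estimate kept; no communication occurs
            estimates.append(est)
    return estimates, broadcasts
```

On a linearly changing signal, communication is suppressed entirely until the trend breaks, which is the bandwidth saving Yook describes.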
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the distributed computing task of Devine by the approximation techniques of Yook because the person of ordinary skill in the art would be motivated by the expressed object of Devine to minimize the amount of data that must be exchanged between processors to look to Yook, which uses estimation methods to reduce communication between processors using effective and validated methods. (Devine Page 2, Second Paragraph “When solving partial differential equations (PDEs) on MIMD computers, spatial data must be distributed across the processors' memory while minimizing the amount of data that must be exchanged between processors.”; Yook Abstract “This paper describes a new framework for distributed control systems in which estimators are used at each node to estimate the values of the outputs at the other nodes. The estimated values are then used to compute the control algorithms at each node. When the estimated value deviates from the true value by more than a pre-specified tolerance, the actual value is broadcast to the rest of the system; all of the estimators are then updated to the current value. By using the estimated values instead of true value at every node, a significant savings in the required bandwidth is achieved, allowing large-scale distributed control systems to be implemented effectively. The stability, performance, and expected communication frequency of the reduced communication system are analyzed in detail. Simulation and experimental results validating the effectiveness and communication savings of the framework are also presented.”)
Devine in view of Yook does not appear to explicitly teach, but Devine in view of Yook and Webb teaches:
estimate by a machine learning model (Webb Abstract “Extrapolation – the ability to make inferences that go beyond the scope of one’s experiences – is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, temporal context normalization, that encourages representations that emphasize the relations between objects. We find that this technique enables a significant improvement in the ability to extrapolate, considerably outperforming a number of competitive techniques.” – Webb teaches that extrapolation can be modeled by a machine learning model, e.g., as a substitute.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to substitute the extrapolation estimation technique for saving communication overhead of Yook for using a machine learning model that mimics extrapolation in Webb because, under MPEP 2143(I)(B), simple substitution of one known element for another to obtain predictable results, the machine learning model of Webb is an obvious, simple substitution for the extrapolation of Yook. Specifically: (1) the references differ by a component, an estimator. (2) Both extrapolation and ML models that model extrapolation are known in the art (Webb Abstract). (3) One of ordinary skill in the art could have substituted the ML model that models extrapolation for the extrapolation with predictable results, especially with the temporal context normalization of Webb (Webb Abstract). (4) There are no factors indicating that the substitution is other than obvious or that the substitution would not function as intended.
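NOTE: The substitution rationale above can be illustrated, purely hypothetically and not as any party's method, by fitting a learned model to (previous two samples → next sample) pairs and using it in place of the fixed dead-reckoning rule next = 2·last − prev. The names (fit_linear, series, pairs) are illustrative only; the model is an ordinary least-squares fit solved via 2×2 normal equations.

```python
def fit_linear(pairs):
    """Least-squares fit of next ≈ a*prev + b*last via 2x2 normal equations."""
    sxx = sxy = syy = sxz = syz = 0.0
    for (x, y), z in pairs:
        sxx += x * x; sxy += x * y; syy += y * y
        sxz += x * z; syz += y * z
    det = sxx * syy - sxy * sxy          # nonzero for the data used below
    a = (sxz * syy - syz * sxy) / det
    b = (syz * sxx - sxz * sxy) / det
    return a, b

# Training pairs drawn from a linearly advancing trajectory; for such data
# the learned model recovers the dead-reckoning rule (a = -1, b = 2).
series = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
pairs = [((series[i], series[i + 1]), series[i + 2]) for i in range(len(series) - 2)]
a, b = fit_linear(pairs)
predicted = a * 3.0 + b * 4.0  # estimate the sample following (3.0, 4.0)
```

Because the learned estimator reproduces the hand-coded extrapolation on the data it was fit to, the substitution yields a predictable result in this toy setting.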
In the alternative, should the Webb reference be found insufficient, Devine in view of Yook does not appear to explicitly teach, but Devine in view of Yook and Magiera teaches:
estimate by a machine learning model (Magiera Abstract “We propose two different strategies to generate constraint-aware neural networks. We test their performance in the context of front-capturing schemes for strongly nonlinear wave motion in compressible fluid flow. Precisely in this context so-called Riemann problems have to be solved as surrogates. Their solution describes the local dynamics of the captured wave front in numerical simulations. Three model problems are considered: a cubic flux model problem, an isothermal two-phase flow model, and the Euler equations. We observe, that constraint-aware neural networks do not only account for the constraint but lead to an improvement of the numerical accuracy of the overall fluid simulation.” – Magiera would conduct the Riemann estimation of Devine using machine learning.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the Riemann estimation of Devine by the Riemann estimation method using machine learning in Magiera because the person of ordinary skill in the art would be motivated by the aim of Devine to effectuate quality mesh calculations by estimation to look to Magiera, which performs Riemann estimations using machine learning to improve the numerical accuracy of the simulation. (Devine Abstract “We also demonstrate a fast, tree-based parallel partitioning strategy for three-dimensional octree-structured meshes. This method produces partition quality comparable to recursive spectral bisection at a greatly reduced cost.”; Magiera Abstract “We propose two different strategies to generate constraint-aware neural networks. We test their performance in the context of front-capturing schemes for strongly nonlinear wave motion in compressible fluid flow. Precisely in this context so-called Riemann problems have to be solved as surrogates. Their solution describes the local dynamics of the captured wave front in numerical simulations. Three model problems are considered: a cubic flux model problem, an isothermal two-phase flow model, and the Euler equations. We observe, that constraint-aware neural networks do not only account for the constraint but lead to an improvement of the numerical accuracy of the overall fluid simulation.”)
Regarding claim 1, claim 1 recites the operations effectuated by the apparatus of claim 21, so claim 1 is rejected for at least the same reasons as claim 21.
Claims 2 and 22
Regarding claims 2 and 22, Devine in view of Yook and (Webb or Magiera) teaches the features of claims 1 and 21 and further teaches:
wherein the plurality of elements is grouped into a plurality of partitions; and wherein the at least the portion of the plurality of elements are on edges of the plurality of partitions. (Devine Page 2, Second Paragraph “Distributed-memory, massively parallel computers have enabled the development of applications requiring computational resources previously unavailable. Finite difference and finite element methods for structural mechanics and fluid dynamics problems, for example, often require millions of degrees of freedom to accurately simulate physical phenomenon. When solving partial differential equations (PDEs) on MIMD computers, spatial data must be distributed across the processors' memory while minimizing the amount of data that must be exchanged between processors. This problem is especially acute when dealing with (i) adaptive methods, where mesh structure and work loads change during the computation, and (ii) three-dimensional meshes, whose data grow at a faster rate than in two dimensions when performing h-refinement. The challenge is to combine the computational efficiency of adaptive methods with the computational resources of massively parallel computation.” Page 6, Third-Fourth Paragraphs “We have developed an adaptive p-refinement method for two-dimensional systems that uses dynamic load balancing to adjust the processor decomposition in the presence of nonuniform and changing work loads. Tiling [37] is a modification of a dynamic load balancing technique developed by Leiss and Reddy [24] that balances work within overlapping processor neighborhoods to achieve a global load balance. Work is migrated from a processor to others within the same neighborhood. We demonstrate the improved performance obtained from a combination of p-adaptivity and parallel computation on several examples using a 1024-processor nCUBE/2 hypercube. 
For three-dimensional problems with irregular grids of tetrahedral elements and adaptive h-refinement, we have developed a tree-based mesh partitioning technique that exploits the properties of tree-structured meshes. The rich, hierarchical structure of these meshes allows them to be divided into components along boundaries of the tree structure. Our partitioning technique is based on two tree traversals that ( i) calculate the processing costs of all subtrees of a node, and ( ii) form the partitions. Our method is inexpensive and, thus, has an advantage relative to other global partitioning techniques [21,23,27]. We demonstrate the performance of the tree-based mesh partitioning technique on a variety of three-dimensional meshes and discuss extension of the technique for parallel implementation and dynamic load balancing. We present results, using a Thinking Machines CM-5 computer, for the adaptive h-refinement solutions of an Euler flow past a cone.” See Also FIGs. 4.1 and 4.3 – The elements are distributed to memory partitions, each partition includes elements assigned to be processed by a processor of a multiprocessor system. FIGs. 4.1 and 4.3 illustrate the edges of these partitions and the cells thereon.)
Claims 4 and 24
Regarding claims 4 and 24, Devine in view of Yook and (Webb or Magiera) teaches the features of claims 1 and 21, and further teaches:
wherein the executable code, when executed by the one or more processing devices, further causes the one or more processing devices to, for each time step of the at least the portion of the time steps and each element of the at least the portion of the plurality of elements: estimate current flux data from past flux data obtained from states of the neighboring element of the plurality of elements in multiple time steps immediately preceding said each time step. (Devine Table 4.1 – Even in the last column, the communication steps take 72.65 seconds, and the computation time is 393.32 seconds. Even if some of that time overlaps, the magnitude of the disparity means there is at least one instance where computation is done for multiple consecutive time steps before communication, because there is more than twice as much computation time as communication time. The calculations performed prior to communication of the data (data that may later be used by Yook for estimation) therefore occur over multiple consecutive time steps prior to communication.)
for each time step of the at least the portion of the time steps and each element of the at least the portion of the plurality of elements: estimate current flux data from past flux data obtained from states of a neighboring element of the plurality of elements in multiple time steps immediately preceding said each time step. (Yook Abstract “This paper describes a new framework for distributed control systems in which estimators are used at each node to estimate the values of the outputs at the other nodes. The estimated values are then used to compute the control algorithms at each node. When the estimated value deviates from the true value by more than a pre-specified tolerance, the actual value is broadcast to the rest of the system; all of the estimators are then updated to the current value. By using the estimated values instead of true value at every node, a significant savings in the required bandwidth is achieved, allowing large-scale distributed control systems to be implemented effectively. The stability, performance, and expected communication frequency of the reduced communication system are analyzed in detail. Simulation and experimental results validating the effectiveness and communication savings of the framework are also presented.” Page 2, 2.1 Dead Reckoning “Dead reckoning is an approach to data replication in which every host models the state (position, velocity, and acceleration) of each remote entity by applying predictive extrapolation based on update information sent from each entity’s local host. Dead reckoning typically reduces bandwidth because update packets can be transmitted at lower than frame rate frequencies. When remote sites are updated at a slower rate, receivers use extrapolation to provide seamlessly integrated remote and local entities on display. Dead reckoning protocols are implemented in distributed simulation games such as Amaze [3], and virtual battlefield environments such as U.S. 
Army’s Simulation Networking system [13].” Page 4, 3.2 State Estimator Framework “The main idea of the proposed estimator framework is that all nodes have identical estimators and thus identical estimator states. The estimated values of the remote outputs are used in the feedback control. Every sample time, the controller at the ith node compares the estimate of the ith output to its true values. If the difference is greater than a predefined threshold, the true value is communicated to the other nodes. Therefore, the error between the estimated data used in the control algorithm and the actual value is always bounded by a threshold value that can be chosen by the control designer. This bound between the estimated and actual data influences the behavior of the entire system; more details are given in Section 4.” Page 5, Second-Third Paragraphs “When there is a communication from ith node, the estimators in all nodes update their states to reflect the current actual value of the system outputs or states.1 In order to update the estimator states, we need to be able to recover the ith node states from the output
Yᵢ*. If the output of a MIMO system is also a state variable, we can update the corresponding estimator state. For a system which is observable from only the ith output, all estimator states could be reconstructed and updated (although the actual controller outputs from all other nodes would be required). If all the ith node states are measured, we can simply use the measurements to update the estimator states. Detailed analysis of different estimator state updating methods will be presented in Section 4.2. In a distributed control system with the proposed state estimator, the computation load at each node is increased due to the additional computation related to the estimator; however, the required communication is decreased since the communication only occurs when the difference between the measured actual output and the estimated output of the system is above the threshold value. The required communication within the system with the proposed estimator will be discussed in more detail in the following section.” – Yook teaches using past communicated state data (e.g., flux) from neighbors to extrapolate/estimate newly communicated state data as an element of calculation to minimize communication time.)
Claims 5 and 25
Regarding claims 5 and 25, Devine in view of Yook and (Webb or Magiera) teaches the features of claims 1 and 21 and further teaches:
wherein the executable code, when executed by the one or more processing devices, further causes the one or more processing devices to, for each time step of the at least the portion of the time steps and each element of the at least the portion of the plurality of elements: estimate current flux data by extrapolating past flux data obtained from states of a neighboring element of the plurality of elements in multiple time steps immediately preceding said each time step. (Devine Abstract “Fluxes at element boundaries are computed by solving an approximate Riemann problem;” Page 13 “With balancing, the maximum computation time (not including communication or balancing time) was reduced by 49.8% (see Table 4.1).” – Flux is estimated during calculation stages at multiple time steps. Table 4.1 – Even in the last column, the communication steps take 72.65 seconds, and the computation time is 393.32 seconds. Even if some of that time overlaps, the magnitude of the disparity means there is at least one instance where computation is performed for multiple consecutive time steps before communication, because there is more than twice as much computation time as communication time. The calculations performed prior to communication of the data (data that may later be used by Yook for estimation) therefore occur over multiple consecutive time steps prior to communication.)
estimate current data by extrapolating past data obtained from states of a neighboring element of the plurality of elements in multiple time steps immediately preceding said each time step. (Yook Abstract “This paper describes a new framework for distributed control systems in which estimators are used at each node to estimate the values of the outputs at the other nodes. The estimated values are then used to compute the control algorithms at each node. When the estimated value deviates from the true value by more than a pre-specified tolerance, the actual value is broadcast to the rest of the system; all of the estimators are then updated to the current value. By using the estimated values instead of true value at every node, a significant savings in the required bandwidth is achieved, allowing large-scale distributed control systems to be implemented effectively. The stability, performance, and expected communication frequency of the reduced communication system are analyzed in detail. Simulation and experimental results validating the effectiveness and communication savings of the framework are also presented.” Page 2, 2.1 Dead Reckoning “Dead reckoning is an approach to data replication in which every host models the state (position, velocity, and acceleration) of each remote entity by applying predictive extrapolation based on update information sent from each entity’s local host. Dead reckoning typically reduces bandwidth because update packets can be transmitted at lower than frame rate frequencies. When remote sites are updated at a slower rate, receivers use extrapolation to provide seamlessly integrated remote and local entities on display. Dead reckoning protocols are implemented in distributed simulation games such as Amaze [3], and virtual battlefield environments such as U.S. 
Army’s Simulation Networking system [13].” Page 4, 3.2 State Estimator Framework “The main idea of the proposed estimator framework is that all nodes have identical estimators and thus identical estimator states. The estimated values of the remote outputs are used in the feedback control. Every sample time, the controller at the ith node compares the estimate of the ith output to its true values. If the difference is greater than a predefined threshold, the true value is communicated to the other nodes. Therefore, the error between the estimated data used in the control algorithm and the actual value is always bounded by a threshold value that can be chosen by the control designer. This bound between the estimated and actual data influences the behavior of the entire system; more details are given in Section 4.” Page 5, Second-Third Paragraphs “When there is a communication from ith node, the estimators in all nodes update their states to reflect the current actual value of the system outputs or states.1 In order to update the estimator states, we need to be able to recover the ith node states from the output
Yᵢ*. If the output of a MIMO system is also a state variable, we can update the corresponding estimator state. For a system which is observable from only the ith output, all estimator states could be reconstructed and updated (although the actual controller outputs from all other nodes would be required). If all the ith node states are measured, we can simply use the measurements to update the estimator states. Detailed analysis of different estimator state updating methods will be presented in Section 4.2. In a distributed control system with the proposed state estimator, the computation load at each node is increased due to the additional computation related to the estimator; however, the required communication is decreased since the communication only occurs when the difference between the measured actual output and the estimated output of the system is above the threshold value. The required communication within the system with the proposed estimator will be discussed in more detail in the following section.” – Yook teaches using past communicated state data (e.g., flux) from neighbors to extrapolate/estimate newly communicated state data as an element of calculation to minimize communication time.)
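NOTE: For illustration only and not part of the record, estimating a current value from states in multiple immediately preceding time steps (as mapped for claims 5 and 25) can be sketched with a quadratic extrapolation through the last three equally spaced samples. The names (extrapolate_quadratic, history) are hypothetical; the coefficient pattern follows from Newton forward differences.

```python
def extrapolate_quadratic(h):
    """Predict the next sample from the last three equally spaced samples.

    For a quadratic trend f, Newton forward differences give
    f(t+1) = 3*f(t) - 3*f(t-1) + f(t-2), which is exact for quadratics.
    """
    a, b, c = h[-3], h[-2], h[-1]
    return 3 * c - 3 * b + a

history = [0.0, 1.0, 4.0, 9.0]             # f(t) = t**2 sampled at t = 0..3
estimate = extrapolate_quadratic(history)  # uses three preceding time steps
```

Using more than one preceding time step lets the estimator track curvature, not just slope, before any fresh communication arrives.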
Claim 15
NOTE: The rejection of claim 9 and some of the dependent claims that depend from claim 9 can be found below in a different rejection.
Regarding claim 15, Devine in view of Yook and (Webb or Magiera) teach the features of claim 13 and further teach:
wherein calculating the second flux data comprises processing the first flux data and the one or more state variables of the first element using [Extrapolation] . (Yook Abstract “This paper describes a new framework for distributed control systems in which estimators are used at each node to estimate the values of the outputs at the other nodes. The estimated values are then used to compute the control algorithms at each node. When the estimated value deviates from the true value by more than a pre-specified tolerance, the actual value is broadcast to the rest of the system; all of the estimators are then updated to the current value. By using the estimated values instead of true value at every node, a significant savings in the required bandwidth is achieved, allowing large-scale distributed control systems to be implemented effectively. The stability, performance, and expected communication frequency of the reduced communication system are analyzed in detail. Simulation and experimental results validating the effectiveness and communication savings of the framework are also presented.” Page 2, 2.1 Dead Reckoning “Dead reckoning is an approach to data replication in which every host models the state (position, velocity, and acceleration) of each remote entity by applying predictive extrapolation based on update information sent from each entity’s local host. Dead reckoning typically reduces bandwidth because update packets can be transmitted at lower than frame rate frequencies. When remote sites are updated at a slower rate, receivers use extrapolation to provide seamlessly integrated remote and local entities on display. Dead reckoning protocols are implemented in distributed simulation games such as Amaze [3], and virtual battlefield environments such as U.S. Army’s Simulation Networking system [13].” – Yook estimates by extrapolation current data based on prior communicated data to avoid communication overhead of waiting for new data.)
Devine in view of Yook does not appear to explicitly teach, but Devine in view of Yook and Webb teaches:
wherein calculating the second flux data comprises processing the first flux data and the one or more state variables of the first element using a machine learning model. (Webb “Extrapolation – the ability to make inferences that go beyond the scope of one’s experiences – is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, temporal context normalization, that encourages representations that emphasize the relations between objects. We find that this technique enables a significant improvement in the ability to extrapolate, considerably outperforming a number of competitive techniques. – Webb teaches that extrapolation can be modeled by a machine learning model, e.g., as a substitute)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to substitute, for the extrapolation estimation technique of Yook that saves communication overhead, the machine learning model that mimics extrapolation of Webb because, under MPEP 2143(I)(B) (simple substitution of one known element for another to obtain predictable results), the machine learning model of Webb is an obvious, simple substitution for the extrapolation of Yook. Specifically: (1) the references differ by one component, the estimator; (2) both extrapolation and ML models that model extrapolation are known in the art (Webb Abstract); (3) one of ordinary skill in the art could have substituted the ML model that models extrapolation for the extrapolation with predictable results, especially with the temporal context normalization of Webb (Webb Abstract); and (4) there are no factors indicating that the substitution is other than obvious or that the substitution would not function as intended.
In the alternative, should it be found that the Webb reference is insufficient, Devine in view of Yook does not appear to explicitly teach, but Devine in view of Yook and Magiera teaches:
wherein calculating the second flux data comprises processing the first flux data and the one or more state variables of the first element using a machine learning model. (Magiera Page 3, Neural Networks “We are interested in approximating a function of the form […] using neural networks.” – This performs the Riemann estimation using a neural network, i.e., machine learning.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the Riemann estimation of Devine by the Riemann estimation method using machine learning in Magiera because the person of ordinary skill in the art would be motivated by the aim of Devine to effectuate quality mesh calculations by estimation to look to Magiera, which performs Riemann estimations using machine learning, to improve the numerical accuracy of the simulation. (Devine Abstract “We also demonstrate a fast, tree-based parallel partitioning strategy for three-dimensional octree-structured meshes. This method produces partition quality comparable to recursive spectral bisection at a greatly reduced cost.”; Magiera Abstract “We propose two different strategies to generate constraint-aware neural networks. We test their performance in the context of front-capturing schemes for strongly nonlinear wave motion in compressible fluid flow. Precisely in this context so-called Riemann problems have to be solved as surrogates. Their solution describes the local dynamics of the captured wave front in numerical simulations. Three model problems are considered: a cubic flux model problem, an isothermal two-phase flow model, and the Euler equations. We observe, that constraint-aware neural networks do not only account for the constraint but lead to an improvement of the numerical accuracy of the overall fluid simulation.”)
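For illustration of the proposed modification only: a learned surrogate can stand in, drop-in fashion, for an exact Riemann-type flux evaluation. This sketch uses the Godunov flux for Burgers' equation and plain least-squares regression as stand-ins; neither appears in this form in Devine or Magiera (Magiera's actual models are constraint-aware neural networks), and all names and data here are hypothetical:

```python
import numpy as np

def godunov_flux(ul, ur):
    """Exact Godunov flux for Burgers' equation f(u) = u^2/2,
    used here only as a stand-in Riemann-type solve."""
    return max(max(ul, 0.0) ** 2, min(ur, 0.0) ** 2) / 2.0

# "Train" a quadratic least-squares surrogate on sampled left/right states.
rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(500, 2))
targets = np.array([godunov_flux(a, b) for a, b in samples])
ul, ur = samples[:, 0], samples[:, 1]
features = np.column_stack([np.ones_like(ul), ul, ur, ul * ur, ul**2, ur**2])
coeffs, *_ = np.linalg.lstsq(features, targets, rcond=None)

def surrogate_flux(ul, ur):
    """Learned drop-in replacement for the exact flux evaluation."""
    phi = np.array([1.0, ul, ur, ul * ur, ul**2, ur**2])
    return float(phi @ coeffs)
```

The point of the sketch is the interface: `surrogate_flux` takes the same left/right states and returns an approximate flux, so the learned model substitutes for the exact solve without changing the surrounding scheme.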
Claims 7 and 16
NOTE: The rejection of claim 9 and some of the dependent claims that depend from claim 9 can be found below in a different rejection.
Regarding claims 7 and 16, Devine in view of Yook and (Webb or Magiera) teaches the features of claims 1 and 15, and further teaches:
wherein the machine learning model is a neural network. (Webb Abstract “By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora.” Page 1, Right Column, Third Paragraph “We hypothesized that the application of TCN would improve the ability of neural networks to extrapolate. Critically, when trained end-to-end, we expect the presence of TCN to impose constraints on both upstream (computed
prior to a layer with normalization) and downstream (computed post-normalization) representations, promoting the acquisition of abstract representations that reflect task-relevant symmetries.” – The machine learning model is a neural network.)
In the alternative, should it be found that the Webb reference is insufficient, Devine in view of Yook does not appear to explicitly teach, but Devine in view of Yook and Magiera teaches:
estimate by a machine learning model (Magiera Abstract “We propose two different strategies to generate constraint-aware neural networks. We test their performance in the context of front-capturing schemes for strongly nonlinear wave motion in compressible fluid flow. Precisely in this context so-called Riemann problems have to be solved as surrogates. Their solution describes the local dynamics of the captured wave front in numerical simulations. Three model problems are considered: a cubic flux model problem, an isothermal two-phase flow model, and the Euler equations. We observe, that constraint-aware neural networks do not only account for the constraint but lead to an improvement of the numerical accuracy of the overall fluid simulation.” – Magiera would conduct the Riemann estimation of Devine using machine learning.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the Riemann estimation of Devine by the Riemann estimation method using machine learning in Magiera because the person of ordinary skill in the art would be motivated by the aim of Devine to effectuate quality mesh calculations by estimation to look to Magiera, which performs Riemann estimations using machine learning, to improve the numerical accuracy of the simulation. (Devine Abstract “We also demonstrate a fast, tree-based parallel partitioning strategy for three-dimensional octree-structured meshes. This method produces partition quality comparable to recursive spectral bisection at a greatly reduced cost.”; Magiera Abstract “We propose two different strategies to generate constraint-aware neural networks. We test their performance in the context of front-capturing schemes for strongly nonlinear wave motion in compressible fluid flow. Precisely in this context so-called Riemann problems have to be solved as surrogates. Their solution describes the local dynamics of the captured wave front in numerical simulations. Three model problems are considered: a cubic flux model problem, an isothermal two-phase flow model, and the Euler equations. We observe, that constraint-aware neural networks do not only account for the constraint but lead to an improvement of the numerical accuracy of the overall fluid simulation.”)
Claims 8 and 17
NOTE: The rejection of claim 9 and some of the dependent claims that depend from claim 9 can be found below in a different rejection.
Regarding claims 8 and 17, Devine in view of Yook and Webb teaches the features of claims 7 and 16, and further teaches:
wherein the neural network is a deep neural network including one or more hidden layers. (Webb Page 4, 3.3 Dynamic Object prediction Model “To address the dynamic object prediction task, we employ an approach that combines an autoencoder and a recurrent network (detailed in 4.3). First, we train an autoencoder to generate a low-dimensional embedding z given an image x. Then, for each sequence of images x1 ... xT, we obtain the corresponding low-dimensional embeddings z1 ... zT. Finally, we train a recurrent network to predict zt given z1 ... zt-1. The combined system is capable of making predictions in image space given an input sequence of images, and can be fine-tuned end-to-end.” – The model includes an encoder portion and a recurrent network (e.g., LSTM) portion. Page 4, 4.1 Analogy Architecture and Training Procedure “We applied either TCN, or one out of a number of other normalization techniques (detailed in 4.2), to these embeddings, and then passed a sequence consisting of the embeddings for A, B, C, and the candidate answer to an LSTM with a single hidden layer of 256 units.” – The recurrent/LSTM portion of the model includes a single hidden layer. Page 5, Right Column, 4.3 Dynamic Object Prediction “For the dynamic object prediction model, we first trained an autoencoder to generate a low-dimensional embedding z given an image x. The encoder architecture consisted of 3 convolutional layers, each with 32 kernels of size 4x4, and a stride of 2, resulting in a feature map of size 8 x 8 x 32. This was followed by 2 fully-connected layers, with 256 units per layer. ReLU nonlinearities were used in all layers of the encoder. The image embedding z was then generated with a linear layer consisting of 10 units. The decoder architecture consisted of 2 fully-connected layers with 256 units, followed by a fully-connected layer with 2,048 units (reshaped for input to convolutional layers).
This was followed by 2 layers of transposed convolutions, each with 32 kernels of size 4 x 4, and a fractional stride of 1/2, and a final transposed convolutional layer with 1 output channel (also with kernel size 4 x 4 and a fractional stride of 1/2).” – This teaches that the autoencoder portion includes multiple hidden layers. A single hidden layer (of the LSTM portion) added to multiple hidden layers (of the autoencoder portion) yields one or more hidden layers.)
Should it be found that the Webb reference is deficient for any reason, regarding claims 8 and 17, Devine in view of Yook and Magiera teaches the features of claims 7 and 16, and further teaches:
wherein the neural network is a deep neural network including one or more hidden layers. (Magiera Page 3, Neural Networks “In particular, we consider a specific feed-forward network architecture known as multilayer perceptrons (MLPs), in which the basic computing units (neurons) are stacked in layers. The first layer is called the input layer, and has the sole purpose of providing an input signal to the network. The last layer is the output layer, while all the intermediate layers are known as the hidden layers.”)
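A minimal sketch of the MLP structure Magiera describes, with an input layer, one or more ReLU hidden layers, and a linear output layer. This is illustrative only; the weights, names, and shapes are hypothetical and not drawn from Magiera:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass of a multilayer perceptron: every (W, b) pair before
    the last defines a hidden layer with a ReLU nonlinearity; the final
    pair is the linear output layer."""
    h = np.asarray(x, dtype=float)           # input layer: just the signal
    for W, b in list(zip(weights, biases))[:-1]:
        h = np.maximum(0.0, W @ h + b)       # hidden layer
    W, b = weights[-1], biases[-1]
    return W @ h + b                         # output layer
```

With one (W, b) pair before the output layer the network has a single hidden layer; stacking more pairs yields a deep network, matching the quoted "one or more hidden layers."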
Claims 9, 11-14, and 18-20: Devine and Yook
Claims 9, 11-14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over NPL to Devine et al. (Devine) in view of NPL: “Trading Computation for Bandwidth: Reducing Communication in Distributed Control Systems Using State Estimators” by Yook et al. (Yook).
Claim 9
Regarding claim 9, Devine teaches:
A method comprising: providing, on a computer system, a model of a physical system including a plurality of elements each having one or more state variables; (Devine Abstract “We describe and examine the performance of adaptive methods for solving hyperbolic systems of conservation laws on massively parallel computers.” Page 2, First Paragraph “When solving partial differential equations (PDEs) on MIMD computers, spatial data must be distributed across the processors' memory while minimizing the amount of data that must be exchanged between processors.” – Devine teaches massively parallel computers with multiple processors and their associated memories. Page 4, Last Paragraph – Page 5, First Paragraph “We specify it using a "numerical flux" function
h(U_j^+, U_j^-) dependent on solution states U_j^+ and U_j^- on the inside and outside, respectively, of ∂Ω_j
.” – State variables communicated and/or estimated include inside and outside states as well as flux. Page 5, The Paragraph Including Equation 2.6 “In three dimensions, we use van Leer's flux vector splitting [25,35] to construct a numerical flux. This technique is not generally applicable but does apply to the Euler equations of compressible flow which have the solution and flux vectors
[media_image4.png – solution and flux vectors of the Euler equations]
where ρ, e, and p are the density, energy, and pressure; u' is the velocity component in the direction of n; and v' and w' are velocity components tangent to
∂Ω_j.” – Here are further state variables estimated/communicated.)
and repeatedly performing: for each time step of one or more first time steps: obtaining by a first processing core of a multicore processor, from a second processing core of the multicore processor, first flux data for a first element of the plurality of elements, the first flux data based on the one or more state variables of one or more second elements of the plurality of elements; (Devine Abstract “Fluxes at element boundaries are computed by solving an approximate Riemann problem; a projection limiter is applied to keep the average solution monotone; time discretization is performed by Runge-Kutta integration; and a p-refinement-based error estimate is used as an enrichment indicator.” – The entire method is discretized, so all operations are conducted at time steps, with prior operations occurring at prior steps and later operations occurring at later steps. Page 2, Second Paragraph “Distributed-memory, massively parallel computers have enabled the development of applications requiring computational resources previously unavailable. Finite difference and finite element methods for structural mechanics and fluid dynamics problems, for example, often require millions of degrees of freedom to accurately simulate physical phenomenon. When solving partial differential equations (PDEs) on MIMD computers, spatial data must be distributed across the processors' memory while minimizing the amount of data that must be exchanged between processors. This problem is especially acute when dealing with (i) adaptive methods, where mesh structure and work loads change during the computation, and (ii) three-dimensional meshes, whose data grow at a faster rate than in two dimensions when performing h-refinement. The challenge is to combine the computational efficiency of adaptive methods with the computational resources of massively parallel computation.” Page 5, Last Paragraph “Therefore, the flux may be calculated by computing each component separately, exchanging
f_1^+ and f_1^-
between elements, and summing. This splitting approximately halves the computational and parallel communications effort relative to other flux evaluation schemes.” Page 2, Last Paragraph “We use a discontinuous Galerkin finite element method [11,13,14] where the spatial approximation is continuous within an element, but may be discontinuous at interelement boundaries to accommodate solution discontinuities more accurately. Fluxes at element boundaries are computed by solving an approximate Riemann problem with a projection limiter applied to keep the average solution monotone near discontinuities [11,14,36]. Time discretization is performed by an explicit Runge-Kutta method.” Page 3, First Paragraph “We use a discontinuous Galerkin finite element method [11,13,14] where the spatial approximation is continuous within an element, but may be discontinuous at interelement boundaries to accommodate solution discontinuities more accurately. Fluxes at element boundaries are computed by solving an approximate Riemann problem with a projection limiter applied to keep the average solution monotone near discontinuities [11,14,36]. Time discretization is performed by an explicit Runge-Kutta method.” Page 13, Last Paragraph “The adaptive p-refinement method produced a solution with global error 6.32205 x 10-2 , comparable to the fixed-order solution. With balancing, the maximum computation time (not including communication or balancing time) was reduced by 49.8% (see Table 4.1). – Processor cores in a distributed computing system transfer between the cores state data including variables associated with fluxes, such as fluxes themselves, or other state variables used to calculate the fluxes at some time steps. The repeatedly performing is a repetition of steps, which is rejected in the same manner as the first mapped instance of each step.)
processing, by the computer system, the one or more state variables of the first element and the first flux data according to the model; and updating the one or more state variables according to the processing of the one or more state variables of the first element and the first flux data; (Devine Page 5, Last Paragraph “Therefore, the flux may be calculated by computing each component separately, exchanging
f_1^+ and f_1^-
between elements, and summing. This splitting approximately halves the computational and parallel communications effort relative to other flux evaluation schemes.” Page 2, Last Paragraph “We use a discontinuous Galerkin finite element method [11,13,14] where the spatial approximation is continuous within an element, but may be discontinuous at interelement boundaries to accommodate solution discontinuities more accurately. Fluxes at element boundaries are computed by solving an approximate Riemann problem with a projection limiter applied to keep the average solution monotone near discontinuities [11,14,36]. Time discretization is performed by an explicit Runge-Kutta method.” Page 3, First Paragraph “We use a discontinuous Galerkin finite element method [11,13,14] where the spatial approximation is continuous within an element, but may be discontinuous at interelement boundaries to accommodate solution discontinuities more accurately. Fluxes at element boundaries are computed by solving an approximate Riemann problem with a projection limiter applied to keep the average solution monotone near discontinuities [11,14,36]. Time discretization is performed by an explicit Runge-Kutta method.” Page 13, Last Paragraph “The adaptive p-refinement method produced a solution with global error 6.32205 x 10^-2, comparable to the fixed-order solution. With balancing, the maximum computation time (not including communication or balancing time) was reduced by 49.8% (see Table 4.1).” – The system uses the received state data from the communication phase to calculate edge fluxes.)
Devine teaches calculating local fluxes at times when not communicating (Devine Abstract “Fluxes at element boundaries are computed by solving an approximate Riemann problem;” Page 13 “With balancing, the maximum computation time (not including communication or balancing time) was reduced by 49.8% (see Table 4.1).” – Flux is estimated during calculation stages.), but Devine does not appear to explicitly teach the following, which Devine in view of Yook teaches:
for a second time step subsequent to the one or more first time steps: extrapolating, by the computer system, from the first flux data from the one or more first time steps, second flux data for the first element; processing, by the computer system, the one or more state variables of the first element and the second flux data according to the model; and updating, by the computer system, the one or more state variables of the first element according to the processing of the one or more state variables of the first element and the second flux data. (Yook Page 2, 2.1 Dead Reckoning “Dead reckoning is an approach to data replication in which every host models the state (position, velocity, and acceleration) of each remote entity by applying predictive extrapolation based on update information sent from each entity’s local host. Dead reckoning typically reduces bandwidth because update packets can be transmitted at lower than frame rate frequencies. When remote sites are updated at a slower rate, receivers use extrapolation to provide seamlessly integrated remote and local entities on display. Dead reckoning protocols are implemented in distributed simulation games such as Amaze [3], and virtual battlefield environments such as U.S. Army’s Simulation Networking system [13].” – The exchanged state variables are extrapolated for some time steps using previously transferred data calculated based on transmitted data, perhaps transmitted contemporaneously with the processing, as the processing completes.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the distributed computing task of Devine by the approximation techniques of Yook because the person of ordinary skill in the art would be motivated by the expressed object of Devine to minimize the amount of data that must be exchanged between processors to look to Yook that uses estimation methods to reduce communication between processors using effective and validated methods. (Devine Page 2, Second Paragraph “When solving partial differential equations (PDEs) on MIMD computers, spatial data must be distributed across the processors' memory while minimizing the amount of data that must be exchanged between processors.; Yook Abstract “This paper describes a new framework for distributed control systems in which estimators are used at each node to estimate the values of the outputs at the other nodes. The estimated values are then used to compute the control algorithms at each node. When the estimated value deviates from the true value by more than a pre-specified tolerance, the actual value is broadcast to the rest of the system; all of the estimators are then updated to the current value. By using the estimated values instead of true value at every node, a significant savings in the required bandwidth is achieved, allowing large-scale distributed control systems to be implemented effectively. The stability, performance, and expected communication frequency of the reduced communication system are analyzed in detail. Simulation and experimental results validating the effectiveness and communication savings of the framework are also presented.)
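As an illustration only of the combined teaching as this rejection reads it (not an assertion about the Applicant's implementation), the communicate-then-extrapolate time stepping can be sketched as follows; the function names, toy state update, and step size are hypothetical:

```python
def advance(state, flux_history, dt, communicated_flux=None):
    """One explicit time step. During the first time steps a flux received
    from the other core is used directly; at the subsequent time step the
    flux is linearly extrapolated from the two most recent received values
    instead of waiting on communication."""
    if communicated_flux is not None:
        flux = communicated_flux                    # first time steps
    else:
        f0, f1 = flux_history[-2], flux_history[-1]
        flux = f1 + (f1 - f0)                       # linear extrapolation
    flux_history.append(flux)
    return state + dt * flux                        # update the state variable
```

The `communicated_flux=None` call corresponds to the claimed second time step: the core processes the element with an extrapolated flux rather than a freshly communicated one.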
Claim 11
Regarding claim 11, Devine in view of Yook teaches the features of claim 9 and further teaches:
wherein calculating the second flux data comprises linearly extrapolating the second flux data from the first flux data. (Yook Abstract “This paper describes a new framework for distributed control systems in which estimators are used at each node to estimate the values of the outputs at the other nodes. The estimated values are then used to compute the control algorithms at each node. When the estimated value deviates from the true value by more than a pre-specified tolerance, the actual value is broadcast to the rest of the system; all of the estimators are then updated to the current value. By using the estimated values instead of true value at every node, a significant savings in the required bandwidth is achieved, allowing large-scale distributed control systems to be implemented effectively. The stability, performance, and expected communication frequency of the reduced communication system are analyzed in detail. Simulation and experimental results validating the effectiveness and communication savings of the framework are also presented.” Page 2, 2.1 Dead Reckoning “Dead reckoning is an approach to data replication in which every host models the state (position, velocity, and acceleration) of each remote entity by applying predictive extrapolation based on update information sent from each entity’s local host. Dead reckoning typically reduces bandwidth because update packets can be transmitted at lower than frame rate frequencies. When remote sites are updated at a slower rate, receivers use extrapolation to provide seamlessly integrated remote and local entities on display. Dead reckoning protocols are implemented in distributed simulation games such as Amaze [3], and virtual battlefield environments such as U.S. Army’s Simulation Networking system [13].” Page 4, 3.2 State Estimator Framework “The main idea of the proposed estimator framework is that all nodes have identical estimators and thus identical estimator states. 
The estimated values of the remote outputs are used in the feedback control.” – Yook teaches estimating data, by extrapolation, as a substitute for communicated data.)
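A minimal sketch of two-point linear extrapolation, as this rejection reads Yook's predictive extrapolation onto the claimed calculation (illustrative only; the function name is hypothetical):

```python
def linear_extrapolate(f_prev, f_curr):
    """Continue the straight line through the last two flux samples one
    step forward: f_next = f_curr + (f_curr - f_prev) = 2*f_curr - f_prev."""
    return 2.0 * f_curr - f_prev
```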
Claim 12
Regarding claim 12, Devine in view of Yook teaches the features of claim 9 and further teaches:
wherein the one or more first time steps include at least two time steps. (Devine Abstract “Fluxes at element boundaries are computed by solving an approximate Riemann problem;” Page 13 “With balancing, the maximum computation time (not including communication or balancing time) was reduced by 49.8% (see Table 4.1).” – Flux is estimated during calculation stages at multiple time steps. Table 4.1 – Even in the last column, the communication steps take 72.65 seconds, while the computation time is 393.32 seconds. Even if some of that time overlaps, the magnitude of the disparity means there is at least one instance where computation is performed for multiple consecutive time steps before communication, because computation takes more than twice as long. This means that the calculations performed prior to communication of the data (which may later be used for Yook's estimation) occur over multiple consecutive time steps prior to communication.)
[media_image3.png – Devine Table 4.1]
Claim 13
Regarding claim 13, Devine in view of Yook teaches the features of claim 9, and further teaches:
wherein calculating the second flux data comprises calculating the second flux data based on the first flux data and the one or more state variables of the first element. (Yook Abstract “This paper describes a new framework for distributed control systems in which estimators are used at each node to estimate the values of the outputs at the other nodes. The estimated values are then used to compute the control algorithms at each node. When the estimated value deviates from the true value by more than a pre-specified tolerance, the actual value is broadcast to the rest of the system; all of the estimators are then updated to the current value. By using the estimated values instead of true value at every node, a significant savings in the required bandwidth is achieved, allowing large-scale distributed control systems to be implemented effectively. The stability, performance, and expected communication frequency of the reduced communication system are analyzed in detail. Simulation and experimental results validating the effectiveness and communication savings of the framework are also presented.” Page 2, 2.1 Dead Reckoning “Dead reckoning is an approach to data replication in which every host models the state (position, velocity, and acceleration) of each remote entity by applying predictive extrapolation based on update information sent from each entity’s local host. Dead reckoning typically reduces bandwidth because update packets can be transmitted at lower than frame rate frequencies. When remote sites are updated at a slower rate, receivers use extrapolation to provide seamlessly integrated remote and local entities on display. Dead reckoning protocols are implemented in distributed simulation games such as Amaze [3], and virtual battlefield environments such as U.S. 
Army’s Simulation Networking system [13].” Page 4, 3.2 State Estimator Framework “The main idea of the proposed estimator framework is that all nodes have identical estimators and thus identical estimator states. The estimated values of the remote outputs are used in the feedback control. Every sample time, the controller at the ith node compares the estimate of the ith output to its true values. If the difference is greater than a predefined threshold, the true value is communicated to the other nodes. Therefore, the error between the estimated data used in the control algorithm and the actual value is always bounded by a threshold value that can be chosen by the control designer. This bound between the estimated and actual data influences the behavior of the entire system; more details are given in Section 4.” Page 5, Second-Third Paragraphs “When there is a communication from ith node, the estimators in all nodes update their states to reflect the current actual value of the system outputs or states.1 In order to update the estimator states, we need to be able to recover the ith node states from the output
Y_i^*
. If the output of a MIMO system is also a state variable, we can update the corresponding estimator state. For a system which is observable from only the ith output, all estimator states could be reconstructed and updated (although the actual controller outputs from all other nodes would be required). If all the ith node states are measured, we can simply use the measurements to update the estimator states. Detailed analysis of different estimator state updating methods will be presented in Section 4.2. In a distributed control system with the proposed state estimator, the computation load at each node is increased due to the additional computation related to the estimator; however, the required communication is decreased since the communication only occurs when the difference between the measured actual output and the estimated output of the system is above the threshold value. The required communication within the system with the proposed estimator will be discussed in more detail in the following section.” Page 2, 2.1 Dead Reckoning “Dead reckoning is an approach to data replication in which every host models the state (position, velocity, and acceleration) of each remote entity by applying predictive extrapolation based on update information sent from each entity’s local host. Dead reckoning typically reduces bandwidth because update packets can be transmitted at lower than frame rate frequencies. When remote sites are updated at a slower rate, receivers use extrapolation to provide seamlessly integrated remote and local entities on display. Dead reckoning protocols are implemented in distributed simulation games such as Amaze [3], and virtual battlefield environments such as U.S. Army’s Simulation Networking system [13].” – Yook teaches using past communicated state data (e.g., flux) from neighbors to extrapolate/estimate newly communicated state data as an element of calculation to minimize communication time.)
Claim 14
Regarding claim 14, Devine in view of Yook teaches the features of claim 13 and further teaches:
wherein the one or more state variables include first variables and second variables that are partial derivatives of the first variables. (Devine Page 4, Last Paragraph – Page 5, First Paragraph “We specify it using a "numerical flux" function
h(Uj+, Uj-) dependent on solution states Uj+ and Uj- on the inside and outside, respectively, of ∂Ωj.” – State variables communicated and/or estimated include inside and outside states as well as flux. Page 2, Second Paragraph “When solving partial differential equations (PDEs) on MIMD computers, spatial data must be distributed across the processors' memory while minimizing the amount of data that must be exchanged between processors. This problem is especially acute when dealing with (i) adaptive methods, where mesh structure and work loads change during the computation, and (ii) three-dimensional meshes, whose data grow at a faster rate than in two dimensions when performing h-refinement.” Page 2, Third Paragraph “The subscripts t and xi, i = 1, 2, ..., d, denote partial differentiation with respect to time and the spatial coordinates, and u, u0, and fi, i = 1, 2, ..., d, are m-vectors on the problem domain Ω × (t > 0).” Page 5, The Paragraph Including Equation 2.6 “In three dimensions, we use van Leer's flux vector splitting [25,35] to construct a numerical flux. This technique is not generally applicable but does apply to the Euler equations of compressible flow which have the solution and flux vectors
[media_image4.png: the solution and flux vectors for the Euler equations]
where ρ, e, and p are the density, energy, and pressure; u' is the velocity component in the direction of n; and v' and w' are velocity components tangent to
∂Ωj.” – In the 3D model, the variable u for the m-vectors in each axis is included, as well as its partial derivatives with respect to the spatial coordinates for each axis. This sums to at least three first variables and at least three second variables that are partial derivatives of the first variables for transfer between nodes.)
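NOTE: For illustration only, a numerical flux of the form h(U+, U-) depends solely on the solution states on the inside and outside of the element boundary. The sketch below is the examiner's own and uses the common local Lax-Friedrichs flux as a concrete instance; Devine instead uses van Leer's flux vector splitting for the 3D Euler equations.

```python
def lax_friedrichs_flux(f, u_in, u_out, alpha):
    """A numerical flux h(U+, U-) built from the physical flux f and the
    solution states on the inside (u_in) and outside (u_out) of the
    element boundary. alpha bounds the local wave speed."""
    return 0.5 * (f(u_in) + f(u_out)) - 0.5 * alpha * (u_out - u_in)
```

For linear advection f(u) = 2u with alpha = 2, the flux reduces to the upwind value f(u_in), as expected for a positive wave speed.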
Claim 18
Regarding claim 18, Devine in view of Yook teaches the features of claim 9, and further teaches:
wherein the plurality of elements are divided into a plurality of partitions, each partition including a group of contiguous elements of the plurality of elements. (Devine Page 2, Second Paragraph “Distributed-memory, massively parallel computers have enabled the development of applications requiring computational resources previously unavailable. Finite difference and finite element methods for structural mechanics and fluid dynamics problems, for example, often require millions of degrees of freedom to accurately simulate physical phenomenon. When solving partial differential equations (PDEs) on MIMD computers, spatial data must be distributed across the processors' memory while minimizing the amount of data that must be exchanged between processors. This problem is especially acute when dealing with (i) adaptive methods, where mesh structure and work loads change during the computation, and (ii) three-dimensional meshes, whose data grow at a faster rate than in two dimensions when performing h-refinement. The challenge is to combine the computational efficiency of adaptive methods with the computational resources of massively parallel computation.” Page 6, Third-Fourth Paragraphs “We have developed an adaptive p-refinement method for two-dimensional systems that uses dynamic load balancing to adjust the processor decomposition in the presence of nonuniform and changing work loads. Tiling [37] is a modification of a dynamic load balancing technique developed by Leiss and Reddy [24] that balances work within overlapping processor neighborhoods to achieve a global load balance. Work is migrated from a processor to others within the same neighborhood. We demonstrate the improved performance obtained from a combination of p-adaptivity and parallel computation on several examples using a 1024-processor nCUBE/2 hypercube. 
For three-dimensional problems with irregular grids of tetrahedral elements and adaptive h-refinement, we have developed a tree-based mesh partitioning technique that exploits the properties of tree-structured meshes. The rich, hierarchical structure of these meshes allows them to be divided into components along boundaries of the tree structure. Our partitioning technique is based on two tree traversals that (i) calculate the processing costs of all subtrees of a node, and (ii) form the partitions. Our method is inexpensive and, thus, has an advantage relative to other global partitioning techniques [21,23,27]. We demonstrate the performance of the tree-based mesh partitioning technique on a variety of three-dimensional meshes and discuss extension of the technique for parallel implementation and dynamic load balancing. We present results, using a Thinking Machines CM-5 computer, for the adaptive h-refinement solutions of an Euler flow past a cone.” See Also FIGs. 4.1 and 4.3 – The elements are distributed to memory partitions, each partition includes contiguous elements assigned to be processed by a processor of a multiprocessor system. FIGs. 4.1 and 4.3 illustrate the edges of these partitions and the cells thereon.)
[media_image1.png: Devine FIG. 4.1]
[media_image2.png: Devine FIG. 4.3]
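NOTE: For illustration only, Devine's two-traversal, tree-based partitioning can be sketched as follows. This is the examiner's own sketch, not Devine's algorithm; the greedy even-share cut rule and all names are assumptions, but the structure (a first traversal accumulating subtree costs, a second forming partitions of tree-contiguous elements) mirrors the quoted description.

```python
class MeshNode:
    """Node of a tree-structured mesh; leaves are mesh elements."""
    def __init__(self, cost=0.0, children=None):
        self.cost = cost
        self.children = children or []
        self.subtree_cost = 0.0

def total_cost(node):
    """First traversal: compute the processing cost of every subtree."""
    node.subtree_cost = node.cost + sum(total_cost(c) for c in node.children)
    return node.subtree_cost

def leaves_in_order(node):
    """Depth-first order keeps tree-adjacent (contiguous) elements together."""
    if not node.children:
        yield node
    for c in node.children:
        yield from leaves_in_order(c)

def form_partitions(root, num_parts):
    """Second traversal: greedily close a partition once it reaches an
    even share of the total processing cost."""
    target = total_cost(root) / num_parts
    parts, current, acc = [], [], 0.0
    for leaf in leaves_in_order(root):
        current.append(leaf)
        acc += leaf.cost
        if acc >= target and len(parts) < num_parts - 1:
            parts.append(current)
            current, acc = [], 0.0
    parts.append(current)
    return parts
```

Each resulting partition is a group of contiguous elements (leaves adjacent in the tree traversal), consistent with the claim mapping above.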
Claim 19
Regarding claim 19, Devine in view of Yook teaches the features of claim 18 and further teaches:
wherein the first element is on an edge of a partition of the plurality of partitions. (Devine Page 3, First Paragraph “The discontinuous Galerkin method is well suited to parallelization on massively parallel computers. The computational stencil involves only nearest-neighbor communication regardless of the degree of the piecewise polynomial approximation and the spatial dimension. Additional storage is needed for only one row of "ghost" elements along each edge of a processor's subdomain. Thus, the size of the problem scales easily with the number of processors. Indeed, for two-dimensional problems on rectangular domains with periodic boundary conditions, scaled parallel efficiencies in excess of 97% are achieved [11].” Page 10, last Paragraph – Page 11, First Paragraph “The tiling algorithm consists of ( i) a computation phase and ( ii) a balancing phase, and is designed to be independent of the application. The computation phase corresponds to the application's implementation without load balancing. Each processor operates on its local data, exchanges inter-processor boundary data, and processes the boundary data. A balancing phase restores load balance following a given number of computation phases.” See Also FIGs. 4.1 and 4.3 above – Devine teaches the first element is on an edge of a partition of the plurality of partitions.)
Claim 20
Regarding claim 20, Devine in view of Yook teaches the features of claim 19 and further teaches:
obtaining third flux data for a third element of the plurality of elements from the one or more state variables of one or more fourth elements of the plurality of elements, the third element and fourth element being in a same partition of the plurality of partitions and the third element not being an edge element of the same partition; processing the one or more state variables of the third element and the third flux data according to the model; and updating the one or more state variables of the third element according to the processing of the one or more state variables of the third element and the third flux data. (NOTE: This is merely an internal flux calculation within a partition, which is a standard operation within FEM. Devine Page 3, First Paragraph “The discontinuous Galerkin method is well suited to parallelization on massively parallel computers. The computational stencil involves only nearest-neighbor communication regardless of the degree of the piecewise polynomial approximation and the spatial dimension. Additional storage is needed for only one row of "ghost" elements along each edge of a processor's subdomain. Thus, the size of the problem scales easily with the number of processors. Indeed, for two-dimensional problems on rectangular domains with periodic boundary conditions, scaled parallel efficiencies in excess of 97% are achieved [11].” Page 10, last Paragraph – Page 11, First Paragraph “The tiling algorithm consists of (i) a computation phase and (ii) a balancing phase, and is designed to be independent of the application. The computation phase corresponds to the application's implementation without load balancing. Each processor operates on its local data, exchanges inter-processor boundary data, and processes the boundary data. 
A balancing phase restores load balance following a given number of computation phases.” Page 13, Last Paragraph “The adaptive p-refinement method produced a solution with global error 6.32205 x 10-2 , comparable to the fixed-order solution. With balancing, the maximum computation time (not including communication or balancing time) was reduced by 49.8% (see Table 4.1).” – During a calculation phase, data is determined between local elements. Edge elements must have their data communicated intermittently to other nodes to make the edges of their respective partitions concur. ALSO, THE CLAIMS READ ON THE TREE-TRAVERSAL PARTITIONING ALGORITHM Page 19, Last paragraph – Page 20, First Paragraph “The tree-traversal partitioning algorithm may easily be extended for use with a parallel adaptive environment. An initial partitioning is made using the serial algorithm described above. As the numerical solution advances in time, h- and/or p-refinement introduces a load imbalance. To obtain a new partitioning, let each processor compute its subtree costs using the serial traversal algorithm within its domain. This step requires no interprocessor communication. An inexpensive parallel prefix operation may be performed on the processor-subtree totals to obtain a global cost structure. This information enables a processor to determine where its local tree traversal is located in the global traversal.”)
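NOTE: For illustration only, the computation phase described above (local interior updates, with ghost values exchanged only at partition edges) can be sketched in one dimension. This is the examiner's own sketch, not code from Devine; the three-point stencil is an assumption standing in for the application's actual update rule.

```python
def computation_phase(values, ghost_left, ghost_right):
    """One computation phase on a 1-D partition. Interior elements use
    only neighbors held in the same partition's memory; only the two
    edge elements additionally use ghost values exchanged from the
    neighboring processors' subdomains."""
    padded = [ghost_left] + list(values) + [ghost_right]
    # A simple diffusion-like three-point stencil as the local update.
    return [padded[i] + 0.25 * (padded[i - 1] - 2 * padded[i] + padded[i + 1])
            for i in range(1, len(padded) - 1)]
```

A uniform partition with matching ghost values is unchanged by the update, while an edge element is pulled toward its ghost neighbor; elements not on the partition edge are updated purely from same-partition data, as recited in claim 20.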
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
(From Prior Office Action)
US 2007/0162914 A1 to Deen et al. (Teaches using thick edges to eschew need for communication of edge or boundary values in a distributed mesh)
US 2004/0006584 A1 to Vandeweerd (Teaches routing algorithms in distributing computing systems, including what to do when communication fails)
US 9,818,136 B1 to Hoffberg et al. (Teaches a scheduler for parallel mesh communication)
NPL: “An implicit boundary finite element method with extension to frictional sliding boundary conditions and elasto-plastic analyses” to Lu et al. (Teaches updating edges of an FEM partition using interpolation under certain circumstances)
NPL: “POSTER: GPOP: A cache and memory-efficient framework for Graph Processing Over Partitions” by Lakhotia et al. (Discusses distributed computing solutions for graph problems with differing communications at different times.)
NPL: “An unstructured-grid, finite-volume, nonhydrostatic, parallel coastal ocean simulator” by Fringer et al. (Discusses distributed computing solutions for finite-volume Navier-Stokes calculations).
NPL: “3D Euler Calculations Using Characteristic Flux Extrapolation” by Eberle (Discusses extrapolation of flux values in 3D mesh calculations).
NPL: “Spectral-element simulations of global seismic wave propagation—I. Validation” by Komatitsch et al. (Discusses distributed computation of FEM).
NPL: “Computational aspects of a scalable high-order discontinuous Galerkin atmospheric dynamical core” by Nair et al. (Discusses reducing communication costs of distributed mesh model calculations).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAY MICHAEL WHITE whose telephone number is (571)272-7073. The examiner can normally be reached Mon-Fri 11:00-7:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.W./Examiner, Art Unit 2188
/RYAN F PITARO/Supervisory Patent Examiner, Art Unit 2188