DETAILED ACTION
Status of the Application
The following is a Final Office Action. In response to Examiner's communication of November 26, 2025, Applicant, on January 29, 2026, amended claims 1, 11, 17, and 19. Claims 1-21 are now pending in this application and have been rejected below.
The present application is being examined under the pre-AIA first to invent provisions. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Amendment
Applicant's amendments are not sufficient to overcome the 35 USC 101 rejections set forth in the previous action. Therefore, these rejections are maintained below.
Applicant's amendments necessitate new grounds of rejection under 35 USC 112(a) and 35 USC 112(b). Therefore, these 35 USC 112 rejections are set forth below.
Response to Arguments - 35 USC § 101
Applicant’s arguments with respect to the 35 USC 101 rejections have been fully considered, but they are not persuasive.
Applicant argues that claims 1, 11, 17, and 19 are neither abstract nor fall within the "Mental Processes" grouping of abstract ideas because: the claim amendments delete the portions Examiner interpreted as a human performing evaluations and using judgment; the amended claims focus on technical data processing and are directed to a specific technical pipeline that cannot be performed by the human mind, including data conditioning that transforms data format for specific network circuits, machine learning extrapolation that uses ML to fill in missing data, and control signal generation that generates a machine-readable control signal; a human cannot dynamically reconfigure a digital network circuit via a control signal to prevent a processor from executing unnecessary runs; and, by stripping the scenario interpretation language, the claims are directed to an improvement in computer resource management. Examiner respectfully disagrees.
Pursuant to 2019 Revised Patent Subject Matter Eligibility Guidance, in order to determine whether a claim is directed to an abstract idea, under Step 2A, we first (1) determine whether the claims recite limitations, individually or in combination, that fall within the enumerated subject matter groupings of abstract ideas (mathematical concepts, certain methods of organizing human activity, or mental processes), and (2) determine whether any additional elements beyond the recited abstract idea, individually and as an ordered combination, integrate the judicial exception into a practical application. 84 Fed. Reg. 52, 54-55. Next, if a claim (1) recites an abstract idea and (2) does not integrate that exception into a practical application, in order to determine whether the claim recites an “inventive concept,” under Step 2B, we then determine whether any of the additional elements beyond the recited abstract idea, individually and in combination, are significantly more than the abstract idea itself. 84 Fed. Reg. 56.
That is, under Prong 1 of Step 2A, we first determine whether the claims recite limitations, individually or in combination, that fall within the abstract groupings. The generic computer components, i.e., the circuits, the machine learning, and the machine-readable control signal, that themselves implement the functions are additional elements beyond the recited abstract idea addressed under Prong 2 of Step 2A; however, the functions or actions implemented by the various circuits, the machine learning, and the machine-readable control signal are abstract as follows.
Under Prong 1 of Step 2A, claim 1, and similarly claims 2-21, “an agglomerate network that includes a plurality of agglomerate network … to generate a time-sequence … comprising: a data retrieval … to retrieve data from a data source, and, when data is not available, retrieve data from one or more of (i) another agglomerate network representing the same organization, (ii) from a comparable agglomerate network … from the same organization, or (iii) from averaging or mixture of (i) and (ii); a data conditioning … to transform a format of the retrieved data to correspond to an expected format compatible with one or more agglomerate network circuits included in the agglomerate network; a data provisioning … to transmit the retrieved and formatted data in the one or more agglomerate network … included into the agglomerate network for generating the time-sequence data; and … at least historical time-sequence data, … determine a variation of a first feature value based on one or more changes to a correlated second feature value; adjust, based on the correlated second feature value, raw data output from one of the plurality of agglomerate network … that received as input the first feature value; and extrapolate missing or incomplete data in the raw data output by mixing in aggregated data output from comparable agglomerate network …; and … generate … a feature value descriptor ... 
describing how the first feature value responds to the one or more changes to the correlated second feature value; and propagate the feature value descriptor to the one or more agglomerate network … included in the agglomerate network to dynamically reconfigure the one or more agglomerate network …, wherein the dynamically reconfiguration reduces a number of execution runs needed of the one or more agglomerate network … included in the agglomerate network for generating the time-sequence data.” Claims 1-21, in view of the claim limitations, recite the abstract idea of generating time sequence data comprising schedules of time sequences by retrieving data from the source comprising the data regarding an organization, conditioning the data regarding the organization by formatting the data for inclusion in the agglomerate network models, provisioning the retrieved and formatted data regarding the organization to generate the time sequence data, using historic time sequence data to determine a variation of a first feature value based on changes to a correlated second feature value, adjusting, based on the correlated second feature value, raw data output from one of the plurality of agglomerate network models, extrapolating missing data in the raw data output by mixing in aggregated data output from comparable agglomerate network models, generating a feature value descriptor describing how the first feature value responds to the changes to the correlated second feature value, and propagating the feature value descriptor to reconfigure the one or more agglomerate network models included in the agglomerate network to generate the time sequence data.
A claim recites a mental process when it recites concepts performed in the human mind (including an observation, evaluation, judgment, or opinion); if the claim, under its broadest reasonable interpretation, covers performance in the mind but for the recitation of generic computer components, then the claim falls within the mental process grouping. 84 Fed. Reg. 52 n.14. Here, as a whole, in view of the claim limitations, but for the computer components and systems performing the claimed functions, such as the machine learning and circuits referred to by Applicant, the broadest reasonable interpretation of the recited generating time sequence data comprising schedules of time sequences by retrieving data from the source comprising the data regarding an organization, conditioning the data regarding the organization by formatting the data for inclusion in the agglomerate network models, provisioning the retrieved and formatted data regarding the organization to generate the time sequence data, using historic time sequence data to determine a variation of a first feature value based on changes to a correlated second feature value, adjusting, based on the correlated second feature value, raw data output from one of the plurality of agglomerate network models, extrapolating missing data in the raw data output by mixing in aggregated data output from comparable agglomerate network models, generating a feature value descriptor describing how the first feature value responds to the changes to the correlated second feature value, and propagating the feature value descriptor to reconfigure the one or more agglomerate network models included in the agglomerate network to generate the time sequence data could all be reasonably interpreted as a human using judgment to generate sequence data schedules for time sequences by a human observing the source of data to retrieve the data, a human performing evaluations and using judgment to make a decision to format the data, determine a variation of feature values, adjust data output from the model, extrapolate the missing data from comparable models, generate a feature descriptor describing how the feature responds to changes, and propagate the feature description to reconfigure one of the models in the network, and a human outputting the results to update the time sequence data comprising schedules to include the formatted and retrieved data manually and/or with pen and paper. Therefore, while the claims recite a generic machine learning model and generic circuits, the claims, including the steps performed using or related to the machine learning model and the circuits, recite mental processes.
In addition, a claim recites certain methods of organizing human activity when the claim recites fundamental economic principles or practices (including hedging, insurance, mitigating risk), commercial or legal interactions (including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, business relations), or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). 84 Fed. Reg. at 52. Examiner notes, with respect to the “time-sequence” recited throughout the claims, in view of the specification at [000297], “the term schedule may include time-sequences, e.g., an ordered list or sequence of events, tasks, and/or shifts which may correspond to dates, days of the week, times of the day, etc.,” and at [000302], “[a]s used herein, a schedule may refer to work schedules where employees or other personnel and/or resources are scheduled for work or other duties at certain locations and/or times of the day,” and thus, the generating of the time sequence by retrieving data from the data regarding an organization, conditioning the data regarding the organization by formatting the data, provisioning the retrieved and formatted data regarding the organization to generate the time sequence data, using historic time sequence data to determine a variation of a first feature value based on changes to a correlated second feature value, adjusting raw data output from one of the agglomerate network models, extrapolating missing data from comparable agglomerate network models, generating a feature value descriptor, and propagating the feature value descriptor to reconfigure the agglomerate network models to generate the time sequence data are all directed to generating time-sequences comprising “an ordered list or sequence of events, tasks, and/or shifts which may correspond to dates, days …, times” included as part of schedules such as work schedules for “employees or other personnel,” which manages the personal human behavior of the events, tasks, shifts, and schedules of people, and thus, the claims recite a certain method of organizing human activity. Thus, while the claims recite generic circuits, a generic machine learning model, and a machine-readable control signal, the claims, including the steps performed using or related to the machine learning model, circuits, and machine-readable control signal, recite a certain method of organizing human activity.
Accordingly, since the claims recite mental processes and a certain method of organizing human activity, the claims recite an abstract idea under the first prong of Step 2A.
Examiner notes, Applicant’s specification explicitly states the “disclosure provide for networked, autonomous, agglomerated resource utilization modelers” ([0004]), the “agglomerate network circuit” to generate a corresponding schedule ([0005]) is “also referred to herein as a ‘scheduling circuit/module/model’” ([0303]), “[a]n agglomerate network may be a collection of various types of circuits/modules/models, as described herein, e.g., scheduler circuits, connector circuits, schedule analysis circuits, etc.” ([0304]), and “the responsive scheduler is one of a plurality of modules/models/circuits within an agglomerate network” ([0504]). The specification opens by describing that the disclosure provides network modelers; throughout the specification, the terms “circuit” and “model” are repeatedly used interchangeably; and the specification explicitly states that the circuits are also referred to as models. Accordingly, as used in the specification, the terms “agglomerate network” and “agglomerate network circuits” are, respectively, a group of models and individual models, implemented by software, for generating schedules.
Further, Examiner notes Applicant’s specification is silent with respect to the feature value descriptor comprising a machine-readable control signal. The Specification describes the “feature value descriptor” in paragraphs [0583]-[0588] as follows:
[0583] Embodiments may include feature value descriptor. In embodiments, any given agglomerate model may output a feature value that is consumed by another agglomerate model, directly, after the application of a feature space conversion, or in combination with other feature values. Feature values may be represented by any type of number, string, object, list, array, map, feature value description, feature value or combination thereof.
[0584] In embodiments, the feature value descriptor contains a feature value and/or other information describing model confidence, e.g., likelihood that the actual behavior results in the feature value output by the agglomerate model; likelihood that the actual behavior returns results better than the modeled feature value output; likelihood that the actual behavior returns results worse than the feature value; a list of outputs and likelihoods; a feature value surface describing the feature value and how the value responds to changes to a feature value input; or any combination thereof.
[0585] In an embodiment, a feature value descriptor contains one or more of a feature value for feature values that may be represented by a continuous function in the region of the feature value, a description of how the feature value responds to changes in one or more agglomerate model feature input values, where the feature input values define the conditions over which a given agglomerate model is executed, or for which the output of the agglomerate model includes an output feature value which the Feature Value Descriptor corresponds to.
[0586] In embodiments, a feature value descriptor may represent a non-continuous set of results. Alternatively, the feature value descriptor may contain a feature value that is represented by a non-continuous function.
[0587] In embodiments, a feature value descriptor may represent a set of values, which may vary along some axes non-continuously, and which may be represented along other axes by continuous functions.
[0588] In embodiments, a feature value descriptor (or descriptors) may describe the modeled or learned behavior of the feature value descriptor in response to changes in another feature value descriptor. As will be understood, in many cases, the other feature value descriptor might represent an input to the agglomerate model generating the output value, but more generically, the feature value descriptor might describe how a feature value might respond to changes in any correlated feature value.
In these portions of the Specification, the feature value descriptor is described as containing nothing more than a feature value and/or other information describing model confidence; one or more of a feature value for feature values that may be represented by a continuous function in the region of the feature value; a description of how the feature value responds to changes in one or more agglomerate model feature input values, where the feature input values define the conditions over which a given agglomerate model is executed, or for which the output of the agglomerate model includes an output feature value to which the feature value descriptor corresponds; a set of values; and a description of the modeled or learned behavior of how a feature value might respond to changes in any correlated feature value, where feature values may be represented by any type of number, string, object, list, array, map, feature value description, feature value, or combination thereof. Therefore, the recitation of the “feature value descriptor comprising a machine-readable control signal” amounts to nothing more than a generic computer component applying the abstract mental process and certain method of organizing human activity of generating the time-sequence data using the agglomerate network models.
Further, as noted above under Prong 1 of Step 2A, the steps performed using or related to the circuits, machine learning model, and machine-readable control signal recite mental processes and a certain method of organizing human activity.
Thus, using the generically recited “circuits,” “machine learning models,” and “machine-readable signal” to use historical time-sequence data to determine a variation of a first feature value based on one or more changes to a correlated second feature value, adjust, based on the correlated second feature value, raw data output from one of a plurality of agglomerate network models, extrapolate missing data in the raw data output by mixing in aggregated data output from comparable agglomerate network models, generate a feature value descriptor describing how the first feature value responds to the changes to the correlated second feature value, and propagate the feature value descriptor to the agglomerate network models included in the agglomerate network, and reciting that these elements reduce the number of execution runs used to generate time-sequence data, amount to nothing more than requiring that these steps of the mental process, which are part of and directed to the abstract idea, be performed using generic computer components, including the generically recited machine learning model, and that the models for generating schedules be implemented by software modules.
Mere automation of a manual process or a business method being applied on a general purpose computer is not sufficient to show an improvement in computers or other technology, and the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. MPEP 2106.05(a). Merely requiring that the claims use generic computer components, such as the generically recited “circuits,” “machine learning models,” and “machine-readable signal,” to implement the recited abstract idea does not make the claims directed to an improvement in technology or otherwise transform the abstract idea into a patent eligible invention. Generating time-sequence data, observing or retrieving data regarding an organization, performing evaluations and using judgment to condition data regarding an organization, providing the data to generate the time-sequence, making a decision to determine a variation of feature values, adjusting the model, extrapolating the missing data from comparable models, generating a feature descriptor describing how the feature responds to changes, and propagating the feature description to one of the models in the network are mental processes and certain methods of organizing human activity, and thus, implementing these with software modules or circuits, machine learning models, and a machine-readable signal amounts to nothing more than requiring that the abstract idea be implemented with generic computer components, which is not sufficient to integrate an abstract idea into a practical application.
As noted in the MPEP, "an improvement in the abstract idea itself (e.g. a recited fundamental economic concept) is not an improvement in technology." MPEP 2106.05(a). The recitations requiring that the “feature value descriptor compris[e] a machine-readable control signal” and that “the dynamic reconfiguration reduces a number of execution runs needed of the one or more agglomerate network circuits included in the agglomerate network for generating the time-sequence data” are not an improvement in computer technology because, as noted above, the “agglomerate network circuits” are models, implemented by software, that are used to generate schedules, the “machine-readable control signal” amounts to nothing more than applying the abstract idea with a generic computer component, and the steps performed by these circuits to generate the schedules can be performed mentally; thus, the limitations amount to no more than a reduction in steps of the mental process, which is an abstract idea. Similarly, the alleged improvement of “reduces a number of execution runs,” in view of the Specification, improves the scheduling of events for employees, and thus, is not an improvement in computer technology, but rather is directed to a certain method of organizing human activity, which is an abstract idea. Therefore, because these alleged improvements are in the abstract idea itself, the claims do not recite an improvement in technology. See MPEP 2106.05(a).
Like in Electric Power Group, the claims are not focused on a specific improvement in computers, but on certain independently abstract ideas that simply use computers as tools. Electric Power Group, LLC v. Alstom S.A., et al., No. 2015-1778, slip op. at 8 (Fed. Cir. Aug. 1, 2016); MPEP 2106.05(a).
Applicant argues the claims recite additional elements that integrate the judicial exception into a practical application because the claims include an additional element that reflects an improvement in the functioning of a computer, namely a technical improvement (reducing execution runs), wherein the amended claims explicitly recite that the propagation of the feature value descriptor is used to "dynamically reconfigure the one or more agglomerate network circuits" such that the reconfiguration "reduces a number of execution runs needed," and because the claims removed the limitations directed to "organizing human activity" in that the removed limitations managed "schedules of people." Examiner respectfully disagrees.
Examiner notes, Applicant’s specification explicitly states the “disclosure provide for networked, autonomous, agglomerated resource utilization modelers” ([0004]), the “agglomerate network circuit” to generate a corresponding schedule ([0005]) is “also referred to herein as a ‘scheduling circuit/module/model’” ([0303]), “[a]n agglomerate network may be a collection of various types of circuits/modules/models, as described herein, e.g., scheduler circuits, connector circuits, schedule analysis circuits, etc.” ([0304]), and “the responsive scheduler is one of a plurality of modules/models/circuits within an agglomerate network” ([0504]). The specification opens by describing that the disclosure provides network modelers; throughout the specification, the terms “circuit” and “model” are repeatedly used interchangeably; and the specification explicitly states that the circuits are also referred to as models. Accordingly, as used in the specification, the terms “agglomerate network” and “agglomerate network circuits” are, respectively, a group of models and individual models, implemented by software, for generating schedules. Further, as noted above under Prong 1 of Step 2A, the steps performed using or related to the machine learning model recite mental processes and a certain method of organizing human activity.
Thus, using the generically recited “circuits,” “machine learning models,” and “machine-readable signal” to use historical time-sequence data to determine a variation of a first feature value based on one or more changes to a correlated second feature value, adjust, based on the correlated second feature value, raw data output from one of a plurality of agglomerate network models, extrapolate missing data in the raw data output by mixing in aggregated data output from comparable agglomerate network models, generate a feature value descriptor describing how the first feature value responds to the changes to the correlated second feature value, and propagate the feature value descriptor to the agglomerate network models included in the agglomerate network, and reciting that these elements reduce the number of execution runs used to generate time-sequence data, amount to nothing more than requiring that these steps of the mental process, which are part of and directed to the abstract idea, be performed using generic computer components, including the generically recited machine learning model, and that the models for generating schedules be implemented by circuits or software modules.
Further, in addition to reciting an abstract idea because the claims recite a mental process, the claims still nonetheless recite a “certain method of organizing human activity” because, as discussed above, in view of the discussion of the term “time sequence” in the specification at [000297] and [000302], the recited generating of the time sequence by retrieving data from the data regarding an organization, conditioning the data regarding the organization by formatting the data, provisioning the retrieved and formatted data regarding the organization to generate the time sequence data, using historic time sequence data to determine a variation of a first feature value based on changes to a correlated second feature value, adjusting raw data output from one of the agglomerate network models, extrapolating missing data from comparable agglomerate network models, generating a feature value descriptor, and propagating the feature value descriptor to reconfigure the agglomerate network models to generate the time sequence data are all directed to generating time-sequences comprising “an ordered list or sequence of events, tasks, and/or shifts which may correspond to dates, days …, times” included as part of schedules such as work schedules for “employees or other personnel,” which manages the personal human behavior of the events, tasks, shifts, and schedules of people, and thus, the claims still recite a certain method of organizing human activity.
Mere automation of a manual process or a business method being applied on a general purpose computer is not sufficient to show an improvement in computers or other technology, and the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. MPEP 2106.05(a). Merely requiring that the claims use generic computer components, such as the generically recited circuits and the machine learning model, to implement the recited abstract idea does not make the claims directed to an improvement in technology or otherwise transform the abstract idea into a patent eligible invention. Performing evaluations and using judgment to make a decision to determine a variation of feature values, adjust the model, extrapolate the missing data from comparable models, generate a feature descriptor describing how the feature responds to changes, and propagate the feature description to one of the models in the network to generate time-sequence data are mental processes and certain methods of organizing human activity, and thus, implementing these with software modules or circuits, machine learning models, and a machine-readable signal amounts to nothing more than requiring that the abstract idea be implemented with generic computer components, which is not sufficient to integrate an abstract idea into a practical application.
As noted in the MPEP, "an improvement in the abstract idea itself (e.g. a recited fundamental economic concept) is not an improvement in technology." MPEP 2106.05(a). The recitation requiring that the “dynamic reconfiguration reduces a number of execution runs needed of the one or more agglomerate network circuits included in the agglomerate network for generating the time-sequence data” is not an improvement in computer technology because, as noted above, the “agglomerate network circuits” are models, implemented by software, for generating schedules, and the steps performed by these circuits to generate the schedules can be performed mentally; thus, the limitation amounts to no more than a reduction in steps of the mental process, which is an abstract idea. Similarly, the alleged improvement that the “dynamic reconfiguration reduces a number of execution runs needed of the one or more agglomerate network circuits included in the agglomerate network for generating the time-sequence data,” in view of the Specification, improves the scheduling of events for employees, and thus, is not an improvement in computer technology, but rather is directed to a certain method of organizing human activity, which is an abstract idea. Therefore, because these alleged improvements are in the abstract idea itself, the claims do not recite an improvement in technology. See MPEP 2106.05(a).
Like in Electric Power Group, the claims are not focused on a specific improvement in computers, but on certain independently abstract ideas that simply use computers as tools. Electric Power Group, LLC v. Alstom S.A., et al., No. 2015-1778, slip op. at 8 (Fed. Cir. Aug. 1, 2016); MPEP 2106.05(a).
Under the second prong of Step 2A, the claims recite the additional elements beyond the recited abstract idea of “[a] hierarchical feature propagator (HFP) apparatus,” “circuit,” “at least one machine learning model trained on,” “wherein the at least one machine learning model is structured to,” “the HFP apparatus is structured to,” “via the at least one machine learning model,” and “machine-readable control signal” in claim 1, and similarly in claims 2-21; however, individually and when viewed as an ordered combination, and pursuant to the broadest reasonable interpretation, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea on a computer (i.e., apply it), and thus, these elements amount to no more than applying the abstract idea with generic computer components. Further, these elements merely generally link the abstract idea to a field of use.
Applicant argues that “even if the claims were deemed to be directed to an abstract idea, they include an inventive concept that amounts to significantly more than the abstract idea itself because 1. The Claims Recite the Exact Combination Found Novel by the Examiner: In the withdrawal of the rejection under 35 U.S.C. § 103, the Examiner explicitly found that the prior art (Zarakas, Eder, Garber) failed to teach the specific backend combination retained in the current claims, …. 2. Deletion of Prior Art Elements Clarifies the Inventive Concept: … By deleting these "old" elements, the amended claims now consist almost entirely of the novel combination identified above (ML extrapolation + Control Signal + Dynamic Reconfiguration). 3. Conclusion on Step 2B: Because the remaining claim limitations (the "something more") are admittedly not present in the prior art, they cannot be considered "well-understood, routine, and conventional" activity. Using a machine learning model to extrapolate missing data and generate a control signal to reconfigure circuits for efficiency is a specific, non-conventional technological solution. Therefore, the claims satisfy Step 2B.” Examiner respectfully disagrees.
As discussed above with respect to Prong Two of Step 2A, under Step 2B we determine whether any of the additional elements beyond the recited abstract idea amount to significantly more than the abstract idea itself.
The search for an inventive concept under § 101 is distinct from a demonstration of novelty and non-obviousness. See SAP America, Inc. v. InvestPic, LLC, No. 2017-2081, slip op. at 2-3 (Fed. Cir. May 15, 2018) (citing Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016); Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1315 (Fed. Cir. 2016)). Even novel and newly discovered judicial exceptions are still exceptions, despite their novelty. July 2015 Update, p. 3; see SAP America at 2. Simply reciting specific limitations that narrow the abstract idea does not make an abstract idea non-abstract. 79 Fed. Reg. 74631; buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014); see SAP America at 12. As discussed in SAP America, no matter how much of an advance the claims recite, when “the advance lies entirely in the realm of abstract ideas, with no plausibly alleged innovation in the non-abstract application realm,” “[a]n advance of that nature is ineligible for patenting.” Id. at 3. In Step 2B, “[w]hat is needed is an inventive concept in the non-abstract application realm.” Id. at 11.
The remaining functions referred to by Applicant and implemented by the machine learning model, the various circuits, and the machine-readable control signal, namely, using historical time-sequence data to determine a variation of a first feature value based on changes to a correlated second feature value; adjusting, based on the correlated second feature value, raw data output from one of a plurality of agglomerate network models; extrapolating missing raw data by mixing in aggregated data output from comparable agglomerate network models; generating a feature value descriptor describing how the first feature value responds to the changes to the correlated second feature value; and propagating the feature value descriptor to dynamically reconfigure the agglomerate network models included in the agglomerate network, are not additional elements beyond the recited abstract idea. Rather, for the reasons detailed above, these functions are abstract mental processes and certain methods of organizing human activity that are part of and directed to the recited abstract idea. Implementing these abstract mental processes and certain methods of organizing human activity with the generic computer components of the generic machine learning model, the various circuits, and the machine-readable control signal amounts to nothing more than applying the abstract idea with generic computer components, which is not sufficient to be significantly more than an abstract idea.
As noted above, mere automation of a manual process or a business method being applied on a general purpose computer is not sufficient to show an improvement in computers or other technology, and the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. MPEP 2106.05(a). Merely requiring that the claims use generic computer components, such as the generically recited machine learning model, various circuits, and machine-readable control signal, to implement the recited abstract idea does not make the claims directed to an improvement in technology or otherwise transform the abstract idea into a patent eligible invention.
As noted in the MPEP, "an improvement in the abstract idea itself (e.g. a recited fundamental economic concept) is not an improvement in technology." MPEP 2106.05(a). The recitations requiring “a feature value descriptor comprising a machine-readable control signal” and that the “dynamic reconfiguration reduces a number of execution runs needed of the one or more agglomerate network circuits included in the agglomerate network for generating the time-sequence data” are not an improvement in computer technology because, as noted above, the “agglomerate network circuits” are models, implemented by software, for generating schedules, and the steps performed by these circuits to generate the schedules can be performed mentally; thus, the limitation amounts to no more than a reduction in the steps of a mental process, which is an abstract idea. Similarly, the dynamic reconfiguration of the agglomerate network circuits included in the agglomerate network for generating the time-sequence data, in view of the Specification, is directed to scheduling events and adjusting the schedule of events for employees, and thus, the problem addressed by the claims is not necessarily rooted in computer technology, but rather is directed to a certain method of organizing human activity, which is an abstract idea. Therefore, because these alleged improvements are in the abstract idea itself, the claims do not recite an improvement in technology. See MPEP 2106.05(a).
Like in Electric Power Group, the claims are not focused on a specific improvement in computers, but on certain independently abstract ideas that simply use computers as tools. Electric Power Group, LLC v. Alstom S.A., et al., No. 2015-1778, slip op. at 8 (Fed. Cir. Aug. 1, 2016); MPEP 2106.05(a).
Under Step 2B, as noted above, the aforementioned additional elements beyond the recited abstract idea, as an ordered combination, are no more than mere instructions to implement the idea using generic computer components (i.e., apply it), and further, generally link the abstract idea to a field of use; therefore, the additional elements are not sufficient to amount to significantly more than an abstract idea.
Claim Rejections - 35 USC § 112, First Paragraph
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-21 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1, and similarly claims 11, 17, & 19, recites “generate, via the at least one machine learning model, a feature value descriptor comprising a machine-readable control signal describing how the first feature value responds to the one or more changes to the correlated second feature value.” However, Applicant’s specification does not expressly or inherently disclose that the feature value descriptor comprises a machine-readable control signal, as the claims require.
In order to satisfy the written description requirement, each claim limitation must be expressly or inherently supported by the disclosure. MPEP 2163 (emphasis added). “The 'written description' requirement implements the principle that a patent must describe the technology that is sought to be patented; the requirement serves both to satisfy the inventor's obligation to disclose the technologic knowledge upon which the patent is based, and to demonstrate that the patentee was in possession of the invention that is claimed.” Capon v. Eshhar, 76 USPQ2d 1078, 1084 (Fed. Cir. 2005). Further, the written description requirement promotes the progress of the useful arts by ensuring that patentees adequately describe their inventions in their patent specifications in exchange for the right to exclude others from practicing the invention for the duration of the patent's term. See MPEP 2163 (emphasis added).
For claims directed toward computer-implemented functions, like the presently claimed invention, “[i]f the specification does not provide a disclosure of the computer and algorithm in sufficient detail to demonstrate to one of ordinary skill in the art that the inventor possessed the invention including how to program the disclosed computer to perform the claimed function, a rejection under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for lack of written description must be made.” MPEP 2161.01 (emphasis added). It is not enough that one skilled in the art could write a program to achieve the claimed function because the written description requirement requires that the specification explains how the inventor intends to achieve the claimed function. Examining Claims for Compliance with 35 USC 112(a) - PowerPoint of Computer Based Training, Slides 20 & 21, (emphasis added) available at http://www.uspto.gov/sites/default/files/documents/uspto_112a_part1_17aug2015.pptx. The ability of one skilled in the art to make and use the claimed invention does not satisfy the written description requirement if details of how the function is to be performed are not disclosed. Id. at Slide 20.
With respect to the recitation of “generate, via the at least one machine learning model, a feature value descriptor comprising a machine-readable control signal describing how the first feature value responds to the one or more changes to the correlated second feature value,” nothing in the Specification expressly or inherently requires generating a feature value descriptor that comprises a machine-readable control signal.
Applicant’s specification is silent with respect to the feature value descriptor comprising a machine-readable control signal. The Specification describes the “feature value descriptor” in paragraphs [0583]-[0588] as follows:
[0583] Embodiments may include feature value descriptor. In embodiments, any given agglomerate model may output a feature value that is consumed by another agglomerate model, directly, after the application of a feature space conversion, or in combination with other feature values. Feature values may be represented by any type of number, string, object, list, array, map, feature value description, feature value or combination thereof.
[0584] In embodiments, the feature value descriptor contains a feature value and/or other information describing model confidence, e.g., likelihood that the actual behavior results in the feature value output by the agglomerate model; likelihood that the actual behavior returns results better than the modeled feature value output; likelihood that the actual behavior returns results worse than the feature value; a list of outputs and likelihoods; a feature value surface describing the feature value and how the value responds to changes to a feature value input; or any combination thereof.
[0585] In an embodiment, a feature value descriptor contains one or more of a feature value for feature values that may be represented by a continuous function in the region of the feature value, a description of how the feature value responds to changes in one or more agglomerate model feature input values, where the feature input values define the conditions over which a given agglomerate model is executed, or for which the output of the agglomerate model includes an output feature value which the Feature Value Descriptor corresponds to.
[0586] In embodiments, a feature value descriptor may represent a non-continuous set of results. Alternatively, the feature value descriptor may contain a feature value that is represented by a non-continuous function.
[0587] In embodiments, a feature value descriptor may represent a set of values, which may vary along some axes non-continuously, and which may be represented along other axes by continuous functions.
[0588] In embodiments, a feature value descriptor (or descriptors) may describe the modeled or learned behavior of the feature value descriptor in response to changes in another feature value descriptor. As will be understood, in many cases, the other feature value descriptor might represent an input to the agglomerate model generating the output value, but more generically, the feature value descriptor might describe how a feature value might respond to changes in any correlated feature value.
In these portions of the Specification, there is a discussion that the feature value descriptor contains a feature value and/or other information describing model confidence; that it contains one or more of a feature value for feature values that may be represented by a continuous function in the region of the feature value, or a description of how the feature value responds to changes in one or more agglomerate model feature input values, where the feature input values define the conditions over which a given agglomerate model is executed or for which the output of the agglomerate model includes the corresponding output feature value; that it may represent a set of values; that it may describe the modeled or learned behavior of how a feature value might respond to changes in any correlated feature value; and that feature values may be represented by any type of number, string, object, list, array, map, feature value description, feature value, or combination thereof. Yet nothing in these portions describing what the feature value descriptor is, what it describes, what it contains, and how a feature value can be represented can be characterized as a machine-readable control signal, as required by the claims.
For the reasons set forth above, although the Specification discusses generating the feature value descriptor, what it is, what it describes, what it contains, and how a feature value can be represented, the Specification does not expressly or inherently support generating a feature value descriptor that comprises a machine-readable control signal, as required by the claims.
Claims 2-10 & 21 depend on claim 1 and do not cure the aforementioned deficiencies, and thus, these claims are rejected for the reasons set forth above.
Claims 12-16 depend on claim 11 and do not cure the aforementioned deficiencies, and thus, these claims are rejected for the reasons set forth above.
Claim 18 depends on claim 17 and does not cure the aforementioned deficiencies, and thus, this claim is rejected for the reasons set forth above.
Claim 20 depends on claim 19 and does not cure the aforementioned deficiencies, and thus, this claim is rejected for the reasons set forth above.
Claim Rejections - 35 USC § 112, Second Paragraph
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1, 11, 17, & 19 recite the limitation "the same organization.” There is insufficient antecedent basis for this limitation in the claims.
Claims 2-10 & 21 depend on claim 1 and do not cure the aforementioned deficiencies, and thus, these claims are rejected for the reasons set forth above.
Claims 12-16 depend on claim 11 and do not cure the aforementioned deficiencies, and thus, these claims are rejected for the reasons set forth above.
Claim 18 depends on claim 17 and do not cure the aforementioned deficiencies, and thus, this claim is rejected for the reasons set forth above.
Claim 20 depends on claim 19 and do not cure the aforementioned deficiencies, and thus, this claim is rejected for the reasons set forth above.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Under Prong 1 of Step 2A, claim 1, and similarly claims 2-21, recites “an agglomerate network that includes a plurality of agglomerate network … to generate a time-sequence … comprising: a data retrieval … to retrieve data from a data source, and, when data is not available, retrieve data from one or more of (i) another agglomerate network representing the same organization, (ii) from a comparable agglomerate network … from the same organization, or (iii) from averaging or mixture of (i) and (ii); a data conditioning … to transform a format of the retrieved data to correspond to an expected format compatible with one or more agglomerate network circuits included in the agglomerate network; a data provisioning … to transmit the retrieved and formatted data in the one or more agglomerate network … included into the agglomerate network for generating the time-sequence data; and … at least historical time-sequence data, … determine a variation of a first feature value based on one or more changes to a correlated second feature value; adjust, based on the correlated second feature value, raw data output from one of the plurality of agglomerate network … that received as input the first feature value; and extrapolate missing or incomplete data in the raw data output by mixing in aggregated data output from comparable agglomerate network …; and … generate … a feature value descriptor ... 
describing how the first feature value responds to the one or more changes to the correlated second feature value; and propagate the feature value descriptor to the one or more agglomerate network … included in the agglomerate network to dynamically reconfigure the one or more agglomerate network …, wherein the dynamically reconfiguration reduces a number of execution runs needed of the one or more agglomerate network … included in the agglomerate network for generating the time-sequence data.” Claims 1-21, in view of the claim limitations, recite the abstract idea of generating time sequence data comprising schedules of time sequences by retrieving data regarding an organization from the data source, conditioning the data regarding the organization by formatting the data for inclusion in the agglomerate network models, provisioning the retrieved and formatted data regarding the organization to generate the time sequence data, using historic time sequence data to determine a variation of a first feature value based on changes to a correlated second feature value, adjusting, based on the correlated second feature value, raw data output from one of the plurality of agglomerate network models, extrapolating missing data in the raw data output by mixing in aggregated data output from comparable agglomerate network models, generating a feature value descriptor describing how the first feature value responds to the changes to the correlated second feature value, and propagating the feature value descriptor to reconfigure the one or more agglomerate network models included in the agglomerate network to generate the time sequence data.
As a whole, in view of the claim limitations, but for the computer components and systems performing the claimed functions, the broadest reasonable interpretation of the recited generating of time sequence data comprising schedules of time sequences by retrieving data regarding an organization from the data source, conditioning the data regarding the organization by formatting the data for inclusion in the agglomerate network models, provisioning the retrieved and formatted data regarding the organization to generate the time sequence data, using historic time sequence data to determine a variation of a first feature value based on changes to a correlated second feature value, adjusting, based on the correlated second feature value, raw data output from one of the plurality of agglomerate network models, extrapolating missing data in the raw data output by mixing in aggregated data output from comparable agglomerate network models, generating a feature value descriptor describing how the first feature value responds to the changes to the correlated second feature value, and propagating the feature value descriptor to reconfigure the one or more agglomerate network models included in the agglomerate network to generate the time sequence data could all be reasonably interpreted as a human using judgment to generate time sequence data schedules by a human observing the source of data to retrieve the data; a human performing evaluations and using judgment to make a decision to format the data, determine a variation of feature values, adjust data output from the model, extrapolate the missing data from comparable models, generate a feature value descriptor describing how the feature responds to changes, and propagate the feature value descriptor to reconfigure one of the models in the network; and a human outputting the results to update the time sequence data comprising schedules to include the formatted and retrieved data manually and/or with a pen and paper; 
therefore, the claims recite mental processes. In addition, in view of the specification at [000297], “the term schedule may include time-sequences, e.g., an ordered list or sequence of events, tasks, and/or shifts which may correspond to dates, days of the week, times of the day, etc.,” and at [000302], “[a]s used herein, a schedule may refer to work schedules where employees or other personnel and/or resources are scheduled for work or other duties at certain locations and/or times of the day,” the generating of the time sequence data by retrieving the data regarding an organization, conditioning the data regarding the organization by formatting the data, provisioning the retrieved and formatted data regarding the organization to generate the time sequence data, using historic time sequence data to determine a variation of a first feature value based on changes to a correlated second feature value, adjusting raw data output from one of the agglomerate network models, extrapolating missing data from comparable agglomerate network models, generating a feature value descriptor, and propagating the feature value descriptor to reconfigure the agglomerate network models to generate the time sequence data are all directed to generating time-sequences of “an ordered list or sequence of events, tasks, and/or shifts which may correspond to dates, days of the week, times of the day” and work schedules for employees, which manages the personal human behavior of the events, tasks, shifts, and schedules of people, and thus, the claims recite a certain method of organizing human activity. 
Further, with respect to the dependent claims, aside from the additional elements beyond the recited abstract idea addressed below under the second prong of Step 2A and under Step 2B, the limitations of dependent claims 2-10, 12-16, 18, 20, & 21 recite similar further abstract limitations to those discussed above that narrow the abstract idea recited in the independent claims because, aside from the generic computer components and systems performing the claimed functions, the limitations of these claims recite mental processes that can be practically performed mentally by observing, evaluating, and judging information mentally and/or with a pen and paper. Accordingly, since the claims recite mental processes, the claims recite an abstract idea under the first prong of Step 2A.
This judicial exception is not integrated into a practical application under the second prong of Step 2A. In particular, the claims recite the additional elements beyond the recited abstract idea of “[a] hierarchical feature propagator (HFP) apparatus,” “circuit,” “at least one machine learning model trained on,” “wherein the at least one machine learning model is structured to,” “the HFP apparatus is structured to,” “via the at least one machine learning model,” and “machine-readable control signal” in claim 1, and similarly in claims 2-10; “[a] method … circuits … comprising,” “via a … circuit, by the processor,” “at least one machine learning model trained on,” “via the at least one machine learning model,” and “machine-readable control signal” in claim 11, and similarly in claims 12-16; “[a] non-transitory computer-readable medium storing instructions that adapt at least one processor … circuits, the instructions causing the at least one processor to,” “at least one machine learning model trained on,” “via the at least one machine learning model,” and “machine-readable control signal” in claim 17, and similarly in claim 18; “[a] … network … comprising,” “circuits,” “at least one machine learning model trained on,” “via the at least one machine learning model,” and “machine-readable control signal” in claim 19, and similarly in claim 20; and “via a blockchain” in claim 21; however, individually and when viewed as an ordered combination, and pursuant to the broadest reasonable interpretation, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea on a computer (i.e., apply it), and thus, the additional elements are no more than applying the abstract idea with generic computer components. Further, these elements generally link the abstract idea to a field of use. 
Moreover, aside from the aforementioned additional elements, the remaining elements of dependent claims 2-10, 12-16, 18, 20, & 21 do not integrate the abstract idea into a practical application because these claims merely recite further limitations that provide no more than simply narrowing the recited abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B. As noted above, the aforementioned additional elements beyond the recited abstract idea, as an ordered combination, are no more than mere instructions to implement the idea using generic computer components (i.e., apply it), and further, generally link the abstract idea to a field of use; therefore, the additional elements are not sufficient to amount to significantly more than an abstract idea. Additionally, these recitations, as an ordered combination, simply append the abstract idea to recitations of generic computer structure performing generic computer functions that are well-understood, routine, and conventional in the field, as evinced by Applicant’s specification at [0333], [0821] (describing that the HFP 220100 may structure the agglomerate network 220110 to use high-level or generic modules/circuits and that the methods and/or processes described above, and steps thereof, may be realized in hardware, program code, instructions, and/or programs or any combination of hardware and methods, program code, instructions, and/or programs suitable for a particular application). Furthermore, as an ordered combination, these elements amount to generic computer components performing repetitive calculations, receiving or transmitting data over a network, electronic record keeping, storing and retrieving information in memory, and presenting offers, which, as held by the courts, are well-understood, routine, and conventional. See MPEP 2106.05(d); July 2015 Update, p. 7. 
Moreover, aside from the aforementioned additional elements, the remaining elements of dependent claims 2-10, 12-16, 18, 20, & 21 do not transform the recited abstract idea into a patent eligible invention because these claims merely recite further limitations that provide no more than simply narrowing the recited abstract idea.
Looking at these limitations as an ordered combination adds nothing additional that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use a generic arrangement of generic computer components and recitations of generic computer structure that perform well-understood, routine, and conventional computer functions that are used to “apply” the recited abstract idea. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claims as a whole amount to significantly more than the abstract idea itself. Since there are no limitations in these claims that transform the exception into a patent eligible application such that these claims amount to significantly more than the exception itself, claims 1-21 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Allowable Subject Matter
While claims 1-21 are rejected pursuant to 35 USC 101 and 35 USC 112, these claims would be potentially allowable if amended to overcome the 101 and 112 rejections, since claims 1-21 are novel and non-obvious under 35 USC 102 and 35 USC 103.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES A GUILIANO whose telephone number is (571)272-9859. The examiner can normally be reached Mon-Fri 10:00 am - 6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao Wu can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
CHARLES GUILIANO
Primary Examiner
Art Unit 3623
/CHARLES GUILIANO/Primary Examiner, Art Unit 3623