Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-20 are presented for examination in this application, No. 17/850,744, filed 2022-06-27, with an effective filing date of 2022-06-27.
The Examiner cites particular sections in the references as applied to the claims
below for the convenience of the applicant(s). Although the specified citations are
representative of the teachings in the art and are applied to the specific limitations within
the individual claim, other passages and figures may apply as well. It is respectfully
requested that, in preparing responses, the applicant(s) fully consider the references in
their entirety as potentially teaching all or part of the claimed invention, as well as the
context of the passage as taught by the prior art or disclosed by the Examiner.
Response to Arguments
Applicant’s arguments and remarks filed 2026-01-02 have been fully considered. The arguments and remarks regarding the 35 U.S.C. 101 rejections are persuasive. The arguments and remarks regarding the 35 U.S.C. 102 rejections are persuasive; however, the amendments have necessitated a change in the references applied, resulting in new grounds of rejection. The 35 U.S.C. 103 rejections are maintained via new grounds of rejection.
35 U.S.C. 102
Applicant’s response:
Applicant asserts “Amended independent claims 1 recites, in part, "reconstructing, by the data aggregator and using a first inference model that is a twin of a second twin inference model used by the data collector to implement the data reduction plan, data upon which the reduced size data is based using the feature relationship inference model to obtain a representation of the data having error within the acceptable error thresholds." Amended independent claims 10 and 16 include similar limitations. Elkabetz does not disclose, at least, these limitations of the amended independent claims. The above limitations require, at least, a data collector that uses an inference model to implement the data reduction plan. In the Office Action, it is alleged that Elkabetz discloses external weather sensors that read on the recited data collector. However, Elkabetz's external weather sensors do not use inference models to implement data reduction plans as required by the above recitations of the amended independent claims. Therefore, the recited data collector and first inference model that is a twin of the second inference model are distinguishable from Elkabetz. Therefore, Elkabetz does not disclose, at least, the above claim limitations and cannot support an anticipation rejection of the amended independent claims. The dependent claims are patentable Elkabetz for similar reasons.”.
Examiner’s response:
Arguments regarding the amended limitations are considered but are moot in view of new grounds of rejection.
35 U.S.C. 103
Applicant’s response:
Applicant asserts “It is not alleged, much less demonstrated, that Jia, Kulkarni, or Moloney show or suggest any of the above noted limitations of the amended independent claims. Therefore, Jia, Kulkarni, and Moloney, like Elkabetz, are silent with respect to at least the above claim limitations of the amended independent claims. It logically follows that the combination of Elkabetz, Jia, Kulkarni, and Moloney does not show or suggest, at least, the above limitations of the amended independent claims. Therefore, Elkabetz, Jia, Kulkarni, and Moloney cannot support an obviousness rejection for failing to show or suggest all of the limitations of amended independent claims. The dependent claims are patentable for similar reasons.”.
Examiner’s response:
Examiner respectfully disagrees. The Examiner finds that Moloney, at least, teaches a twin neural network, specifically a Siamese network, that utilizes the data reduction plan of the machine learning system. The data reduction plan includes reduction methods that eliminate the need for large, multi-class datasets and reduce the need for training on multiple classes, as described in para [0057]. In addition, Moloney teaches the use of data aggregators to collect data from sensors. Under the broadest reasonable interpretation, in light of the specification, the Examiner finds that it would have been obvious to combine Elkabetz and Moloney such that Siamese networks are used to implement the data reduction plans of Elkabetz.
Information Disclosure Statement
Acknowledgement is made of the information disclosure statements filed on 2025-11-14 and 2025-12-31. The patent documents were fully considered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 5-8, 10-12, 15-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Elkabetz et al. (WO2019126707A1, hereinafter referred to as Elkabetz) in view of Moloney et al. (DE112019002589T5, hereinafter referred to as Moloney).
Regarding claim 1 (currently amended):
Elkabetz teaches a method for managing data collection in a distributed system where data is collected in a data aggregator of the distributed system and from a data collector of the distributed system that is operably connected to the data aggregator via a communication system, (see para [00311]: “The weather sensor data collection program (513) receives weather sensor data from one or more weather sensor data sources (346), which includes providers of weather-related collected data measured or otherwise gathered by an external weather sensor. External weather sensors can include weather sensors such as, for example, those listed in Tables 8 and 9. Exemplary weather sensor data sources include ground weather stations, aggregators of ground weather station data, connected vehicle fleet data collection and management systems, commercial airlines, and any other aggregator and/or supplier of data from external weather sensors.” Also see para [00174]: “Computer systems can be implemented using virtual or physical deployments, or by using a combination of these means. In some implementations, the servers may be physically located together, or they may be distributed in remote locations, such as in shared hosting facilities or in virtualized facilities (e.g.“the cloud”)”.) the method comprising:
obtaining, by the data aggregator, a data set for the data collector (see para [00458]: “One important optimization by the cadence manager is the determination on whether a specific cadence instance can reuse some or all of prior collected data and forecasts or whether it is more efficient to fully process and calculate each element of the cadence instance. Accordingly, the cadence manager implements one of two mechanisms for creating and updating cadence instances, depending upon the current status of cadence instance(s) that have been generated in the past and the current data collection state.”.);
obtaining, by the data aggregator and using the data set, a feature relationship inference model comprising trained neural networks adapted to generate inferences for features of the data set (see para [00206]: “Cadence instances may include information generated by internal processes, such as machine learning models that are generated by the modelling and prediction server, and are then used to process collected data and produce new tile layers of data based, at least in part, upon the predictions. For example, For example, trained machine learning model, for example a neural network, may be used to calculate a probability of a forecast event being rain, mist, or fog based upon past historical weather data combined with current collected and forecast generated data”);
selecting, by the data aggregator and using the feature relationship inference model, a data reduction plan based on acceptable error thresholds associated with the features (see [00448]: “The ML model validation module (679) retrieves a trained ML model from the system database (320), retrieves evaluation data (i.e. testing and validation data) from the ML training data store, and performs testing and validation operations using the trained model and the retrieved testing and validation data. In some exemplary embodiments, the ML validation module generates a quality metric, e.g., a model accuracy or performance metric such as variance, mean standard error, receiver operating characteristic (ROC) curve, or precision-recall (PR) curve, associated with the trained ML model. For example, the ML model validation model generates the quality metric by executing the model and comparing predictions generated by the model to observed outcomes.”.);
configuring, by the data aggregator, the data collector to send reduced size data based on the data reduction plan (see para [00458]: “The cadence manager is responsible for optimizing run-time resource utilization during the processing of cadence instances. One important optimization by the cadence manager is the determination on whether a specific cadence instance can reuse some or all of prior collected data and forecasts or whether it is more efficient to fully process and calculate each element of the cadence instance. Accordingly, the cadence manager implements one of two mechanisms for creating and updating cadence instances, depending upon the current status of cadence instance(s) that have been generated in the past and the current data collection state.”. );
obtaining, by the data aggregator, the reduced size data from the configured data collector (see para [00463]: “If the cadence manager selects the option to copy and update a prior forecast tile layer, the cadence manager often still has to run any missing processing programs in order to complete a new cadence instance. Even though the missing processing programs and forecast cycles have to be run, the compute and time savings of these “shortcut” approaches significantly reduces the amount of computing cycles requires to produce the next forecast cycle, and substantially reduces the amount of time required as well. In some cases, the time savings may exceed 75, 80, 85, 90, 95, or even 98%, resulting in corresponding forecast calculation times (assuming a 10 minute forecast cycle) of 2.5 min, 2.0 min.1.5 min, 1.0 min, 30 sec, or 15 sec respectively.”. Also see claim 48: “where the calculation of a specific cadence forecast is performed based upon the newly collected data and only the portions of the forecast effected by the newly collected data are updated.”.); and
reconstructing, by the data aggregator, data upon which the reduced size data is based using the feature relationship inference model to obtain a representation of the data having error within the acceptable error thresholds (see para [00542]: “In an exemplary embodiment, the fog inference program includes an expert systems module (948) that retrieves one or more fog time series rule ML models from a system database (320) and implements the one or more fog time series rule ML models to process input data, for a current (Mi) fog LWC tile layer (2048) and one or more previous cadence instance fog LWC tile layers from fog inference data database (998), to produce output data including time series based fog inference decisions. The fog inference program (919) uses the time series rule ML models to confirm a preliminary fog inference and determine a confirmed fog inference (or in the absence of a confirmation, refute the preliminary fog inference). In an embodiment, the fog inference program increases a confidence indication associated with each confirmed fog inference, for example a medium confidence indication or a numerical value representing a medium confidence.”. Also see para [00160]: “Furthermore, the described systems supports efficiency optimizations in managing forecasts, and for deriving a second forecast from a first forecast without recalculating the entire forecast. These optimizations can improve the forecast calculation times by up to 95%, reducing a 10 minute forecast cycle to under 30 seconds. This improvement permits near real-time forecast generation.”.).
Elkabetz does not explicitly teach using a first inference model that is a twin of a second inference model used by the data collector to implement the data reduction plan.
Moloney, however, analogously teaches using a first inference model that is a twin of a second inference model used by the data collector to implement the data reduction plan (see para [0057]: “In some implementations, a machine learning system such as the example in Fig. 14 may be provided, which can simulate the ability to classify object categories from a few training examples. Such a system can also eliminate the need to create large, multi-class datasets to effectively train the corresponding machine learning model. You can also select the machine learning model that does not require training for multiple classes. The machine learning model can be used to recognize an object (for example, a product, a person, an animal, or another object) by feeding the model a single image of that object for the system along with a comparison image. If the comparison image is not recognized by the system, the machine learning model (for example, a Siamese network) is used to determine that the objects do not match..”. Also see para [0058]: “In some implementations, a Siamese network may be used as the machine learning model, which has been trained using the synthetic training data introduced, for example, in the examples above.”)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Elkabetz and Moloney before him or her, to modify the method of Elkabetz to include a twin inference model to implement a data reduction plan in order to evaluate the similarity of outputs and make comparisons between them (see Moloney at para [0058]: “A comparison block (e.g., 1620) may be provided to evaluate the similarity of the outputs of the two identical networks and compare the determined degree of similarity with a threshold.”).
Regarding claim 2:
Elkabetz in view of Moloney teaches the method of claim 1.
Elkabetz further teaches wherein the feature relationship inference model comprises a trained neural network (see para [00206]: “Cadence instances may include information generated by internal processes, such as machine learning models that are generated by the modelling and prediction server, and are then used to process collected data and produce new tile layers of data based, at least in part, upon the predictions. For example, For example, trained machine learning model, for example a neural network, may be used to calculate a probability of a forecast event being rain, mist, or fog based upon past historical weather data combined with current collected and forecast generated data.”.)
Regarding claim 17:
Claim 17 recites analogous limitations to claim 2 and is therefore rejected on the same grounds.
Regarding claim 5:
Elkabetz in view of Moloney teaches the method of claim 1.
Elkabetz further teaches wherein the data reduction plan indicates: a first subset of the features that are to be indicated by the reduced size data and a second subset of the features that are not to be indicated by the reduced size data (see para [00206]: “Cadence instances may include information generated by internal processes, such as machine learning models that are generated by the modelling and prediction server, and are then used to process collected data and produce new tile layers of data based, at least in part, upon the predictions. For example, For example, trained machine learning model, for example a neural network, may be used to calculate a probability of a forecast event being rain, mist, or fog based upon past historical weather data combined with current collected and forecast generated data. In an exemplary embodiment, the probability of rain, mist, or fog may be used by future processing steps, such as NowCasting and/or RVR calculations to determine the contribution of rain and/or fog to the precipitation and visibility forecasts.”. Also see [00208]: “The system may also enforce an interstitial delay between cadence cycles if desired. Cadence cycle timing may vary based upon weather or upon the results of one or more previous cadence cycle processing steps. For example, cadence cycle length and collection interval length may be increased during clear weather and decreased during stormy weather.”.);
a quantization level for the first subset of the features (see para [00414]: “o Q - a given quantization level in the specific pro forma satellite-to-earth station or satellite link segment”.);
and a window duration that defines when the reduced size data is to be provided by the configured data collector to the data aggregator (see para [00454]: “The server also implements one or more data management (e.g. applying transforms, tile layer compares, copying, and blending), weather inference, and weather forecast programs. These programs are used create aspects of the forecast data for the system. Generally, the modeling and prediction server programs retrieve data of the forecast program required type and time window from the system database (320) and from other sources of data provided by the system.”. Also see Table 2.).
Regarding claim 20:
Claim 20 recites analogous limitations to claim 5 and is therefore rejected on the same grounds.
Regarding claim 6:
Elkabetz in view of Moloney teaches the method of claim 5.
Elkabetz further teaches wherein the reduced size data comprises: representations of the first set of the features for a period of time defined by the window duration (see para [00454]: “The server also implements one or more data management (e.g. applying transforms, tile layer compares, copying, and blending), weather inference, and weather forecast programs. These programs are used create aspects of the forecast data for the system. Generally, the modeling and prediction server programs retrieve data of the forecast program required type and time window from the system database (320) and from other sources of data provided by the system.”. Also see Table 2.),
the representations excluding portions of respective features based on a corresponding acceptable error threshold of the acceptable error thresholds (see [00542]: “In an exemplary embodiment, the fog inference program includes an expert systems module (948) that retrieves one or more fog time series rule ML models from a system database (320) and implements the one or more fog time series rule ML models to process input data, for a current (Mi) fog LWC tile layer (2048) and one or more previous cadence instance fog LWC tile layers from fog inference data database (998), to produce output data including time series based fog inference decisions. The fog inference program (919) uses the time series rule ML models to confirm a preliminary fog inference and determine a confirmed fog inference (or in the absence of a confirmation, refute the preliminary fog inference). In an embodiment, the fog inference program increases a confidence indication associated with each confirmed fog inference, for example a medium confidence indication or a numerical value representing a medium confidence.”. Also see para [00160]: “Furthermore, the described systems supports efficiency optimizations in managing forecasts, and for deriving a second forecast from a first forecast without recalculating the entire forecast. These optimizations can improve the forecast calculation times by up to 95%, reducing a 10 minute forecast cycle to under 30 seconds. This improvement permits near real- time forecast generation.”.).
Regarding claim 7:
Elkabetz in view of Moloney teaches the method of claim 6.
Elkabetz further teaches providing the configured data collector with a copy of the feature relationship inference model (see para [00466]: “Once a determination is made by the cadence manager (905) to perform a partial calculation and update within a cadence instance, several steps occur. First the portions of the tile layers to be copied to the cadence instance along with their corresponding prior tile layers selected from one or more of prior collected data tile layers, processed collected data tile layers, forecast tile layers, forecast post-processing tile layers, and weather product tile layers. The identified prior tile layers are propagated by copying to the current cadence instance.”. Also see paras [00492]-[00494]: “ If the cadence manager selects the option to copy and update a prior forecast tile layer, the cadence manager often still has to run one or more processing programs in order to complete some of the tile layers of the new cadence instance. For example, if a forecast is copied from once cadence cycle to another, the copied forecast will need is last forecast cycle run to complete the new forecast. [00493] Process and programs run by the cadence manager [00494] The cadence manager has a number of cadence specific programs that may be performed at specific times in the cadence cycle to create and manage the cadence data structures. These programs include collection, post-collection, pre-forecast processing programs as described herein. ”.) and
initiating refinement of the data reduction of the data reduction plan by the configured data collector using the feature relationship inference model and measurements obtained by the configured data collector during the window duration, wherein at least one of the representations represents a feature of the second subset of the features (see para [00204]: “Processed data comprises data that has been previously associated with a cadence instance and has been further processed by one or more data processing programs of the system, with results of that processing stored in a system database. Processed data may include additional refinements to collected data, derivation of additional information from collected or forecast generated data, or data that is calculated by other systems and associated with one or more cadence instances.”)
Regarding claim 8:
Elkabetz in view of Moloney teaches the method of claim 7.
Elkabetz further teaches wherein the data reduction plan is refined sequentially for data corresponding to respective window durations (see para [00204]: “Processed data comprises data that has been previously associated with a cadence instance and has been further processed by one or more data processing programs of the system, with results of that processing stored in a system database. Processed data may include additional refinements to collected data, derivation of additional information from collected or forecast generated data, or data that is calculated by other systems and associated with one or more cadence instances.”. Also see para [00311]: “In this way, the weather sensor data collection program filters out collected data points that are not required for further processing and aggregates large volumes of collected data points into statistical representations. These data set size reductions significantly reduce the amount of calculations required in subsequent forecasting by permitting forecast optimizations to be used.”)
Regarding claim 10:
Elkabetz in view of Moloney teaches the method of claim 1.
Elkabetz further teaches wherein the configured data collector is intermittently operably connected to the data aggregator by the communication system (see para [00173]: “Accordingly, some data inputs are received and processed in real time, e.g. processing begins as soon as the data is received and pre-processed (e.g., formatted), while other data inputs are received and/or processed on a scheduled routine (e.g., stored for formatting and processing on a time delayed or batch mode processing schedule).”. Also see para [00224]: “The time at which the cadence index was incremented is called the cadence timestamp, and is used in the forecasting process to adjust for collection delays.”. Also see para [00225]: “Once a cadence instance has all of its collected data fully processed and written to the databases, one or more collection post-processing programs are executed by one or more processors of the system. These post-processing programs are sequenced by the system configuration so that data dependencies are honored and parallel processing pipelines are correctly configured. Specifically, the system does not start the execution of collection post-processing programs until the data that they need becomes available from other processes (collection processed or other post-processing processes). As each piece of data is made available, the system starts any programs that can be run against that data.”)
Regarding claim 11 (currently amended):
Elkabetz teaches a non-transitory machine-readable medium having instructions stored therein, which when executed by a processor, cause the processor to perform operations (see para [00175]: “Stored within persistent memories of the system may be one or more databases used for the storage of information collected and/or calculated by the servers and read, processed, and written by the processors under control of the program(s).”. Also see para [00176]: “Persistent memories may include disk, PROM, EEPROM, flash storage, and similar technologies”.) [Examiner note: the persistent memories of Elkabetz are consistent with the instant case’s disclosure of CSRM as stated at instant case’s para [00118]: “Computer-readable storage medium 509 may also be used to store some software functionalities described above persistently.” and at para [00123]: “A non-transitory machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory devices).”.]
managing data collection in a distributed system where data is collected in a data aggregator of the distributed system and from a data collector of the distributed system that is operably connected to the data aggregator via a communication system, (see para [00311]: “The weather sensor data collection program (513) receives weather sensor data from one or more weather sensor data sources (346), which includes providers of weather-related collected data measured or otherwise gathered by an external weather sensor. External weather sensors can include weather sensors such as, for example, those listed in Tables 8 and 9. Exemplary weather sensor data sources include ground weather stations, aggregators of ground weather station data, connected vehicle fleet data collection and management systems, commercial airlines, and any other aggregator and/or supplier of data from external weather sensors.” Also see para [00174]: “Computer systems can be implemented using virtual or physical deployments, or by using a combination of these means. In some implementations, the servers may be physically located together, or they may be distributed in remote locations, such as in shared hosting facilities or in virtualized facilities (e.g.“the cloud”)”.) the operations comprising:
obtaining, by the data aggregator, a data set for the data collector (see para [00458]: “One important optimization by the cadence manager is the determination on whether a specific cadence instance can reuse some or all of prior collected data and forecasts or whether it is more efficient to fully process and calculate each element of the cadence instance. Accordingly, the cadence manager implements one of two mechanisms for creating and updating cadence instances, depending upon the current status of cadence instance(s) that have been generated in the past and the current data collection state.”);
obtaining, by the data aggregator and using the data set, a feature relationship inference model comprising trained neural networks adapted to generate inferences for features of the data set (see para [00206]: “Cadence instances may include information generated by internal processes, such as machine learning models that are generated by the modelling and prediction server, and are then used to process collected data and produce new tile layers of data based, at least in part, upon the predictions. For example, For example, trained machine learning model, for example a neural network, may be used to calculate a probability of a forecast event being rain, mist, or fog based upon past historical weather data combined with current collected and forecast generated data.”.);
selecting, by the data aggregator and using the feature relationship inference model, a data reduction plan based on acceptable error thresholds associated with the features (see [00448]: “The ML model validation module (679) retrieves a trained ML model from the system database (320), retrieves evaluation data (i.e. testing and validation data) from the ML training data store, and performs testing and validation operations using the trained model and the retrieved testing and validation data. In some exemplary embodiments, the ML validation module generates a quality metric, e.g., a model accuracy or performance metric such as variance, mean standard error, receiver operating characteristic (ROC) curve, or precision-recall (PR) curve, associated with the trained ML model. For example, the ML model validation model generates the quality metric by executing the model and comparing predictions generated by the model to observed outcomes.”.);
configuring, by the data aggregator, the data collector to send reduced size data based on the data reduction plan (see para [00458]: “The cadence manager is responsible for optimizing run-time resource utilization during the processing of cadence instances. One important optimization by the cadence manager is the determination on whether a specific cadence instance can reuse some or all of prior collected data and forecasts or whether it is more efficient to fully process and calculate each element of the cadence instance. Accordingly, the cadence manager implements one of two mechanisms for creating and updating cadence instances, depending upon the current status of cadence instance(s) that have been generated in the past and the current data collection state.”. );
obtaining, by the data aggregator, the reduced size data from the configured data collector (see para [00463]: “If the cadence manager selects the option to copy and update a prior forecast tile layer, the cadence manager often still has to run any missing processing programs in order to complete a new cadence instance. Even though the missing processing programs and forecast cycles have to be run, the compute and time savings of these “shortcut” approaches significantly reduces the amount of computing cycles requires to produce the next forecast cycle, and substantially reduces the amount of time required as well. In some cases, the time savings may exceed 75, 80, 85, 90, 95, or even 98%, resulting in corresponding forecast calculation times (assuming a 10 minute forecast cycle) of 2.5 min, 2.0 min.1.5 min, 1.0 min, 30 sec, or 15 sec respectively.”. Also see claim 48: “where the calculation of a specific cadence forecast is performed based upon the newly collected data and only the portions of the forecast effected by the newly collected data are updated.”.); and
reconstructing, by the data aggregator, and using a first inference model that is a twin of a second inference model used by the data collector to implement the data reduction plan, data upon which the reduced size data is based using the feature relationship inference model to obtain a representation of the data having error within the acceptable error thresholds (see para [00542]: “In an exemplary embodiment, the fog inference program includes an expert systems module (948) that retrieves one or more fog time series rule ML models from a system database (320) and implements the one or more fog time series rule ML models to process input data, for a current (Mi) fog LWC tile layer (2048) and one or more previous cadence instance fog LWC tile layers from fog inference data database (998), to produce output data including time series based fog inference decisions. The fog inference program (919) uses the time series rule ML models to confirm a preliminary fog inference and determine a confirmed fog inference (or in the absence of a confirmation, refute the preliminary fog inference). In an embodiment, the fog inference program increases a confidence indication associated with each confirmed fog inference, for example a medium confidence indication or a numerical value representing a medium confidence.”. Also see para [00160]: “Furthermore, the described systems supports efficiency optimizations in managing forecasts, and for deriving a second forecast from a first forecast without recalculating the entire forecast. These optimizations can improve the forecast calculation times by up to 95%, reducing a 10 minute forecast cycle to under 30 seconds. This improvement permits near real-time forecast generation.”.).
Elkabetz does not explicitly teach using a first inference model that is a twin of a second inference model used by the data collector to implement the data reduction plan.
Moloney, however, analogously teaches using a first inference model that is a twin of a second inference model used by the data collector to implement the data reduction plan (see para [0057]: “In some implementations, a machine learning system such as the example in Fig. 14 may be provided, which can simulate the ability to classify object categories from a few training examples. Such a system can also eliminate the need to create large, multi-class datasets to effectively train the corresponding machine learning model. You can also select the machine learning model that does not require training for multiple classes. The machine learning model can be used to recognize an object (for example, a product, a person, an animal, or another object) by feeding the model a single image of that object for the system along with a comparison image. If the comparison image is not recognized by the system, the machine learning model (for example, a Siamese network) is used to determine that the objects do not match.”. Also see para [0058]: “In some implementations, a Siamese network may be used as the machine learning model, which has been trained using the synthetic training data introduced, for example, in the examples above.”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Elkabetz and Moloney before him or her, to modify the non-transitory machine-readable medium of claim 11 to include attributes of a twin inference model to implement a data reduction plan in order to evaluate similarity of outputs and make comparisons between them (see Moloney at para [0058]: “A comparison block (e.g., 1620) may be provided to evaluate the similarity of the outputs of the two identical networks and compare the determined degree of similarity with a threshold.”).
Regarding claim 12:
Elkabetz in view of Moloney teaches the non-transitory machine-readable medium of claim 11.
Elkabetz further teaches wherein the feature relationship inference model comprises a neural network (see para [00206]: “Cadence instances may include information generated by internal processes, such as machine learning models that are generated by the modelling and prediction server, and are then used to process collected data and produce new tile layers of data based, at least in part, upon the predictions. For example, trained machine learning model, for example a neural network, may be used to calculate a probability of a forecast event being rain, mist, or fog based upon past historical weather data combined with current collected and forecast generated data.”.)
Regarding claim 15:
Elkabetz in view of Moloney teaches the non-transitory machine-readable medium of claim 11.
Elkabetz further teaches wherein the data reduction plan indicates: a first subset of the features that are to be indicated by the reduced size data and a second subset of the features that are not to be indicated by the reduced size data (see para [00206]: “Cadence instances may include information generated by internal processes, such as machine learning models that are generated by the modelling and prediction server, and are then used to process collected data and produce new tile layers of data based, at least in part, upon the predictions. For example, trained machine learning model, for example a neural network, may be used to calculate a probability of a forecast event being rain, mist, or fog based upon past historical weather data combined with current collected and forecast generated data. In an exemplary embodiment, the probability of rain, mist, or fog may be used by future processing steps, such as NowCasting and/or RVR calculations to determine the contribution of rain and/or fog to the precipitation and visibility forecasts.” Also see [00208]: “The system may also enforce an interstitial delay between cadence cycles if desired. Cadence cycle timing may vary based upon weather or upon the results of one or more previous cadence cycle processing steps. For example, cadence cycle length and collection interval length may be increased during clear weather and decreased during stormy weather.”);
a quantization level for the first subset of the features (see para [00454]: “The server also implements one or more data management (e.g. applying transforms, tile layer compares, copying, and blending), weather inference, and weather forecast programs. These programs are used create aspects of the forecast data for the system. Generally, the modeling and prediction server programs retrieve data of the forecast program required type and time window from the system database (320) and from other sources of data provided by the system.”. Also see table 2.); and
a window duration that defines when the reduced size data is to be provided by the configured data collector to the data aggregator (see para [00454]: “The server also implements one or more data management (e.g. applying transforms, tile layer compares, copying, and blending), weather inference, and weather forecast programs. These programs are used create aspects of the forecast data for the system. Generally, the modeling and prediction server programs retrieve data of the forecast program required type and time window from the system database (320) and from other sources of data provided by the system.”. Also see table 2.).
Regarding claim 16:
Elkabetz teaches a data aggregator (see para [00311]: “Exemplary weather sensor data sources include ground weather stations, aggregators of ground weather station data, connected vehicle fleet data collection and management systems, commercial airlines, and any other aggregator and/or supplier of data from external weather sensors.”.) comprising:
a processor (see para [00318]: “Processing is carried out by one or more processors (605) running specialized programs as described below.”); and
a memory coupled to the processor to store instructions, which when executed by the processor, cause the processor to perform operations (see para [00318]: “ Processing is carried out by one or more processors (605) running specialized programs as described below. The programs are stored in or executed in transient or persistent memory (600).”) for
obtaining, by the data aggregator, a data set for the data collector (see para [00458]: “One important optimization by the cadence manager is the determination on whether a specific cadence instance can reuse some or all of prior collected data and forecasts or whether it is more efficient to fully process and calculate each element of the cadence instance. Accordingly, the cadence manager implements one of two mechanisms for creating and updating cadence instances, depending upon the current status of cadence instance(s) that have been generated in the past and the current data collection state.”);
obtaining, by the data aggregator and using the data set, a feature relationship inference model comprising trained neural networks adapted to generate inferences for features of the data set (see para [00206]: “Cadence instances may include information generated by internal processes, such as machine learning models that are generated by the modelling and prediction server, and are then used to process collected data and produce new tile layers of data based, at least in part, upon the predictions. For example, trained machine learning model, for example a neural network, may be used to calculate a probability of a forecast event being rain, mist, or fog based upon past historical weather data combined with current collected and forecast generated data”);
selecting, by the data aggregator and using the feature relationship inference model, a data reduction plan based on acceptable error thresholds associated with the features (see [00448]: “The ML model validation module (679) retrieves a trained ML model from the system database (320), retrieves evaluation data (i.e. testing and validation data) from the ML training data store, and performs testing and validation operations using the trained model and the retrieved testing and validation data. In some exemplary embodiments, the ML validation module generates a quality metric, e.g., a model accuracy or performance metric such as variance, mean standard error, receiver operating characteristic (ROC) curve, or precision-recall (PR) curve, associated with the trained ML model. For example, the ML model validation model generates the quality metric by executing the model and comparing predictions generated by the model to observed outcomes.”);
configuring, by the data aggregator, the data collector to send reduced size data based on the data reduction plan (see para [00458]: “The cadence manager is responsible for optimizing run-time resource utilization during the processing of cadence instances. One important optimization by the cadence manager is the determination on whether a specific cadence instance can reuse some or all of prior collected data and forecasts or whether it is more efficient to fully process and calculate each element of the cadence instance. Accordingly, the cadence manager implements one of two mechanisms for creating and updating cadence instances, depending upon the current status of cadence instance(s) that have been generated in the past and the current data collection state.”. );
obtaining, by the data aggregator, reduced size data from the configured data collector (see para [00463]: “If the cadence manager selects the option to copy and update a prior forecast tile layer, the cadence manager often still has to run any missing processing programs in order to complete a new cadence instance. Even though the missing processing programs and forecast cycles have to be run, the compute and time savings of these “shortcut” approaches significantly reduces the amount of computing cycles requires to produce the next forecast cycle, and substantially reduces the amount of time required as well. In some cases, the time savings may exceed 75, 80, 85, 90, 95, or even 98%, resulting in corresponding forecast calculation times (assuming a 10 minute forecast cycle) of 2.5 min, 2.0 min.1.5 min, 1.0 min, 30 sec, or 15 sec respectively.”. Also see claim 48: “where the calculation of a specific cadence forecast is performed based upon the newly collected data and only the portions of the forecast effected by the newly collected data are updated.”.); and
reconstructing, by the data aggregator, data upon which the reduced size data is based using the feature relationship inference model to obtain a representation of the data having error within the acceptable error thresholds (see para [00542]: “In an exemplary embodiment, the fog inference program includes an expert systems module (948) that retrieves one or more fog time series rule ML models from a system database (320) and implements the one or more fog time series rule ML models to process input data, for a current (Mi) fog LWC tile layer (2048) and one or more previous cadence instance fog LWC tile layers from fog inference data database (998), to produce output data including time series based fog inference decisions. The fog inference program (919) uses the time series rule ML models to confirm a preliminary fog inference and determine a confirmed fog inference (or in the absence of a confirmation, refute the preliminary fog inference). In an embodiment, the fog inference program increases a confidence indication associated with each confirmed fog inference, for example a medium confidence indication or a numerical value representing a medium confidence.”. Also see para [00160]: “Furthermore, the described systems supports efficiency optimizations in managing forecasts, and for deriving a second forecast from a first forecast without recalculating the entire forecast. These optimizations can improve the forecast calculation times by up to 95%, reducing a 10 minute forecast cycle to under 30 seconds. This improvement permits near real-time forecast generation.”.).
Elkabetz does not explicitly teach using a first inference model that is a twin of a second inference model used by the data collector to implement the data reduction plan.
Moloney, however, analogously teaches using a first inference model that is a twin of a second inference model used by the data collector to implement the data reduction plan (see para [0057]: “In some implementations, a machine learning system such as the example in Fig. 14 may be provided, which can simulate the ability to classify object categories from a few training examples. Such a system can also eliminate the need to create large, multi-class datasets to effectively train the corresponding machine learning model. You can also select the machine learning model that does not require training for multiple classes. The machine learning model can be used to recognize an object (for example, a product, a person, an animal, or another object) by feeding the model a single image of that object for the system along with a comparison image. If the comparison image is not recognized by the system, the machine learning model (for example, a Siamese network) is used to determine that the objects do not match.”. Also see para [0058]: “In some implementations, a Siamese network may be used as the machine learning model, which has been trained using the synthetic training data introduced, for example, in the examples above.”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Elkabetz and Moloney before him or her, to modify the system of claim 16 to include attributes of a twin inference model to implement a data reduction plan in order to evaluate similarity of outputs and make comparisons between them (see Moloney at para [0058]: “A comparison block (e.g., 1620) may be provided to evaluate the similarity of the outputs of the two identical networks and compare the determined degree of similarity with a threshold.”).
Claims 3, 4, 13, 14, 18, and 19 are rejected under 35 U.S.C 103 as being unpatentable over Elkabetz et al. (WO2019126707A1 hereinafter referred to as Elkabetz) in view of Jia et al. (US20240414073A1 hereinafter referred to as Jia) in further view of Kulkarni et al. (US20230259740A1 hereinafter referred to as Kulkarni) and in further view of Moloney et al. (DE112019002589T5 hereinafter referred to as Moloney).
Regarding claim 3:
Elkabetz in view of Moloney teaches the method of claim 2.
Elkabetz in view of Moloney does not explicitly teach wherein the trained neural network comprises hidden layers of nodes adapted to predict a first feature of the features based on a second feature of the features or the trained neural network being trained with a self-supervised learning process.
Jia, however, analogously teaches a trained neural network adapted to generate predictions based on collected data (see para [0033]: “Optionally, after S204, the first communication device may also send the collected data to the second communication device. In this way, the second communication device may train the AI model based on the received data. The AI model may be deployed on the second communication device side. It can be understood that data collection performed by the first communication device may be a continuous process. In this way, the AI model may continuously predict the target communication service, and may also continuously perform iterative training based on the data collected by the first communication device.”.)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Elkabetz, Moloney, and Jia before him or her, to modify the method of claim 3 to include attributes of iterative training in order to improve prediction accuracy and communication system performance (see Jia at para [0033]: “This is helpful to improve subsequent prediction accuracy and improve communication system performance.”.).
Elkabetz in view of Moloney in further view of Jia does not teach the use of hidden layers in their implementations of neural networks.
Kulkarni, however, analogously teaches hidden layers in an implementation of a neural network (see para [0020]: “For example, the model feature extractor (102) may include an input layer and one or more hidden layers. The feature extraction reformats, combines, and transforms input into a new set of features”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Elkabetz, Moloney, Jia, and Kulkarni before him or her, to modify the method of claim 3 to include hidden layers in order to allow for a subset of neural network layers (see Kulkarni at para [0021]: “For the neural network model, the model feature aggregator includes a second subset of neural network layers. For example, the model feature aggregator may include one or more hidden layers and the output layer.”).
Elkabetz in view of Jia in further view of Kulkarni does not explicitly teach the trained neural network being trained with a self-supervised learning process.
Moloney, however, analogously teaches the trained neural network being trained with a self-supervised learning process (see para [0065]: “In some implementations, self-supervised learning can be performed on a machine learning model in a training phase, such as in the example above in Fig. 19.”.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Elkabetz, Jia, Kulkarni, and Moloney before him or her, to modify the method of claim 3 to include attributes of the trained neural network being trained with a self-supervised learning process in order to allow for datasets that contain unlabeled data (see Moloney at para [0065]: “In such an example, it is not necessary to have datasets with labeled soil survey data.”).
Regarding claims 13 and 18:
Claims 13 and 18 recite analogous limitations to claim 3 and therefore are rejected on the same grounds as claim 3.
Regarding claim 4:
Elkabetz in view of Moloney in further view of Jia and further in view of Kulkarni teaches the method of claim 3.
Elkabetz further teaches wherein the first feature comprises a first type of measurement data and the second feature comprises a second type of measurement data different from the first type of measurement data (see para [00496]: “The encoding and image feature comparison aspects of this technique may encode and compare only those tile layers and parts of tile layers that are specified for comparison by the cadence manager as contextually relevant. For example, by encoding only specific tiles from a tile layer, a region (part) of a tile layer, a complete tile layer, a set of tile layers, or a combination of specific regions of a specified set of tile layers, specific features of weather data may be exposed such as shapes of rain fields at specified precipitation rates, weather features such as convective storm cells, frontal boundaries, wind fields, and the like. Image feature analysis may be used to identify these weather features, which can then be stored in a weather objects database (350) and tracked across a plurality of these generated images”).
Regarding claims 14 and 19:
Claims 14 and 19 recite analogous limitations to claim 4 and therefore are rejected on the same grounds as claim 4.
Claim 9 is rejected under 35 U.S.C 103 as being unpatentable over Elkabetz et al. (WO2019126707A1 hereinafter referred to as Elkabetz) in view of Moloney et al. (DE112019002589T5 hereinafter referred to as Moloney) in further view of Chen et al. (US20230332976A1 hereinafter referred to as Chen).
Regarding claim 9:
Elkabetz in view of Moloney teaches the method of claim 1.
Elkabetz further teaches quantization of features of the data set (see para [00414]: “Q - a given quantization level in the specific pro forma satellite-to-earth station or satellite link segment.”.);
predictability of the features of the data set with the feature relationship inference model (see para [00138]: “Figure 20 depicts an illustrative modeling and prediction server of the described system, according to an illustrative embodiment.”. Also see para [00139]: “Figure 21 illustrates an exemplary process flowchart for an exemplary cadence instance recalculation and propagation method for updating the forecast stacks to reflect newly collected data, according to an illustrative embodiment.”.);
reconstructability of the features of the data set using twin inference models hosted by the configured data collector and the configured data aggregator (see para [00466]: “Once a determination is made by the cadence manager (905) to perform a partial calculation and update within a cadence instance, several steps occur. First the portions of the tile layers to be copied to the cadence instance along with their corresponding prior tile layers selected from one or more of prior collected data tile layers, processed collected data tile layers, forecast tile layers, forecast post-processing tile layers, and weather product tile layers. The identified prior tile layers are propagated by copying to the current cadence instance.”. Also see paras [00492]-[00494]: “If the cadence manager selects the option to copy and update a prior forecast tile layer, the cadence manager often still has to run one or more processing programs in order to complete some of the tile layers of the new cadence instance. For example, if a forecast is copied from once cadence cycle to another, the copied forecast will need is last forecast cycle run to complete the new forecast. [00493] Process and programs run by the cadence manager [00494] The cadence manager has a number of cadence specific programs that may be performed at specific times in the cadence cycle to create and manage the cadence data structures. These programs include collection, post-collection, pre-forecast processing programs as described herein.”.) [(Examiner’s note: emphasis added. The portions of the cadence instance could be taken by a person having ordinary skill in the art, using broadest reasonable interpretation in light of the specification, to signify a twin, or copy, of the previous cadence instance. With both cadence instances, the current ML model is able to be constructed, or reconstructed, given the copy of data.)]; and
computing resource costs for transmitting the features of the data set from the configured data collector to the data aggregator (see para [00457]: “The cadence manager determines which programs are to be processed next as part of a cadence instance, when they are to be processed, and in some embodiments, determines which resources are used by those programs (e.g. which processor a specific program is executed by)”. Also see para [00458]: “The cadence manager is responsible for optimizing run-time resource utilization during the processing of cadence instances.”.).
Elkabetz does not explicitly teach wherein the data reduction plan is obtained using a genetic algorithm or the use of an objective function.
Chen, however, analogously teaches wherein the data reduction plan is obtained using a genetic algorithm (see para [0311]: “FIG. 24 illustrates a flowchart for an evolutionary computation based optimization procedure with use of the genetic algorithm (GA), according to some embodiments.”.) and
an objective function (see para [0312]: “At each generation, the fitness of each individual can be evaluated based on the user-defined objective function, and an updated population of solutions can be created by using genetic operators such as ranking, selection, crossover and mutation. This evolutionary computation approach can eliminate the need to calculate the first derivative and/or the second derivative (as done in some optimization methods) and is suitable to solve complex optimization problems.”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Elkabetz, Moloney, and Chen before him or her, to modify the method of claim 9 to include attributes of genetic algorithms and objective functions in order to solve complex optimization problems, such as data reduction (see Chen at para [0312]: “This evolutionary computation approach can eliminate the need to calculate the first derivative and/or the second derivative (as done in some optimization methods) and is suitable to solve complex optimization problems.”.).
Pertinent Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
“Two level data aggregation protocol for prolonging lifetime of periodic sensor networks” — Al-Qurabat et al. — discloses reducing data with quantization and sliding window techniques within the context of a data aggregator
“Not Every Bit Counts: Data-Centric Resource Allocation for Correlated Data Gathering in Machine-to-Machine Wireless Networks” — Hsieh et al. — discloses using quantization techniques to reduce data within the context of a data aggregator
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew A Bracero whose telephone number is (571) 270-0592. The examiner can normally be reached Monday - Friday 9:00a.m. - 5:00 p.m. ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi can be reached Monday - Friday 9:00a.m. - 5:00 p.m. ET at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW BRACERO/Examiner, Art Unit 2126
/DAVID YI/Supervisory Patent Examiner, Art Unit 2126