DETAILED ACTION
This action is in response to the filing on 12/29/2025. Claims 1-20 are pending and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over SUN et al. (US 2021/0010351 A1), hereinafter Sun, in view of Ouyang et al. (US 2024/0212857 A1), hereinafter Ouyang.
Regarding claim 1, Sun teaches A system comprising: one or more processors, coupled to memory, configured to perform operations including: (According to at least one aspect, an example system for forecasting well productivity accurately and efficiently, in real-time, using deep learning neural networks is provided. The system can include at least one processor coupled with at least one computer-readable storage medium having stored therein instructions which, when executed by the at least one processor, causes the system to [see Sun, para. 21]):
segmenting a time series range into a first segment for an instance of time, wherein the first segment is associated with a first value for a target feature and a first timestamp for the first value (Sun discloses segmenting a time series of a week into 7 days, with each day being a separate segment; segments 1 to 6 correspond to Monday through Saturday, and each segment has all of the input data and well constraints for that day as well as a timestamp being the day itself [see Sun, para. 71 and FIG. 7]. For the first segment, segment 6, corresponding to Saturday, can be chosen; it is associated with a first value for a target feature and a first timestamp [see Sun, para. 71]);
segmenting the time series range into an input segment associated with a plurality of input features and a segment timestamp less than or equal to the first timestamp (Sun discloses segmenting a time series of a week into 7 days, with each day being a separate segment; segments 1 to 6 correspond to Monday through Saturday, and each segment has all of the input data and well constraints for that day as well as a timestamp being the day itself [see Sun, para. 71 and FIG. 7]. For the input segment, any one or more of segments 1 to 5 can be chosen, which corresponds to a plurality of input features and a timestamp less than the first timestamp of segment 6 [see Sun, para. 71]);
generating a model trained with input comprising values for the target feature and timestamps (The model can be generated using a many-to-many time series model architecture. Input features from one or more time stamps or periods of time (i.e., an input sequence) can be used to forecast an output response at multiple time stamps or periods of time in the future. [see Sun, para. 43]) less than or equal to the segment timestamp (Sun discloses generating a trained model with one or more time stamps or periods of time [see Sun, para. 43]; for the model to be used on segment 6 of FIG. 7, it must have been trained prior to or during segments 1-6 [see Sun, para. 71 and FIG. 7], which include segments 1-5 corresponding to the segment timestamp. Thus, the trained model is generated with timestamps less than or equal to the segment timestamp).
However, Sun fails to teach identifying, based on the model, one or more input features each having a respective impact satisfying an impact threshold on the first value for the target feature; and generating an explanation of the first value for the target feature based on the one or more identified input features.
In the same field of endeavor, Ouyang teaches:
identifying, based on the model, one or more input features each having a respective impact satisfying an impact threshold on the first value for the target feature (Ouyang discloses identifying important features that may have a weightage greater than a threshold [see Ouyang, para. 58-59]);
generating an explanation of the first value for the target feature based on the one or more identified input features (Ouyang discloses using a model interpretable explanation generator to provide indications explaining each individual prediction of the model, for example, providing an indication of one or more features that contributed [see Ouyang, para. 57]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate identifying, based on the model, one or more input features each having a respective impact satisfying an impact threshold on the first value for the target feature, and generating an explanation of the first value for the target feature based on the one or more identified input features, as suggested in Ouyang, into Sun because both methods are directed to machine learning on time series data [see Sun, Abstract; see Ouyang, Abstract and para. 69]. Incorporating the teaching of Ouyang into Sun would provide indications explaining each individual prediction of the neural network model and would explain how the model made a determination [see Ouyang, para. 57].
Regarding claim 2, the combination of Sun and Ouyang as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein identifying the one or more input features includes generating a plurality of impact metrics associated with the one or more input features, the impact metrics being based on the model, the first value, and values of the one or more input features (Ouyang discloses identifying important features that contribute to the prediction by determining weightages for the features [see Ouyang, para. 58-59]).
Regarding claim 3, the combination of Sun and Ouyang as applied in claim 2 above teaches all the limitations of claim 2 and further teaches:
wherein generating the explanation of the first value for the target feature includes transforming a representation of at least one of the one or more input features based on at least one of the impact metrics (Ouyang suggests an explanation output as part of the user interface which includes highlighting features that contributed to the estimate [see Ouyang, para. 71 and FIGs. 11A-11B] and identifying important features [see Ouyang, para. 58-59]. Thus, it would have been obvious to highlight important features (i.e., transform their representation in the user interface based on the features being important)).
Regarding claim 4, the combination of Sun and Ouyang as applied in claim 3 above teaches all the limitations of claim 3 and further teaches:
wherein generating the explanation of the first value for the target feature includes generating at least one user interface presentation including transformed representation of the at least one input feature (Ouyang suggests an explanation output as part of the user interface which includes highlighting features that contributed to the estimate [see Ouyang, para. 71 and FIGs. 11A-11B] and identifying important features [see Ouyang, para. 58-59]. Thus, it would have been obvious to highlight important features (i.e., transform their representation in the user interface based on the features being important)).
Regarding claim 5, the combination of Sun and Ouyang as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein operations further include generating the model with input including values of the plurality of input features and values of the target feature (The model can be generated using a many-to-many time series model architecture. Input features from one or more time stamps or periods of time (i.e., an input sequence) can be used to forecast an output response at multiple time stamps or periods of time in the future. A time stamp may represent input for a period of time such as one day or one week, or another period of time. As an example, the input sequence may include a temporal time series, spatial numerical data, spatial images or maps, and well constraint information. Each time stamp may be associated with temporal data, spatial numerical data, spatial image information, and well constraint information. [see Sun, para. 43]).
Regarding claim 6, the combination of Sun and Ouyang as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein at least one of the input timestamps is less than the first timestamp (Sun discloses segmenting a time series of a week into 7 days, with each day being a separate segment; segments 1 to 6 correspond to Monday through Saturday, and each segment has all of the input data and well constraints for that day as well as a timestamp being the day itself [see Sun, para. 71 and FIG. 7]. The input segment corresponds to any one or more of segments 1 to 5, each of which has a timestamp less than the first timestamp corresponding to segment 6).
Regarding claim 7, the combination of Sun and Ouyang as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein the first timestamp corresponds to a current time, and the segment timestamp corresponds to a past time (Sun discloses that once the generated model is trained and validated it can be used to forecast for one or more timestamps [see Sun, para. 69]. Sun further discloses segmenting a time series of a week into 7 days, with each day being a separate segment; segments 1 to 6 correspond to Monday through Saturday, each segment has all of the input data and well constraints for that day as well as a timestamp being the day itself, and a 7th segment is to be forecasted [see Sun, para. 71 and FIG. 7]. Because the 7th segment is to be forecasted, it must be in the future; since it corresponds to Sunday of the current week and data exists for Monday through Saturday of the current week, the 6th segment (Saturday) must be the current day. Thus, the first timestamp, which corresponds to the 6th segment, is the current day, and segments 1 to 5 correspond to past days).
Regarding claim 8, the combination of Sun and Ouyang as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein the model is trained with input including the values of the input features and the first value for the target feature (Sun discloses generating a trained model with one or more time stamps or periods of time [see Sun, para. 43]; for the model to be used on segment 6 of FIG. 7, it must have been trained prior to or during segments 1-6 [see Sun, para. 71 and FIG. 7]. Thus, when the model is trained during segments 1-6, it is trained with input including the input features from the input segment, which corresponds to segments 1-5, as well as the first value of the target feature from the first segment, which corresponds to segment 6).
Regarding claim 9, the combination of Sun and Ouyang as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein each impact metric of the plurality of impact metrics is associated with a respective input feature of the plurality of input features (Ouyang discloses identifying important features that contribute to the prediction by determining weightages for the features, such that the features have their respective weightages [see Ouyang, para. 58-59]).
Regarding claim 11, claim 11 contains substantially similar limitations to those found in claim 1 above. Consequently, claim 11 is rejected for the same reasons.
Regarding claim 12, claim 12 contains substantially similar limitations to those found in claim 2 above. Consequently, claim 12 is rejected for the same reasons.
Regarding claim 13, claim 13 contains substantially similar limitations to those found in claim 3 above. Consequently, claim 13 is rejected for the same reasons.
Regarding claim 14, claim 14 contains substantially similar limitations to those found in claim 4 above. Consequently, claim 14 is rejected for the same reasons.
Regarding claim 15, claim 15 contains substantially similar limitations to those found in claim 5 above. Consequently, claim 15 is rejected for the same reasons.
Regarding claim 16, claim 16 contains substantially similar limitations to those found in claim 6 above. Consequently, claim 16 is rejected for the same reasons.
Regarding claim 17, claim 17 contains substantially similar limitations to those found in claim 7 above. Consequently, claim 17 is rejected for the same reasons.
Regarding claim 18, claim 18 contains substantially similar limitations to those found in claim 8 above. Consequently, claim 18 is rejected for the same reasons.
Regarding claim 19, claim 19 contains substantially similar limitations to those found in claim 9 above. Consequently, claim 19 is rejected for the same reasons.
Regarding claim 20, claim 20 contains substantially similar limitations to those found in claim 1 above. Therefore, it is rejected for the same reasons as claim 1. Additionally, the combination of Sun and Ouyang further teaches:
A computer readable medium including one or more instructions stored thereon and executable by a processor to (According to at least one aspect, an example system for forecasting well productivity accurately and efficiently, in real-time, using deep learning neural networks is provided. The system can include at least one processor coupled with at least one computer-readable storage medium having stored therein instructions which, when executed by the at least one processor, causes the system to [see Sun, para. 21]).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over SUN et al. (US 2021/0010351 A1), hereinafter Sun, in view of Ouyang et al. (US 2024/0212857 A1), hereinafter Ouyang, as applied to claim 1 above, and further in view of SUKHI et al. (US 2020/0151014 A1), hereinafter Sukhi.
Regarding claim 10, the combination of Sun and Ouyang as applied in claim 1 above teaches all the limitations of claim 1.
However, the combination of Sun and Ouyang fails to teach generating at least one user interface presentation including at least one calendar object associated with the time series structure.
In the same field of endeavor, Sukhi teaches:
wherein operations further include generating at least one user interface presentation including at least one calendar object associated with a time series structure (Sukhi discloses a time series database table with calendar-based features [see para. 12] as well as a UI to display the time series database table [see para. 19 and FIG. 4]. Thus, there is a UI including calendar-based features associated with the time series data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate wherein operations further include generating at least one user interface presentation including at least one calendar object associated with a time series structure, as suggested in Sukhi, into the combination of Sun and Ouyang because both systems incorporate forecasting models [see Sun, Abstract; see Sukhi, Abstract]. Incorporating the teaching of Sukhi into the combination of Sun and Ouyang would allow a user to see forecast[s] of upcoming resource requirement[s] [see Sukhi, para. 38].
Response to Amendment
The amendments to the specification, filed 12/29/2025, have been fully considered and are accepted; the objections to the specification are withdrawn.
The replacement drawings, filed 12/29/2025, have been fully considered and are accepted; the objections to the drawings are withdrawn.
The amendments to the claims, filed 12/29/2025, have been fully considered and are accepted; the objections to the claims are withdrawn.
Response to Arguments
Applicant’s arguments, filed 12/29/2025, traversing the rejection of claims 1-20 under 35 U.S.C. 101 have been fully considered and are persuasive; the rejection is withdrawn.
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Katuwal et al. (US 2021/0012897 A1) teaches a user interface for explaining a machine learning model with respect to a feature over a period of time.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
/J.T.B./Examiner, Art Unit 2143
/JENNIFER N WELCH/Supervisory Patent Examiner, Art Unit 2143