DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Remarks
This communication is considered fully responsive to the Amendment filed on 1/13/26.
The 35 U.S.C. § 101 rejection of claim 20 is withdrawn in view of the claim amendments.
The 35 U.S.C. § 112 rejection is withdrawn in view of the claim amendments.
Response to Arguments
Applicant’s arguments filed 2/4/24 with respect to the claims have been considered but are moot in view of the new ground(s) of rejection.
Claim Objections
Claim 4 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 9 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
It is noted that Applicant has failed to point to the specification to show support for the amendments that have been made to the claims. Therefore, the Examiner will rely on the wording, and possible synonyms, of the amended subject matter within the specification to determine proper support for these amendments. A general search of the specification for this wording and possible synonyms did not locate support for the following amended limitations:
Claim 9. (Currently amended) The method of claim 1, wherein the sending information further includes information about a message length of the target message, a recipient count, and a network condition at the scheduled sending time, and
the controlling of the message sending module includes adjusting the sending schedule of the target message based on a determination that the predicted future load is greater than or equal to a reference value.
Therefore, the Examiner submits that the amendments lack proper support within the specification. However, the Examiner will assume that these amendments have proper support in order to advance prosecution. Applicant is requested to specifically point out the page and line and/or paragraph numbers and/or figures where support for these amendments is disclosed within the specification.
Furthermore, Applicant is put on notice that claim limitations disclosed only in the original claims, and not disclosed elsewhere in the original specification, will be given their broadest reasonable interpretation for examination purposes; future amendments adding further information to such limitations may be treated as new matter, and action will be taken accordingly.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 13-14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2025/0117825 to Longo et al. (“Longo”) in view of U.S. Patent Publication No. 2020/0136975 to Arora et al. (“Arora”) and further in view of U.S. Patent Publication No. 2022/0398133 to Gaddam et al. (“Gaddam”).
As to claim 1, Longo discloses a processor-implemented method for controlling message sending, performed by at least one computing device (Longo: fig 1-14), the method comprising:
acquiring sending information of a target message, the sending information includes at least information about a sending schedule of the target message (Longo: fig 1-14, [0007-173]: fig 3 ... features (acquiring sending information of a target message ...) used for training 308 include one or more of email data (one example of a target message), body of email, subject of the email, user engagement in response to an email, such as Internet Protocol address of the recipient, date and time (sending schedule) of the email and the engagement (the sending information includes at least information about a sending schedule of the target message), email address associated with the engagement, user response (e.g., click a link, delete email, unsubscribe from future communications), and the like ... although the features are described with reference to email communications, the same features are also available from other types of communications, such as SMS communications (other types of target message(s)) [0055] ... communications may be promotional (e.g., to send promotional information) or transactional (to perform a transaction such as a purchase) (still other types of target message(s)) and for example, one or both types of communications may be utilized for the training data 304 and an ML model 310 is generated to classify communications into promotional or transactional (see with [0055] above - acquiring sending information of a target message, the sending information includes at least information about a sending schedule of the target message) [0056] ... fig 7-13).
Longo did not explicitly disclose configuring input data of a machine-learning model based on the sending information, the machine-learning model being trained to predict a future load of a message sending module (emphasis added).
Specifically, Longo discloses configuring input data of a machine-learning model based on the sending information, the machine-learning model being trained to predict a future frequency of a message sending module (emphasis added) (Longo: fig 1-14, [0007-173]: fig 7-13 ... method 800 for transmission frequency optimization ... from operation 806, the method 800 flows to operation 808 where the ML algorithm is trained to obtain the frequency model (see with fig 3 & [0055-56] above - configuring input data of a machine-learning model based on the sending information) ... the frequency model generates recommendations, at operation 814, for the best frequency to send communications to the selected recipient e.g., 3 times a week, 5 times a month, every Monday (see with [0111] below - the machine-learning model being trained to predict a future frequency of a message sending module) ... the output of the frequency model includes the number of communications to be sent per week, but other types of outputs are possible, such as the number of communications per weekend, the number of communications per month, the number of communications for a given day of the week, etc (the machine-learning model being trained to predict a future frequency ...) and these recommended frequencies are automatically incorporated into the message transmission pipeline (... of a message sending module) and used when sending the communications [0105; 0109-111]).
Nonetheless, Longo did not explicitly disclose that the machine-learning model is trained to predict a future load of a message sending module.
However, Arora discloses configuring input data of a machine-learning model based on the sending information, the machine-learning model being trained to predict a future load of a message sending module (emphasis added) (Arora: fig 1-5, [0002-107]: ... in response to the current MoNH (measure of network health) (configuring input data of a machine-learning model ...) and/or the one or more predicted future MoNH values (a future load) output by the machine learning models (the machine-learning model being trained to predict a future load ...), the system can recommend and/or perform one or more actions ... the one or more actions may themselves be determined by one or more machine learning models ... actions can also include one or more network operations that alter the performance of a particular subsystem, a hardware component, or a software component and, for example, the system can send an instruction to reset a particular component or to adjust a setting of a particular subsystem (... of a message sending module) [0010-11] ... the analytics module 144 determines a MoNH 161 that describes the operational condition of a particular subsystem, subgroup, or element of the system 100 and, for example, the analytics module 144 can determine a MoNH for a particular satellite gateway (a message sending module) 110a, 110b, or 110c, or for a particular subsystem (a message sending module) (see with [0010-11] above - the machine-learning model being trained through a task of predicting a future load of a message sending module) [0038] ... a communication system can train and use machine-learning models to output a current condition ... the current condition can be, for example, a measure of network health (MoNH) providing a quantitative metric indicating overall operability at a particular time ... 
uses a traffic forecasting model trained using time series data indicating historical traffic data (input data of a machine-learning model based on the sending information) to predict expected levels of traffic at future times (see with [0010-11; 38] - the machine-learning model being trained through a task of predicting a future load of a message sending module) ... can be calculated using those forecasts e.g. as a ratio of forecasted traffic for a time period and actual traffic for a time period ... the model may be trained to use other input features to generate a forecasted amount of traffic [0003] ... a MoNH 151 for a particular satellite gateway 110a-c (a message sending module) or particular subsystem 112a-c [0038] ... forecasted amount of network traffic for a specified period can be predicted based on historical network data using the one or more machine learning models (see with [0010-11; 0003; 38] above - the machine-learning model being trained through a task of predicting a future load of a message sending module) [0043]).
Longo and Arora are analogous art because they are from the same field of endeavor with respect to machine-learning models.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the strategies of Arora into the method of Longo. The suggestion/motivation would have been to train and use machine-learning models to output current conditions indicating a system’s capabilities (Arora: [0002]).
Longo and Arora further disclose predicting the future load of the message sending module based on the input data using the machine-learning model, and controlling the message sending module based on the predicted future load (Arora: fig 1-5, [0002-107]: ... in response to the current MoNH (measure of network health) and/or the one or more predicted future MoNH values (a future load) output by the machine learning models (predicting the future load of the message sending module based on the input data using the machine-learning model), the system can recommend and/or perform one or more actions (controlling ...) ... the one or more actions may themselves be determined by one or more machine learning models (controlling ... based on the predicted future load) ... actions can also include one or more network operations that alter the performance of a particular subsystem, a hardware component, or a software component and, for example, the system can send an instruction to reset a particular component or to adjust a setting of a particular subsystem (controlling the message sending module based on the predicted future load) [0010-11] ... the analytics module 144 determines a MoNH 161 that describes the operational condition (controlling ...) of a particular subsystem, subgroup, or element of the system 100 and, for example, the analytics module 144 can determine a MoNH for a particular satellite gateway (a message sending module) 110a, 110b, or 110c, or for a particular subsystem (a message sending module) (see with [0010-11] above - controlling the message sending module based on the predicted future load) [0038]).
Longo did not explicitly disclose wherein the predicting the future load of the message sending module includes predicting a load at a scheduled sending time of the target message.
Gaddam discloses wherein the predicting the future load of the message sending module includes predicting a load at a scheduled sending time of the target message (Gaddam: fig 1-24, [0004-]: ... fig 1 ... testing may comprise load testing of one or more of the components ... the transaction (target message) prediction layer 143 uses one or more machine learning techniques to predict a transaction load for the one or more components (wherein the predicting the future load of the message sending module ...) at one or more time periods (... at a scheduled sending time of the target message) and the transaction prediction layer 143 schedules the load testing at a time period based on the predicted transaction load and for example, the transaction prediction layer 143 schedules the load testing at a time period when the predicted transaction load for a given component is at a maximum or is relatively high when compared with other time periods or is at a desired value for load testing (wherein the predicting the future load of the message sending module includes predicting a load at a scheduled sending time of the target message) [0067] ... the machine learning layer 144 of the transaction prediction layer 143 analyzes a current series of transactions of a plurality of components and provides a forecast analysis of the maximum load occurrence for each of the components, which facilitates prediction of the time at which the maximum and minimum component load will occur (wherein the predicting the future load of the message sending module includes predicting a load at a scheduled sending time of the target message) and based on the predicted time at which the maximum and minimum component load will occur (predicting the future load of the message sending module based on the input data using the machine-learning model ...) memory and instances are predicted as per the load requirement (... 
and controlling the message sending module based on the predicted future load) [0068] ... time series data comprises a set of ordered data points with respect to time and prediction is performed using time series analysis, which uses different machine learning algorithms to extract certain statistical information and characteristics of past time series data in order to predict future values ... the forecasting of the time series data is based at least in part on an additive model where non-linear trends are fit with different time periods at different levels of granularity (e.g., yearly, weekly, and daily seasonality) (wherein the predicting the future load of the message sending module includes predicting a load at a scheduled sending time of the target message) as well as with holiday effects [0069] ... multistep-ahead prediction is used to predict a sequence of values in a time series and predictive model is applied step by step and the predicted value of the current time step is used to determine its value in the next time step and the model is trained on time series data collected regularly over a given time period and predicted result comprises a future transaction load of applications over a future time period (predicting the future load of the message sending module based on the input data using the machine-learning model ...) which will be used to procure necessary amounts of memory and space in advance to handle, for example, instances of high load at a given time [0070]).
Longo, Arora and Gaddam are analogous art because they are from the same field of endeavor with respect to testing components (such as sending module(s)).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the strategies of Gaddam into the method of Longo and Arora. The suggestion/motivation would have been to provide a testing platform that provides end-to-end (E2E) visibility including business lifecycle views, transaction views and metrics generation in connection with component testing (Gaddam: [0059]).
As to claim 2, Longo, Arora and Gaddam disclose wherein the configuring of the input data of the machine-learning model includes configuring the input data by reflecting the sending information in processing status information of the message sending module (Longo: fig 1-14, [0007-173]: ... training data is a dataset used as input to a training algorithm that generates the model e.g. historical engagement data from email messages (wherein the configuring of the input data of the machine-learning model includes configuring the input data by reflecting the sending information in processing status information ...) [0047] ... optimization API to access a model e.g. send-time optimization API ... provides intelligent platform leveraging vast amounts of data from service delivery processes to create intelligence-based products for managing communications (... by reflecting the sending information in processing status information of the message sending module) and drive better outcomes by improving communications delivery ... better outcomes predicted or derived from data items corresponding to data output from an application of a trained model to a particular data set and tracked and/or stored for subsequent analysis (wherein the configuring of the input data of the machine-learning model includes configuring the input data by reflecting the sending information in processing status information of the message sending module) [0052];
Arora: fig 1-5, [0002-107]: ... technique collects information about the status of various network components and generates feature vectors to represent the state of the communication network at different times (... includes configuring the input data by reflecting the sending information in processing status information of the message sending module) ... a series of data sets includes network state feature vectors, predicted traffic, actual traffic and the MoNH for different time periods ... a machine learning model is trained, using these data sets to predict MoNH based on lagged or delayed values of the network state feature vector (wherein the configuring of the input data of the machine-learning model includes configuring the input data by reflecting the sending information in processing status information of the message sending module) [0004]).
For motivation, see rejection of claim 1.
As to claims 13-14, see the similar rejection of claims 1-2, respectively, where the system is taught by the method.
As to claim 20, see the similar rejection of claim 1, where the medium is taught by the method.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2025/0117825 to Longo et al. (“Longo”) in view of U.S. Patent Publication No. 2020/0136975 to Arora et al. (“Arora”), U.S. Patent Publication No. 2022/0398133 to Gaddam et al. (“Gaddam”) and further in view of U.S. Patent Publication No. 2021/0211315 to Kubba et al. (“Kubba”).
As to claim 3, Longo, Arora and Gaddam disclose the method of claim 2.
For motivation, see rejection of claim 1.
Longo did not explicitly disclose wherein the processing status information includes a cumulative sending amount to date, a remaining sending amount, and a current sending speed.
Kubba discloses wherein the processing status information includes a cumulative sending amount to date, a remaining sending amount, and a current sending speed (Kubba: fig 1-9, [0005-58]: ... usage patterns are analyzed for both past and current activity, for example, a subscriber usage pattern can be analyzed from the previous month up to the current day (wherein the processing status information includes a cumulative sending amount to date ...) to determine if they have been flagged as a super heavy user and to predict whether their current usage trend will result in being identified as a (future) super heavy user [0027] ... an additional dataset e.g. 2 months is used to predict a model and target super heavy users for both months based on a pre-trained model ... super heavy users, when they are continuously on FAP for the last 5 days, traffic flow weights i.e. traffic shaping weights are determined ... recent usage and billing information examined to determine where an identified subscriber’s usage remains well above the data cap (... a remaining sending amount ...) [0051] ... system continually monitors traffic data in order to detect super heavy users exceeding their allocated bandwidth (speeds) or violating usage policies ... process is repeated at predetermined or regular intervals (... past and/or current sending speed(s)) [0031] ... usage patterns of every subscriber currently active within the system are analyzed to determine amount and type of data they have been using (see with [0027; 51; 31] - wherein the processing status information includes a cumulative sending amount to date, a remaining sending amount, and a current sending speed) [0026]).
Longo, Arora, Gaddam and Kubba are analogous art because they are from the same field of endeavor with respect to machine-learning models.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the strategies of Kubba into the method of Longo, Arora and Gaddam. The suggestion/motivation would have been to provide a DCM model used to predict and target super heavy users (Kubba: [0051]).
Claims 5-6 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2025/0117825 to Longo et al. (“Longo”) in view of U.S. Patent Publication No. 2020/0136975 to Arora et al. (“Arora”), U.S. Patent Publication No. 2022/0398133 to Gaddam et al. (“Gaddam”) and further in view of U.S. Patent Publication No. 2022/0225126 to Ma et al. (“Ma”).
As to claim 5, Longo, Arora and Gaddam disclose the method of claim 1.
For motivation, see rejection of claim 1.
Longo did not explicitly disclose wherein the predicting of the load of the message sending module according to the sending of the target message includes predicting the load of the message sending module in response to a sending schedule of the target message being registered.
Ma discloses wherein the predicting of the future load of the message sending module includes predicting the future load of the message sending module in response to the sending schedule of the target message being registered (Ma: fig 1-23, [0010-112]: ... a node (the message sending module) automatically generates predicted resource status information according to configuration in a predicted resource request and/or because predicted resource status in the node (predicting of the future load of the message sending module ...) is too high (load) or too low (load) (... includes predicting the future load of the message sending module ...) [0251] ... prediction content prediction interval (sending schedule): used to indicate the prediction start and end time of a resource state (see with fig 19 below and [0241-245; 235; 23] - ... in response to a sending schedule of the target message being registered) [0253-254] ... resource status: used to indicate resource status ... resource status parameters include one or more of ... TNL capacity indicator (see with [0251] - predicting the future load of the message sending module), radio resource status (see with [0251] - predicting the future load of the message sending module) ... number of active UEs (see with [0251] - predicting the future load of the message sending module), radio resource control (RRC) connections (see with [0251] - predicting the future load of the message sending module), slice available capacity (see with [0251] - predicting the future load of the message sending module), hardware (HW) load indicator (see with [0251] - predicting the future load of the message sending module) etc [0256] ... see fig 19 block 1601 resource prediction request prediction ID, prediction registration request (... the target message being registered), prediction time interval, prediction content, prediction reporting period (... in response to a sending schedule of the target message being registered) ... 
prediction registration request: used to indicate start, end and addition of predictions ... prediction content: used to indicate parameters needed to be predicted include one or more of: TNL capacity indicator, radio resource status (see with [0251; 256] - predicting the load of the message sending module) ... number of active UEs, radio resource control (RRC) connections, slice available capacity, hardware (HW) load indicator etc ... purpose of the AI model can be, for example, to predict load condition of a node, cell on node, beam of a cell on node, PLMN on a node ... load condition may be the maximum value of information available to be used and/or value of information currently used and/or currently available to be used (remaining) and/or ratio of the above (see with fig 19 and [0251-256] - wherein the predicting of the future load of the message sending module includes predicting the future load of the message sending module in response to the sending schedule of the target message being registered) [0241-245; 235; 23]).
Longo, Arora, Gaddam and Ma are analogous art because they are from the same field of endeavor with respect to predicting resource status (load).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the strategies of Ma into the method of Longo, Arora and Gaddam. The suggestion/motivation would have been to provide for automatically generating resource status (load) information according to configuration and/or because predicted resource status in a node is too high or too low (Ma: [0251]).
As to claim 6, Longo, Arora, Gaddam and Ma further disclose wherein the input data includes information regarding the scheduled sending time of the target message (Ma: fig 1-23, [0010-112]: ... prediction content prediction interval (sending schedule): used to indicate the prediction start and end time of a resource state (see with [0260] - wherein the input data includes information regarding the scheduled sending time of the target message) [0253-254] ... a second node sends a message containing predicted resource warning information to a first node to notify that the predicted load in the second node is too high or too low [0260]), and
the predicting of the future load of the message sending module includes predicting the future load of the message sending module at the scheduled sending time (Ma: fig 1-23, [0010-112]: ... prediction content prediction interval (at the scheduled sending time): used to indicate the prediction start and end time of a resource state (see with [0260] - ... includes predicting the future load of the message sending module at the scheduled sending time) [0253-254] ... a second node sends a message containing predicted resource warning information to a first node to notify that the predicted load in the second node is too high or too low (see with [0253-254] - the predicting of the future load of the message sending module includes predicting the future load of the message sending module at the scheduled sending time) [0260]).
For motivation, see rejection of claim 5.
As to claims 15-16, see similar rejection to claims 5-6, respectively, where system is taught by the method.
Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2025/0117825 to Longo et al. (“Longo”) in view of U.S. Patent Publication No. 2020/0136975 to Arora et al. (“Arora”), U.S. Patent Publication No. 2022/0398133 to Gaddam et al. (“Gaddam”) and further in view of U.S. Patent Publication No. 2016/0353469 to Kim et al. (“Kim”).
As to claim 7, Longo, Arora and Gaddam disclose the method of claim 6.
For motivation, see rejection of claim 6.
Longo did not explicitly disclose determining a maximum adjustment range for a sending speed of the message sending module based on the predicted future load; and gradually increasing or decreasing the sending speed of the message sending module so that the sending speed of the message sending module is adjusted to the maximum adjustment range at the scheduled sending time.
Kim discloses determining a maximum adjustment range for a sending speed of the message sending module based on the predicted future load (Kim: fig 1-8, [0008-67]: ... if simply scheduling a user having long average delay time, only a user having long waiting time due to the channel that is not good is scheduled and thus it is difficult to allocate resources to a user having a good channel; however, if scheduling is performed on the basis of average delay time longer than average channel occupation (... based on the predicted future load), the above problem can be addressed ... scheduling is performed so as to minimize average delay time itself while scheduling is simultaneously performed in which perceived throughput is maximized (see with [0064] - determining a maximum adjustment range for a sending speed of the message sending module ...) ... and considers both fairness and perceived throughput performance [0050] ... a first time may be updated on the basis of the size of transmitted data that remains in the buffer of the base station and the current speed with respect to terminals (see with [0050] - determining a maximum adjustment range for a sending speed of the message sending module ...) ... determined on the basis of buffer status reports transmitted [0064]); and
gradually increasing or decreasing the sending speed of the message sending module so that the sending speed of the message sending module is adjusted to the maximum adjustment range at the scheduled sending time (Kim: fig 1-8, [0008-67]: ... y(t) is a value added to users not selected where a specific user is selected at each scheduling time, for example, if only one of three users is selected during scheduling, the two remaining users commonly have a y(t) >= T(t) updated for users selected for each transmission time (gradually increasing or decreasing the sending speed of the message sending module ...) ... it may be considered as a delay value of the file that is a current allocation target (gradually increasing or decreasing the sending speed of the message sending module ...) [0054] ... an output y*(t) indicates a delay time value allocated to each user packet ... in order to maximize perceived throughput of each packet (message), scheduler determines expected optimum delay (maximum adjustment range) ... designated as a reference by scheduler for a performance gain (see with [0054;44] - gradually increasing or decreasing the sending speed of the message sending module so that the sending speed of the message sending module is adjusted to the maximum adjustment range at the scheduled sending time) [0055] ... decision making unit 434 selects a user being scheduled on the basis of the scheduling metric for each user ... variable updating unit 435 updates variables required to calculate the scheduling metric (see with [0054-55] - gradually increasing or decreasing the sending speed of the message sending module so that the sending speed of the message sending module is adjusted to the maximum adjustment range at the scheduled sending time) [0044]).
Longo, Arora, Gaddam and Kim are analogous art because they are from the same field of endeavor with respect to scheduling.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the strategies of Kim into the method of Longo, Arora and Gaddam. The suggestion/motivation would have been to provide scheduling that considers both fairness and perceived throughput performance (Kim: [0050]).
As to claim 8, see the similar rejection of claim 7.
As to claim 8, Longo, Arora, Gaddam and Kim further disclose wherein the controlling of the message sending module includes decreasing a sending speed of the message sending module based on a determination that the predicted future load is greater than or equal to a reference value (Kim: fig 1-8, [0008-67]: ... If simply scheduling a user having long average delay time, only a user having long waiting time due to the channel that is not good is scheduled, and thus it is difficult to allocate the resources to the user having a good channel (see with [0054-55;44] – delay value is reference value acting as threshold that increases selection of previously not selected users and thus decreasing speed of currently selected user’s message(s) being selected to be sent) [0050] ... y(t) is a value added to users not selected where a specific user is selected at each scheduling time, for example, if only one of three users is selected during scheduling, the two remaining users commonly have a y(t) >= T(t) updated for users selected for each transmission time (see with [0050] – delay value is reference value acting as threshold that increases selection of previously not selected users and thus decreasing speed of currently selected user’s message(s) being selected to be sent) ... it may be considered as a delay value of the file that is a current allocation target (see with [0050] – delay value is reference value acting as threshold that increases selection of previously not selected users and thus decreasing speed of currently selected user’s message(s) being selected to be sent) [0054] ... an output y*(t) indicates a delay time value allocated to each user packet ... in order to maximize perceived throughput of each packet (message), scheduler determines expected optimum delay (maximum adjustment range) ... 
designated as a reference by scheduler for a performance gain (see with [0050;54;44] – delay value is reference value acting as threshold that increases selection of previously not selected users and thus decreasing speed of currently selected user’s message(s) being selected to be sent) [0055] ... decision making unit 434 selects a user being scheduled on the basis of the scheduling metric for each user ... variable updating unit 435 updates variables required to calculate the scheduling metric (see with [0050;54;44] – delay value is reference value acting as threshold that increases selection of previously not selected users and thus decreasing speed of currently selected user’s message(s) being selected to be sent) [0044]).
For motivation, see rejection of claim 7.
As to claim 9, see the similar rejections of claims 7-8.
As to claim 9, Longo, Arora, Gaddam and Kim further disclose wherein the sending information further includes information about a length of the target message, a recipient count, and a network condition at the scheduled sending time (Longo: fig 1-14, [0007-173]: ... a probability of engagement p(engage) is calculated using the components that affect optimal message engagement: send time (condition at the scheduled sending time), channel (network condition), frequency, recipient attributes (recipient count), and message attributes (length of the target message) [0064-67]).
For motivation, see rejection of claim 7.
Claims 10-12 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2025/0117825 to Longo et al. (“Longo”) in view of U.S. Patent Publication No. 2020/0136975 to Arora et al. (“Arora”), U.S. Patent Publication No. 2022/0398133 to Gaddam et al. (“Gaddam”) and further in view of U.S. Patent Publication No. 2019/0164081 to Deluca et al. (“Deluca”).
As to claim 10, Longo, Arora and Gaddam disclose the method of claim 1.
For motivation, see rejection of claim 1.
Longo did not explicitly disclose sending a monitoring request notification to a terminal of an administrator of the message sending module based on the determination that the predicted load is greater than or equal to a reference value, wherein the monitoring request notification is configured to provide the administrator with a management interface having a sending speed control function of the message sending module.
Deluca discloses sending a monitoring request notification to a terminal of an administrator of the message sending module based on the determination that the predicted load is greater than or equal to a reference value (Deluca: fig 1-9, [0007-67]: ... referring to geofence configurator administrator interface 600, user can view depictions of anticipated numbers of breaches for various depicted geofences of different sizes ... for example, in area 614 it is depicted a smaller geofence 602 predicted to have 9814 breaches (that the predicted load is greater than or equal to a reference value) in the specified timeframe, whereas a larger geofence 603 predicted to have 20,091 breaches (that the predicted load is greater than or equal to a reference value) ... manager system 110 configured so displayed number of breaches is automatically updated when administrator user adjusts time period for geofence(s) ... including simultaneously displaying indicator of performance of candidate geofence(s) [0050] ... manager system 110 runs outputting process 115 to provide one or more outputs based on deployed geofence(s) being breached (sending a monitoring request notification to a terminal of an administrator ...) ... provided output includes e.g. notification or communication to initiate machine learning process 116 monitoring performance of deployed geofence(s) ... predicting process 114 can be updated and accuracy and reliability of predicting performed increased over time ... manager system 110 can examine results of performance of predicting process 117 and based on examining can adjust predicting process 117 (see with [0050] - sending a monitoring request notification to a terminal of an administrator of the message sending module based on the determination that the predicted load is greater than or equal to a reference value) [0029-31]),
wherein the monitoring request notification is configured to provide the administrator with a management interface having a sending speed control function of the message sending module (Deluca: fig 1-9, [0007-67]: ... referring to geofence configurator administrator interface 600, user can view depictions of anticipated numbers of breaches for various depicted geofences of different sizes (wherein the monitoring request notification is configured to provide the administrator with a management interface ...) ... for example, in area 614 it is depicted a smaller geofence 602 predicted to have 9814 breaches in the specified timeframe, whereas a larger geofence 603 predicted to have 20,091 breaches ... manager system 110 configured so displayed number of breaches is automatically updated (sending speed control function) when administrator user adjusts time period for geofence(s) (see with [0052] - ... having a sending speed control function of the message sending module) ... including simultaneously displaying indicator of performance of candidate geofence(s) [0050] ... geofence configurator administrator interface 600 configured so that when a user changes the size of the display perimeter, a predicted number of breaches during the specified time displayed in area 614 616 automatically changes (see with [0050] - ... having a sending speed control function of the message sending module) e.g. will expectedly increase if perimeter made larger or decrease if perimeter made smaller (see with [0050] - ... having a sending speed control function of the message sending module) ... or user can enter a target number of breaches in area 614 616 e.g. can preset to a certain value e.g. 5000 breaches or 30,000 breaches (see with [0052] - ... having a sending speed control function of the message sending module) ... 
in response to target number, manager system 110 automatically generates iteratively a number of candidate geofences yielding target number of breaches (see with [0050] - ... having a sending speed control function of the message sending module) ... can be configured so that upon a prespecified input a displayed candidate geofence is deployed [0052]).
Longo, Arora, Gaddam and Deluca are analogous art because they are from the same field of endeavor with respect to predicting processes.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the strategies of Deluca into the method of Longo, Arora and Gaddam. The suggestion/motivation would have been to provide a predicting process that can be updated, such that the accuracy and reliability of the predicting performed increase over time (Deluca: [0030]).
As to claim 11, see the similar rejection of claim 10.
As to claim 11, Longo, Arora, Gaddam and Deluca further disclose wherein the monitoring request notification is configured to provide the administrator with sending schedule information for pre-registered messages (Deluca: fig 1-9, [0007-67]: or user can enter a target number of breaches in area 614 616 e.g. can preset to a certain value e.g. 5000 breaches or 30,000 breaches (see with [0050] - pre-registered messages) ... in response to target number, manager system 110 automatically generates iteratively a number of candidate geofences yielding target number of breaches (see with [0050] - pre-registered messages) ... can be configured so that upon a prespecified input a displayed candidate geofence is deployed [0052]).
For motivation, see rejection of claim 10.
As to claim 12, see the similar rejections of claims 1 and 11-13.
As to claim 12, Longo, Arora, Gaddam and Deluca further disclose acquiring a new machine-learning model (Deluca: fig 1-9, [0007-67]: referring to geofence configurator administrator interface 600, user can view depictions of anticipated numbers of breaches for various depicted geofences of different sizes ... for example, in area 614 it is depicted a smaller geofence 602 predicted to have 9814 breaches (acquiring a new machine-learning model) in the specified timeframe, whereas a larger geofence 603 predicted to have 20,091 breaches (acquiring a new machine-learning model) [0050]); and
controlling sending of a message different from the target message using the new machine-learning model (Deluca: fig 1-9, [0007-67]: ... manager system 110 configured so displayed number of breaches is automatically updated when administrator user adjusts time period for geofence(s) (see with [0052] - controlling sending of a message different from the target message using the new machine-learning model) ... including simultaneously displaying indicator of performance of candidate geofence(s) [0050] ... geofence configurator administrator interface 600 configured so that when a user changes the size of the display perimeter, a predicted number of breaches during the specified time displayed in area 614 616 automatically changes (see with [0050] - controlling sending of a message different from the target message using the new machine-learning model) e.g. will expectedly increase if perimeter made larger or decrease if perimeter made smaller (see with [0050] - controlling sending of a message different from the target message using the new machine-learning model) ... or user can enter a target number of breaches in area 614 616 e.g. can preset to a certain value e.g. 5000 breaches or 30,000 breaches (see with [0050] - controlling sending of a message different from the target message using the new machine-learning model) ... in response to target number, manager system 110 automatically generates iteratively a number of candidate geofences yielding target number of breaches (see with [0050] - controlling sending of a message different from the target message using the new machine-learning model) ... can be configured so that upon a prespecified input a displayed candidate geofence is deployed (see with [0050] - controlling sending of a message different from the target message using the new machine-learning model) [0052]),
wherein the new machine-learning model is trained using processing status information including load information of the message sending module and message sending history collected after training of the machine-learning model is completed (Deluca: fig 1-9, [0007-67]: ... manager system 110 can use historical data of location area 2121 or repository 112 to construct data on actual number of breaches that would have occurred if candidate geofence had been deployed at a previous one or more periods of time (wherein the new machine-learning model is trained using processing status information including load information of the message sending module ...) ... factors(s) can be local traffic maps ... that take into account traffic conditions as may be influenced ... can increase weight(s) associated with factor(s) where time period of a geofence is close to (message sending history collected) current time where traffic patterns can be expected to be more accurate (... and message sending history collected after training of the machine-learning model is completed) [0054-55]).
For motivation, see rejection of claim 10.
As to claims 17-19, see similar rejection to claims 10-12, respectively, where the system is taught by the method.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUNE SISON whose telephone number is (571)270-5693. The examiner can normally be reached 9:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emmanuel Moise can be reached at 571-272-3865. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JUNE SISON/Primary Examiner, Art Unit 2455