DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the Amendment filed on October 15, 2025. Claims 1, 3, 4, and 6-18 are amended. Claims 2 and 5 are cancelled. Claims 19 and 20 are new. Claims 1, 3, 4, and 6-20 are pending in the case. Claims 1, 16, 17, and 18 are the independent claims.
This action is final.
Applicant’s Response
In the Amendment filed on October 15, 2025, Applicant amended the claims and provided arguments in response to the rejections of the claims under 35 USC 101, 102, 103, and 112 in the previous office action.
Response to Argument/Amendment
Applicant’s amendments to the claims in response to the rejection of the claims under 35 USC 101 are acknowledged, and Applicant’s associated arguments have been fully considered. Applicant’s arguments are persuasive. Therefore, the rejection is withdrawn.
Applicant’s amendments to the claims in response to the rejection of the claims under 35 USC 112 are acknowledged, and Applicant’s associated arguments have been fully considered. Applicant’s arguments are persuasive. Therefore, the rejection is withdrawn. However, new grounds of rejection are provided below.
Applicant’s amendments to the claims in response to the rejections of the claims under 35 USC 102 and 103 are acknowledged, and Applicant’s associated arguments have been fully considered. Applicant argues that Sikka and the other cited references do not disclose, teach, or suggest limitations of the amended independent claims including “generate one or more first untrained models with different sizes by using each of a plurality of trained models…learning models including the plurality of trained models and the one or more first untrained models…one of the pieces of model information selected by the input made by the user or one of the pieces of model information that is input by the user,” because Sikka “is limited to the context of a single neural network model being designed, analyzed, or modified at a time….Sikka does not contemplate or enable presenting model information of multiple models, whether trained or untrained. Nor does Sikka disclose selecting one piece of model information of one model from among the multiple models based on user input, and training only the one model. There is no disclosure of a mechanism by which a user could, for example, browse, compare, or select between different models, whether trained, or untrained. Nor is there any teaching of a user-driven selection process for model information. Therefore Sikka does not disclose or suggest the functionality of managing a plurality of learning models, nor does it disclose user-driven selection of model information of the plurality of learning models for training purposes. Therefore, Sikka fails to disclose, or even teach or suggest” the limitations quoted above.
Applicant’s arguments are not persuasive. First, some of the features argued by Applicant (i.e. training only one model, to the extent that a teaching involving training multiple models would not read on the limitations, the user being able to browse, compare, select between different models, etc.) are not recited in the amended independent claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Further, Sikka clearly describes an overall AI model which is made up of a plurality of agents, where each agent includes a neural network (and is therefore analogous to a model as recited in the claims; see Sikka as cited in the previous office action and below). Moreover, Sikka teaches that each of these agents may be initially untrained, but then is trained based on training data. Subsequently, each agent/model can be modified (creating a new model that is untrained, at least with respect to the modifications), and this modified agent/model can also be trained (at least to the extent necessary to enable comparison of performance characteristics, such as accuracy, of different versions of a model as captured during training for the different versions of the models; see Sikka as cited in the previous office action and below). Additionally, Sikka teaches that the user, via a graphical user interface, can select each of these various agents for inclusion in an overall AI model, and graphically arrange them as they are to be implemented within the overall AI model, and that each of the agents may also be selected via the GUI in order to set various properties/values, view and modify program code, view and modify neural network architecture, etc. (see Sikka as cited in the previous office action and below, including at least Figs. 5, 6, 35-37, etc., and corresponding text). Therefore, contrary to Applicant’s arguments, Sikka clearly teaches: presenting model information of multiple models (such as the plurality of neural network based agents in Fig. 5); selecting one piece of model information of one model from among the multiple models based on user input, and training the model; a user browsing, comparing, and selecting between different models; and a user-driven selection process for model information.
Therefore, Sikka does teach the functionality of managing a plurality of learning models, and/or user-driven selection of model information of the plurality of learning models for training purposes, which are the features argued by Applicant.
Therefore, the rejection is maintained below.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 3, 4, and 6-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
With respect to claims 1 and 16-18, each of these claims recites “train…a second untrained model generated by using a trained model based on a first untrained model corresponding to one of the pieces of model information selected by the input made by the user or one of the pieces of model information that is input by the user.” It cannot be determined how the various recited models and model information are related to one another. For example, with respect to the limitation “train…a second untrained model generated by using a trained model based on a first untrained model corresponding to one of the pieces of model information,” it is unclear what role the following limitations are intended to play: (1) by using a trained model; (2) based on a first untrained model; and (3) corresponding to one of the pieces of model information (i.e. whether all three of these recitations are intended to further define how the second untrained model is generated). Specifically, it is unclear whether any, or all, of the first, second, and third limitations are intended to further define how the training of the second untrained model is to be performed (i.e. the second untrained model is trained by using a trained model, based on a first untrained model, and/or corresponding to one of the pieces of model information), or if any, or all, of the first, second, and third limitations are intended to further define how the second untrained model is generated (i.e. the second untrained model is generated by using a trained model, based on a first untrained model, and/or corresponding to one of the pieces of model information). Further, to the extent that the limitation “corresponding to one of the pieces of model information,” is not intended to further define the training or generating of the second untrained model, it is unclear whether it is instead intended to further define the trained model (i.e. 
the trained model corresponding to one of the pieces of model information), or the first untrained model (i.e. the first untrained model corresponding to one of the pieces of model information). Since this phrase is followed by “selected by the input made by the user or…input by the user,” it also cannot be determined exactly which of the various recited trained/untrained models, model information, etc., is required to be selected by the user. It is noted that, prior to this, the claim only recites receiving an input made by a user, but this input is not necessarily linked to any particular selection; therefore, the claim does not appear to provide any context which would clarify the recited selection/input by the user. In the interest of providing full examination on the merits, this limitation is interpreted as requiring that the user provide some input related to a model (such as in order to select, modify, define, and/or arrange it on a GUI), that the model be generated/based upon some other model (such as a previously trained model), and that the model be trained (though not necessarily directly in response to the user’s selection).
With respect to claims 3, 4, and 6-15, these claims depend upon independent claim 1 and inherit the deficiencies identified above with respect to claim 1. Therefore, these claims are rejected on the same basis as is identified with respect to claim 1 above.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 12, and 14-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sikka et al. (US 20210012210 A1).
With respect to claims 1 and 16-18, Sikka teaches a learning device comprising: one or more hardware processors configured to function as a model generation unit, an output control unit, a receiving unit, and a training unit, to perform respective method steps; a computer program product comprising a non-transitory computer-readable recording medium on which a program executable by a computer is recorded, the program instructing the computer to perform the method; a learning system comprising: a display device; and one or more hardware processors configured to function as the output control unit, receiving unit, and the training unit to perform the method steps (e.g. paragraphs 0056-0060, devices including processors to execute software applications and memory/storage storing the software applications; paragraph 0224-0227, described subject matter embodied as system, method, or computer program product; storage medium storing program for use in connection with instruction execution system; instructions executed by processor to implement described acts, etc.); and the learning method, comprising:
generating (by the model generation unit) one or more first untrained models with different sizes by using each of a plurality of trained models (e.g. paragraph 0077, GUI panel 500 of Fig. 5 includes agent panel 510; paragraph 0078, agent panel 510 includes list of available agents that perform specific tasks; agents are neural network based agents; paragraph 0079, user interactions causing agents selected from panel 510 to be arranged to produce AI model, which is a collection of neural networks; paragraph 0080, neural network based agents trained based on training data; paragraphs 0092-0094, updating design generation GUI to expose underlying data associated with agent, such as various panels and a graphical depiction of the network architecture with which the user can interact to apply modifications to the neural network; correspondingly updating and re-compiling based on user modifications; paragraph 0182, Fig. 35, GUI showing two different versions of neural network; user interacting with modification element 3504 to increase or decrease the size of a given layer included in the network architecture; performing comparative analysis with different versions of network architecture to generate additional performance data; Figs. 36 and 37, showing multiple different versions of a neural network having different layer sizes, along with respective model information including performance and accuracy information; i.e. an overall AI model may be generated and trained based on a series of agents (which are themselves neural networks, and therefore analogous to models, which are also trained, such that an untrained model, the overall AI model, can be generated using a plurality of trained models, i.e. the trained agents making up the AI model), where the overall AI model itself has not yet been trained/is untrained (i.e. as a whole));
outputting (by the output control unit, to the display device) pieces of model information on each of learning models including the plurality of trained models and the one or more first untrained models (e.g. paragraph 0077, GUI panel 500 of Fig. 5 includes agent panel 510; paragraph 0078, agent panel 510 includes list of available agents that perform specific tasks; agents are neural network based agents; paragraph 0079, user interactions causing agents selected from panel 510 to be arranged to produce AI model, which is a collection of neural networks; paragraph 0081, displaying underlying data associated with any of agents in response to user input, such as displaying associated program code; paragraph 0082, Fig. 6, displaying underlying data associated with agent superimposed over GUI panel 500; paragraph 0179, Fig. 32, network description GUI depicting performance data associated with neural network, such as accuracy graph; paragraph 0180, Fig. 33, GUI depicting other performance data associated with neural network; paragraph 0185, Fig. 36, network description GUI displaying comparative performance data associated with different versions of given neural network; alternate network architectures (with increased/decreased layer sizes, etc.) displayed along with accuracy graph 3610 including plots 3612 and 3622 that represent the accuracy of different versions of the neural network, corresponding to respective neural network architectures; paragraph 0186, Fig. 
37, displaying other comparative performance data associated with different versions of given neural network; comparison panel 3700 including alternate network architectures (with increased/decreased layer sizes) and comparison panels 3712 and 3722, corresponding to those network architectures, convey various performance data associated with the respective network architectures (including correctness, memory usage, training time, inference time, etc.), allowing the user to evaluate whether modifications made to the neural network increase or decrease performance; i.e. the GUI can display both the plurality of agents which are neural networks/models making up the overall AI model as well as their arrangement to form the overall AI model, and can further display various different types of information regarding the agents and overall model, including program code, architectural details, performance information, etc., analogous to displaying pieces of model information for both a plurality of trained models (agents/neural networks) and the one or more first untrained models (the overall AI model that the agents are included within));
receiving (by the receiving unit) input made by a user (e.g. paragraphs 0086-0091, Fig. 7, design generation GUI used to depict agents/neural networks; receiving configuration of agents forming AI model via GUI, such as user dragging and dropping agents within design area, etc.; receiving agent definition via user interaction with design generation GUI, where agent definition defines neural network that needs to be trained based on training data; compiling agent definition to generate compiled code which implements various layers of neural network and connections between layers; synthesizing compiled code to generate initial version of the network; instantiating instance of the network; initial version of the network is untrained; paragraph 0092-0094, updating design generation GUI to expose underlying data associated with agent, such as various panels and a graphical depiction of the network architecture with which the user can interact to apply modifications to the neural network; correspondingly updating and re-compiling based on user modifications); and
training (by the training unit), among the pieces of model information that are output by the output control unit, a second untrained model generated by using a trained model based on a first untrained model corresponding to one of the pieces of model information selected by the input made by the user or one of the pieces of model information that is input by the user (e.g. paragraphs 0092-0094, updating design generation GUI to expose underlying data associated with agent, such as various panels and a graphical depiction of the network architecture with which the user can interact to apply modifications to the neural network; correspondingly updating and re-compiling based on user modifications; paragraph 0182, Fig. 35, GUI showing two different versions of neural network; user interacting with modification element 3504 to increase or decrease the size of a given layer included in the network architecture; performing comparative analysis with different versions of network architecture to generate additional performance data; paragraph 0184-0185, alternate versions of network architectures generated based on user modifications to network architecture; performance of different versions of model during training; Figs. 36 and 37, showing multiple different versions of a neural network having different layer sizes, along with respective model information including performance and accuracy information; paragraph 0198, generating performance data for each version of neural network; for given version of neural network, performance data indicating how accuracy of neural network changes during training; i.e. an initial/first version of an overall neural network/AI model may be generated based on a series of agents (which are themselves neural networks, and therefore analogous to models, which are also trained, such that an untrained model, the overall AI model, can be generated using a plurality of trained models, i.e. 
the trained agents making up the AI model) and the overall AI model or one of its constituent agents/neural networks can subsequently be modified, such as by changing the size of one or more layers, to generate additional, new versions of the trained neural network/model, where these additional/new versions have not specifically been trained and are therefore untrained, but then may be subsequently trained (i.e. in order to at least provide comparisons between different versions of the models’ performance during training), analogous to training a second untrained model which is one of the output pieces of model information, such as training an untrained modified version of a neural network/model/agent, by using a trained model based on a first untrained model (i.e. using a trained agent/model which is a part of an untrained overall AI model), corresponding to one of the pieces of model information selected/input by the user (i.e. where the user is able to select any of the agents and subsequently modify the code, architecture, etc. of the agents, such that the subsequent training of the modified agent corresponds to a piece of model information selected/input by the user)).
With respect to claim 19, Sikka teaches a learning device comprising: one or more hardware processors configured to function as (e.g. paragraphs 0056-0060, devices including processors to execute software applications and memory/storage storing the software applications; paragraph 0224-0227, described subject matter embodied as system, method, or computer program product; storage medium storing program for use in connection with instruction execution system; instructions executed by processor to implement described acts, etc.):
a model generation unit to generate one or more first untrained models with different sizes by using each of a plurality of trained models (e.g. paragraph 0077, GUI panel 500 of Fig. 5 includes agent panel 510; paragraph 0078, agent panel 510 includes list of available agents that perform specific tasks; agents are neural network based agents; paragraph 0079, user interactions causing agents selected from panel 510 to be arranged to produce AI model, which is a collection of neural networks; paragraph 0080, neural network based agents trained based on training data; paragraphs 0092-0094, updating design generation GUI to expose underlying data associated with agent, such as various panels and a graphical depiction of the network architecture with which the user can interact to apply modifications to the neural network; correspondingly updating and re-compiling based on user modifications; paragraph 0182, Fig. 35, GUI showing two different versions of neural network; user interacting with modification element 3504 to increase or decrease the size of a given layer included in the network architecture; performing comparative analysis with different versions of network architecture to generate additional performance data; Figs. 36 and 37, showing multiple different versions of a neural network having different layer sizes, along with respective model information including performance and accuracy information; i.e. an overall AI model may be generated and trained based on a series of agents (which are themselves neural networks, and therefore analogous to models, which are also trained, such that an untrained model, the overall AI model, can be generated using a plurality of trained models, i.e. the trained agents making up the AI model), where the overall AI model itself has not yet been trained/is untrained (i.e. as a whole));
an output control unit to output pieces of model information on each of learning models including the plurality of trained models and the one or more first untrained models (e.g. paragraph 0077, GUI panel 500 of Fig. 5 includes agent panel 510; paragraph 0078, agent panel 510 includes list of available agents that perform specific tasks; agents are neural network based agents; paragraph 0079, user interactions causing agents selected from panel 510 to be arranged to produce AI model, which is a collection of neural networks; paragraph 0081, displaying underlying data associated with any of agents in response to user input, such as displaying associated program code; paragraph 0082, Fig. 6, displaying underlying data associated with agent superimposed over GUI panel 500; paragraph 0179, Fig. 32, network description GUI depicting performance data associated with neural network, such as accuracy graph; paragraph 0180, Fig. 33, GUI depicting other performance data associated with neural network; paragraph 0185, Fig. 36, network description GUI displaying comparative performance data associated with different versions of given neural network; alternate network architectures (with increased/decreased layer sizes, etc.) displayed along with accuracy graph 3610 including plots 3612 and 3622 that represent the accuracy of different versions of the neural network, corresponding to respective neural network architectures; paragraph 0186, Fig. 
37, displaying other comparative performance data associated with different versions of given neural network; comparison panel 3700 including alternate network architectures (with increased/decreased layer sizes) and comparison panels 3712 and 3722, corresponding to those network architectures, convey various performance data associated with the respective network architectures (including correctness, memory usage, training time, inference time, etc.), allowing the user to evaluate whether modifications made to the neural network increase or decrease performance; i.e. the GUI can display both the plurality of agents which are neural networks/models making up the overall AI model as well as their arrangement to form the overall AI model, and can further display various different types of information regarding the agents and overall model, including program code, architectural details, performance information, etc., analogous to displaying pieces of model information for both a plurality of trained models (agents/neural networks) and the one or more first untrained models (the overall AI model that the agents are included within));
a receiving unit to receive input made by a user (e.g. paragraphs 0086-0091, Fig. 7, design generation GUI used to depict agents/neural networks; receiving configuration of agents forming AI model via GUI, such as user dragging and dropping agents within design area, etc.; receiving agent definition via user interaction with design generation GUI, where agent definition defines neural network that needs to be trained based on training data; compiling agent definition to generate compiled code which implements various layers of neural network and connections between layers; synthesizing compiled code to generate initial version of the network; instantiating instance of the network; initial version of the network is untrained; paragraph 0092-0094, updating design generation GUI to expose underlying data associated with agent, such as various panels and a graphical depiction of the network architecture with which the user can interact to apply modifications to the neural network; correspondingly updating and re-compiling based on user modifications); and
a training unit to train a second untrained model generated by using a trained model based on one of the pieces of model information that is received by the receiving unit and that is input by the user (e.g. paragraphs 0092-0094, updating design generation GUI to expose underlying data associated with agent, such as various panels and a graphical depiction of the network architecture with which the user can interact to apply modifications to the neural network; correspondingly updating and re-compiling based on user modifications; paragraph 0182, Fig. 35, GUI showing two different versions of neural network; user interacting with modification element 3504 to increase or decrease the size of a given layer included in the network architecture; performing comparative analysis with different versions of network architecture to generate additional performance data; paragraph 0184-0185, alternate versions of network architectures generated based on user modifications to network architecture; performance of different versions of model during training; Figs. 36 and 37, showing multiple different versions of a neural network having different layer sizes, along with respective model information including performance and accuracy information; paragraph 0198, generating performance data for each version of neural network; for given version of neural network, performance data indicating how accuracy of neural network changes during training; i.e. an initial/first version of an overall neural network/AI model may be generated based on a series of agents (which are themselves neural networks, and therefore analogous to models, which are also trained, such that an untrained model, the overall AI model, can be generated using a plurality of trained models, i.e. 
the trained agents making up the AI model) and the overall AI model or one of its constituent agents/neural networks can subsequently be modified, such as by changing the size of one or more layers, to generate additional, new versions of the trained neural network/model, where these additional/new versions have not specifically been trained and are therefore untrained, but then may be subsequently trained (i.e. in order to at least provide comparisons between different versions of the models’ performance during training), analogous to training a second untrained model which is one of the output pieces of model information, such as training an untrained modified version of a neural network/model/agent, by using a trained model (i.e. using a trained modified agent/model selected/generated based on user input to generate a second untrained model which is to be subsequently trained, or using a trained model to generate a modified version of the trained model, where the modified version of the model has not been specifically trained following the modification, and subsequently training the modified version of the model)).
With respect to claim 20, Sikka teaches a learning device comprising: one or more hardware processors configured to function as (e.g. paragraphs 0056-0060, devices including processors to execute software applications and memory/storage storing the software applications; paragraph 0224-0227, described subject matter embodied as system, method, or computer program product; storage medium storing program for use in connection with instruction execution system; instructions executed by processor to implement described acts, etc.):
a model generation unit to generate one or more first untrained models with different sizes by using each of a plurality of trained models (e.g. paragraph 0077, GUI panel 500 of Fig. 5 includes agent panel 510; paragraph 0078, agent panel 510 includes list of available agents that perform specific tasks; agents are neural network based agents; paragraph 0079, user interactions causing agents selected from panel 510 to be arranged to produce AI model, which is a collection of neural networks; paragraph 0080, neural network based agents trained based on training data; paragraphs 0092-0094, updating design generation GUI to expose underlying data associated with agent, such as various panels and a graphical depiction of the network architecture with which the user can interact to apply modifications to the neural network; correspondingly updating and re-compiling based on user modifications; paragraph 0182, Fig. 35, GUI showing two different versions of neural network; user interacting with modification element 3504 to increase or decrease the size of a given layer included in the network architecture; performing comparative analysis with different versions of network architecture to generate additional performance data; Figs. 36 and 37, showing multiple different versions of a neural network having different layer sizes, along with respective model information including performance and accuracy information; i.e. an overall AI model may be generated and trained based on a series of agents (which are themselves neural networks, and therefore analogous to models, which are also trained, such that an untrained model, the overall AI model, can be generated using a plurality of trained models, i.e. the trained agents making up the AI model), where the overall AI model itself has not yet been trained/is untrained (i.e. as a whole));
an output control unit to output pieces of model information on each of learning models including the plurality of trained models and the one or more first untrained models (e.g. paragraph 0077, GUI panel 500 of Fig. 5 includes agent panel 510; paragraph 0078, agent panel 510 includes list of available agents that perform specific tasks; agents are neural network based agents; paragraph 0079, user interactions causing agents selected from panel 510 to be arranged to produce AI model, which is a collection of neural networks; paragraph 0081, displaying underlying data associated with any of agents in response to user input, such as displaying associated program code; paragraph 0082, Fig. 6, displaying underlying data associated with agent superimposed over GUI panel 500; paragraph 0179, Fig. 32, network description GUI depicting performance data associated with neural network, such as accuracy graph; paragraph 0180, Fig. 33, GUI depicting other performance data associated with neural network; paragraph 0185, Fig. 36, network description GUI displaying comparative performance data associated with different versions of given neural network; alternate network architectures (with increased/decreased layer sizes, etc.) displayed along with accuracy graph 3610 including plots 3612 and 3622 that represent the accuracy of different versions of the neural network, corresponding to respective neural network architectures; paragraph 0186, Fig. 37, displaying other comparative performance data associated with different versions of given neural network; comparison panel 3700 including alternate network architectures (with increased/decreased layer sizes) and comparison panels 3712 and 3722 that correspond to those network architectures and convey various performance data associated with the respective network architectures (including correctness, memory usage, training time, inference time, etc.), allowing the user to evaluate whether modifications made to the neural network increase or decrease performance; i.e. the GUI can display both the plurality of agents which are neural networks/models making up the overall AI model as well as their arrangement to form the overall AI model, and can further display various different types of information regarding the agents and overall model, including program code, architectural details, performance information, etc., analogous to displaying pieces of model information for both a plurality of trained models (agents/neural networks) and the one or more first untrained models (the overall AI model that the agents are included within));
a receiving unit to receive input made by a user (e.g. paragraphs 0086-0091, Fig. 7, design generation GUI used to depict agents/neural networks; receiving configuration of agents forming AI model via GUI, such as user dragging and dropping agents within design area, etc.; receiving agent definition via user interaction with design generation GUI, where agent definition defines neural network that needs to be trained based on training data; compiling agent definition to generate compiled code which implements various layers of neural network and connections between layers; synthesizing compiled code to generate initial version of the network; instantiating instance of the network; initial version of the network is untrained; paragraph 0092-0094, updating design generation GUI to expose underlying data associated with agent, such as various panels and a graphical depiction of the network architecture with which the user can interact to apply modifications to the neural network; correspondingly updating and re-compiling based on user modifications); and
a training unit to train, among the pieces of model information that are output by the output control unit, a second untrained model corresponding to one of the pieces of model information selected by the input made by the user (e.g. paragraph 0063, GUI providing user with tools for designing and connecting agents; training neural networks included in agents; paragraph 0071, training initial network to generate trained network; paragraph 0073, hyperparameter panel receiving hyperparameters influencing how neural network is trained from user; paragraph 0080, training neural network based agents within AI model based on training data; paragraphs 0092-0094, updating design generation GUI to expose underlying data associated with agent, such as various panels and a graphical depiction of the network architecture with which the user can interact to apply modifications to the neural network; correspondingly updating and re-compiling based on user modifications; paragraph 0182, Fig. 35, GUI showing two different versions of neural network; user interacting with modification element 3504 to increase or decrease the size of a given layer included in the network architecture; performing comparative analysis with different versions of network architecture to generate additional performance data; paragraph 0184-0185, alternate versions of network architectures generated based on user modifications to network architecture; performance of different versions of model during training; Figs. 36 and 37, showing multiple different versions of a neural network having different layer sizes, along with respective model information including performance and accuracy information; paragraph 0198, generating performance data for each version of neural network; for given version of neural network, performance data indicating how accuracy of neural network changes during training; i.e. 
one of the multiple agents/neural networks (either prior to or following modification) selected, input, and/or interacted with by user via GUI is trained).
With respect to claim 12, Sikka teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the output control unit outputs a graph representing a relation between accuracy and performance included in each of the pieces of model information (e.g. paragraph 0179, Fig. 32, displaying accuracy graph 3210 representing how accuracy of neural network changes over time during training; paragraph 0182, Fig. 34, displaying amount of memory consumed when executing neural network in the form of memory chart 3410 which is a bar graph indicating amount of memory consumed during execution of each layer set forth in network architecture; paragraph 0185, Fig. 36, displaying comparative performance data for different versions of neural network, including accuracy graph 3610 with plots representing accuracy of different versions of neural network during training; paragraph 0186, Fig. 37, displaying other comparative performance data associated with different versions of neural network, including correctness, memory usage, training time, and inference time for each depicted network version; i.e. both accuracy (as shown in Figs. 32 and 36) and performance (as shown in Fig. 34) information, including on a subcomponent/piece/layer level (as shown in Fig. 34) can be generated and displayed to the user, where each of these types of information can be displayed for each version of the network/model, within a single display (as shown in Fig. 37)).
With respect to claim 14, Sikka teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the output control unit outputs a computation amount that is performance included in each of the pieces of model information, the computation amount being output for each type of computation (e.g. paragraph 0182, Fig. 34, displaying amount of memory consumed when executing neural network in the form of memory chart 3410 which is a bar graph indicating amount of memory consumed during execution of each layer set forth in network architecture; paragraph 0186, Fig. 37, displaying other comparative performance data associated with different versions of neural network, including correctness, memory usage, training time, and inference time for each depicted network version; i.e. performance information may be displayed with respect to each individual subcomponent/layer of the model/network (i.e. where each layer may be associated with a type of computation), and performance information may also be displayed with respect to performance as indicated by relative amounts of time required for different types of computation, such as for training time, inference time, etc.).
With respect to claim 15, Sikka teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the output control unit outputs a change amount and the evaluation data, the change amount indicating a change between an inference result for evaluation data by a learning model before resizing and an inference result for the evaluation data by the learning model after the resizing (e.g. paragraph 0186, Fig. 37, displaying other comparative performance data associated with different versions of neural network, including correctness, memory usage, training time, and inference time for each depicted network version; i.e. as shown in Fig. 37, the alternative networks/models having increased and decreased layer size (as compared to the original model) are displayed as having various changes/differences, such as +2 or -5 percent correctness, +18 or -14 percent memory usage, -10 or +22 percent training time, and +4 or -3 percent inference time).
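For context, the "change amount" mapping above can be made concrete with a brief sketch (illustrative Python only; the function name and sample values are hypothetical and are not drawn from Sikka, the claims, or the specification):

```python
# Illustrative sketch: a signed percentage "change amount" between an
# inference metric of the model before resizing and the same metric after
# resizing (compare the +2%/-5% correctness deltas the rejection cites
# from Fig. 37 of Sikka).

def change_amount(before, after):
    """Signed percentage change from the pre-resize to the post-resize metric."""
    return round(100.0 * (after - before) / before, 1)

improved = change_amount(80.0, 81.6)   # correctness improved
degraded = change_amount(100.0, 95.0)  # correctness degraded
```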
Claim Rejections – 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Sikka in view of Yamamoto et al. (US 20190378014 A1).
With respect to claim 3, Sikka teaches all of the limitations of claim 1 as previously discussed. Sikka does not explicitly disclose wherein the model generation unit generates the one or more first untrained models with different sizes by executing pruning and morphing, the pruning being executed to determine a channel number ratio between layers of a trained model, the morphing being executed to increase or reduce the number of channels included in the layers while maintaining the channel number ratio between the layers determined by the pruning.
However, Yamamoto teaches wherein the model generation unit generates the one or more first untrained models with different sizes by executing pruning and morphing, the pruning being executed to determine a channel number ratio between layers of a trained model, the morphing being executed to increase or reduce the number of channels included in the layers while maintaining the channel number ratio between the layers determined by the pruning (e.g. paragraph 0036, reducing load of filters in trained model at each layer by channel units, referred to as “pruning”; paragraph 0041, accompanying channel deletion in L layer, number of channels in the L layer input reduced to two; number of output channels from the L layer remains at two; however, supposing that pruning is also performed at the L+1 layer, then the number of output channels from the L layer would also be reduced commensurate to the reduction in the number of channels of the L+1 layer; i.e. the system is configured to perform pruning/deletion of channels in multiple different layers, and to determine relative corresponding numbers of input and output channels between layers, analogous to determining a channel number ratio between layers of the trained model (i.e. as provided in the example, determining a number of output channels in layer L and a number of input channels in layer L+1, and determining a value such that commensurate/corresponding pruning of each may occur; in the example of paragraph 0041, the number of output channels in the L layer is 2 and the number of input channels in the L+1 layer is also 2, so the ratio/relation between these numbers of channels may be that the numbers of channels in each layer are equal (i.e. 1:1, etc.)); in addition, as cited, pruning/morphing may be applied to output or input channels of one layer, and a commensurate pruning/morphing of corresponding output or input channels in another layer may be performed, analogous to morphing to increase or reduce the number of channels included in the layers while maintaining the channel number ratio between the layers determined by the pruning (i.e. in the example of paragraph 0041, it has been determined that there are the same number of output channels in layer L and input channels in layer L+1, and this proportion/ratio/relationship is maintained by reducing a commensurate number of channels in a layer when a number of channels is reduced in the other layer)).
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and having the teachings of Sikka and Yamamoto before them, to have modified the teachings of Sikka (directed to techniques for creating, analyzing, and modifying neural networks) to incorporate the teachings of Yamamoto (directed to neural network load reduction), to include the capability to determine a proportion/ratio/relationship between a number of output channels in a first layer and a number of input channels in a second layer and, when modifying the number of channels in one of the layers (such as removing a channel from the second layer), maintain the proportion/ratio/relationship by modifying a commensurate number of channels in the other layer (such as removing a channel from the first layer, as taught by Yamamoto). One of ordinary skill would have been motivated to perform such a modification in order to reduce the load of filters in a trained model as described in Yamamoto (paragraph 0036).
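The pruning-then-morphing operation recited in claim 3 can be sketched as follows (an illustrative Python sketch only; the function names and the 1:2:4 example are hypothetical and not drawn from Yamamoto): pruning fixes the per-layer channel counts, and thereby the inter-layer ratio, and morphing then scales every layer by a common factor so that ratio is preserved.

```python
# Illustrative sketch (not from the record): pruning determines per-layer
# channel counts, establishing a channel number ratio between layers;
# morphing then scales every layer by a common factor, preserving the ratio.

def prune_channels(channels, keep_fraction):
    """Reduce each layer's channel count, fixing the inter-layer ratio."""
    return [max(1, round(c * keep_fraction)) for c in channels]

def morph_channels(pruned, scale):
    """Grow or shrink every layer by the same factor, keeping the ratio."""
    return [max(1, round(c * scale)) for c in pruned]

trained = [64, 128, 256]                 # channels per layer, trained model
pruned = prune_channels(trained, 0.5)    # ratio 1:2:4 carried over
smaller = morph_channels(pruned, 0.5)    # smaller untrained model, same ratio
larger = morph_channels(pruned, 2.0)     # larger untrained model, same ratio
```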
With respect to claim 4, Sikka in view of Yamamoto teaches all of the limitations of claim 3 as previously discussed, and Yamamoto further teaches wherein the model generation unit adjusts the number of channels of each of the layers of a generated first untrained model to a value satisfying a predetermined setting condition (e.g. paragraphs 0075-0077, selecting channels satisfying predetermined relationship between output feature values and predetermined threshold value as redundant channels; channel having output feature value below threshold value considered to not be of much importance, and is accordingly selected as a redundant channel; utilizing threshold value to select redundant channels whose output feature values expressed by statistic which is below threshold value, allowing efficient number of redundant channels to be selected; deleting redundant channels from the layer, reducing the number of channels in the layer; i.e. the numbers of channels in the layers are adjusted, such as reduced via deletion, using a predetermined value/relationship for determining which channels are redundant/unimportant, such that the remaining number of channels in the layer satisfies the predetermined value/relationship).
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and having the teachings of Sikka and Yamamoto before them, to have modified the teachings of Sikka (directed to techniques for creating, analyzing, and modifying neural networks) to incorporate the teachings of Yamamoto (directed to neural network load reduction), to include the capability to adjust the numbers of channels in each of the layers to a value which satisfies a predetermined value/relationship (as taught by Yamamoto). One of ordinary skill would have been motivated to perform such a modification in order to reduce the load of filters in a trained model as described in Yamamoto (paragraph 0036).
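The threshold-based selection Yamamoto is cited for can be illustrated with a short sketch (illustrative Python only; the function names, statistics, and threshold are hypothetical, not taken from Yamamoto): channels whose output-feature statistic falls below a predetermined threshold are treated as redundant and deleted from the layer.

```python
# Hypothetical illustration of threshold-based redundant-channel selection:
# a channel whose output feature statistic is below the predetermined
# threshold is deemed unimportant and deleted, reducing the layer's size.

def select_redundant(feature_stats, threshold):
    """Indices of channels whose output statistic falls below the threshold."""
    return [i for i, s in enumerate(feature_stats) if s < threshold]

def delete_channels(channels, redundant):
    """Remove the selected redundant channels from the layer."""
    drop = set(redundant)
    return [c for i, c in enumerate(channels) if i not in drop]

stats = [0.91, 0.03, 0.47, 0.08]            # e.g. mean activation per channel
redundant = select_redundant(stats, 0.1)    # channels below the threshold
kept = delete_channels(["c0", "c1", "c2", "c3"], redundant)
```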
Claims 6, 8, 9, 11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Sikka in view of Clemons et al. (US 20230111375 A1).
With respect to claim 6, Sikka teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the one or more hardware processors are further configured to function as an accuracy estimation unit to estimate accuracy by using each of the plurality of trained models, and the output control unit outputs model information including the accuracy and performance of each of the plurality of trained models (e.g. paragraph 0179, Fig. 32, GUI displaying performance panel 3200 including accuracy graph; paragraph 0185, Fig. 36, displaying comparative performance data associated with different versions of given neural network, including plots representing accuracy of different versions of neural network during training; i.e. for each of the trained agents/neural networks, an accuracy may be determined/estimated and displayed via a GUI).
Sikka does not explicitly disclose wherein the one or more hardware processors are further configured to function as an accuracy estimation unit to estimate accuracy of each of the one or more first untrained models, and the output control unit outputs model information including the estimated accuracy and performance of each of the one or more first untrained models.
However, Clemons teaches wherein the one or more hardware processors are further configured to function as an accuracy estimation unit to estimate accuracy of each of the one or more first untrained models, and the output control unit outputs model information including the estimated accuracy and performance of each of the one or more first untrained models (e.g. paragraph 0027, applying configuration settings to augmented neural network and measuring accuracy of the output; determining correlation between configuration settings and performance constraints for desired level of accuracy; paragraph 0036, providing configuration settings, input tensors, etc. to performance estimation unit which produces a performance estimate of the augmented neural network model for the selected configuration settings, using the selected configuration settings and input tensor; performance estimation unit measures performance metrics during inference and updates the performance estimates for the configuration settings; paragraph 0039, outputting estimate of augmented neural network model performance; reduction in accuracy occurring from running augmented neural network model based on the constraints instead of the original neural network model; paragraph 0045, identifying configuration settings having highest accuracy that meets the target performance metric; paragraph 0054, tradeoff estimation used to identify configuration settings that maximize accuracy while satisfying target metric value; bins containing expected execution time, expected accuracy, and associated configuration settings; i.e. the system estimates and outputs performance information such as expected execution time and expected accuracy for a given set of augmented neural network configuration settings (corresponding to an augmented neural network which has not been specifically trained/is untrained with respect to the specific augmentation/performance constraints)).
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and having the teachings of Sikka and Clemons before them, to have modified the teachings of Sikka (directed to techniques for creating, analyzing, and modifying neural networks) to incorporate the teachings of Clemons (directed to augmenting and dynamically configuring a neural network model for real-time systems), to include the capability to estimate and output performance information such as expected/estimated execution time and expected/estimated accuracy for a given set of augmented neural network configuration settings (i.e. of a modified version of an original neural network). One of ordinary skill would have been motivated to perform such a modification in order to allow for dynamic reconfiguration of neural networks to meet specific performance constraints without requiring intervening training for the specific performance constraints as described in Clemons (paragraph 0023).
With respect to claim 13, Sikka teaches all of the limitations of claim 1 as previously discussed, and Sikka further teaches wherein the output control unit outputs information indicating performance of each of layers of each of the learning models defined by one of the pieces of model information (e.g. paragraph 0182, Fig. 34, displaying amount of memory consumed in form of memory chart 3410 which indicates amount of memory consumed during execution of each layer; i.e. for each trained agent/neural network, corresponding layer-level performance information may be displayed).
Sikka does not explicitly disclose wherein the output control unit outputs information indicating the number of channels. However, Clemons teaches wherein the output control unit outputs information indicating the number of channels (e.g. paragraphs 0026-0027, determining selected configuration settings for dynamically configuring augmented neural network based on performance constraints; augmentations implemented to convert trained model into augmented model include reductions in numbers of channels input and output to layers; paragraph 0035, configuration table 112 encoding modifications implemented via augmentations, including specifying how many of a layer’s input channels to retain; paragraph 0036, storing performance estimates in configuration table 112; paragraph 0045, receiving performance constraints, selecting configuration settings, storing configuration settings in configuration table 112; paragraph 0050, configuration settings written to configuration table 112; i.e. information indicating a number of channels, such as a number of channels to be retained may be output by at least storing the information indicating the number of channels in a configuration table).
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and having the teachings of Sikka and Clemons before them, to have modified the teachings of Sikka (directed to techniques for creating, analyzing, and modifying neural networks) to incorporate the teachings of Clemons (directed to augmenting and dynamically configuring a neural network model for real-time systems), to include the capability to output configuration settings, including information indicating a number of channels to be retained for layers of the neural network. One of ordinary skill would have been motivated to perform such a modification in order to allow for dynamic reconfiguration of neural networks to meet specific performance constraints without requiring intervening training for the specific performance constraints as described in Clemons (paragraph 0023).
With respect to claim 8, Sikka in view of Clemons teaches all of the limitations of claim 6 as previously discussed, and Clemons further teaches wherein the accuracy estimation unit estimates the accuracy of each of the one or more first untrained models generated by deletion of channels included in each of the plurality of trained models, the accuracy of the first untrained model being estimated on the basis of the accuracy of the trained model, importance of each of channels included in a layer of the trained model, and importance of each of the channels included in a layer of the first untrained model (e.g. paragraph 0027, applying configuration settings to augmented neural network and measuring accuracy of the output; determining correlation between configuration settings and performance constraints for desired level of accuracy; paragraph 0036, providing configuration settings, input tensors, etc. to performance estimation unit which produces a performance estimate of the augmented neural network model for the selected configuration settings, using the selected configuration settings and input tensor; performance estimation unit measures performance metrics during inference and updates the performance estimates for the configuration settings; paragraph 0038, execution graph of augmented neural network modified compared with execution graph of original neural network, such as by removing input channels to a layer, zeroing weight planes for a given input, reducing number of output channels for a layer, removing an entire weight from the layer, etc.; reducing input channels selectively applied to reduce input channels differently for each consumer or recipient while reducing output channels causes all consumers or recipients to receive reduced number of output channels; paragraph 0039, outputting estimate of augmented neural network model performance; reduction in accuracy occurring from running augmented neural network model based on the constraints instead of the original neural network model; paragraph 0045, identifying configuration settings having highest accuracy that meets the target performance metric; paragraph 0054, tradeoff estimation used to identify configuration settings that maximize accuracy while satisfying target metric value; bins containing expected execution time, expected accuracy, and associated configuration settings; paragraph 0116, indicating that assignments of weights are based on importance; i.e. the system estimates and outputs performance information such as expected execution time and expected accuracy for a given set of augmented neural network configuration settings by deleting channels (and associated weights) from the model, based on the accuracy of the original model and the importance of the channels in the models, where this importance is indicated via selection or non-selection of the channels for deletion as indicated by its corresponding effect on associated weights which are indicative of importance).
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and having the teachings of Sikka and Clemons before them, to have modified the teachings of Sikka (directed to techniques for creating, analyzing, and modifying neural networks) to incorporate the teachings of Clemons (directed to augmenting and dynamically configuring a neural network model for real-time systems), to include the capability to estimate and output performance information such as expected/estimated execution time and expected/estimated accuracy for a given set of augmented neural network configuration settings, such as deletion of channels in the augmented/untrained model, based on the accuracy of the trained model. One of ordinary skill would have been motivated to perform such a modification in order to allow for dynamic reconfiguration of neural networks to meet specific performance constraints without requiring intervening training for the specific performance constraints as described in Clemons (paragraph 0023).
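One simple way to picture an importance-based accuracy estimate of the kind claim 8 recites is the following sketch (an assumed illustration only, not a formula from Clemons or the claims; the scaling rule and all names are hypothetical): the untrained model's accuracy is approximated from the trained model's accuracy scaled by the share of channel importance that survives the deletion.

```python
# Hypothetical illustration: approximate the pruned (untrained) model's
# accuracy from the trained model's accuracy and the fraction of total
# per-channel importance retained after channel deletion.

def estimate_pruned_accuracy(trained_acc, importances, kept):
    """Scale the trained accuracy by the retained share of channel importance."""
    retained = sum(importances[i] for i in kept) / sum(importances)
    return trained_acc * retained

imp = [0.5, 0.1, 0.3, 0.1]     # per-channel importance in a layer
est = estimate_pruned_accuracy(0.90, imp, kept=[0, 2])  # keeps 80% of importance
```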
With respect to claim 9, Sikka in view of Clemons teaches all of the limitations of claim 6 as previously discussed, and Clemons further teaches wherein the accuracy estimation unit estimates the accuracy of each of the one or more first untrained models by using one or more of the plurality of trained models each having a parameter size difference from a first untrained model as an accuracy estimation target, the parameter size difference being equal to or smaller than a threshold value (e.g. paragraph 0027, applying configuration settings to augmented neural network and measuring accuracy of the output; determining correlation between configuration settings and performance constraints for desired level of accuracy; paragraph 0036, providing configuration settings, input tensors, etc. to performance estimation unit which produces a performance estimate of the augmented neural network model for the selected configuration settings, using the selected configuration settings and input tensor; performance estimation unit measures performance metrics during inference and updates the performance estimates for the configuration settings; paragraph 0039, outputting estimate of augmented neural network model performance; reduction in accuracy occurring from running augmented neural network model based on the constraints instead of the original neural network model; paragraph 0045, identifying configuration settings having highest accuracy that meets the target performance metric; paragraph 0054, tradeoff estimation used to identify configuration settings that maximize accuracy while satisfying target metric value; bins containing expected execution time, expected accuracy, and associated configuration settings; i.e. the system estimates and outputs performance information such as expected execution time and expected accuracy for a given set of augmented neural network configuration settings).
Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and having the teachings of Sikka and Clemons before them, to have modified the teachings of Sikka (directed to techniques for creating, analyzing, and modifying neural networks) to incorporate the teachings of Clemons (directed to augmenting and dynamically configuring a neural network model for real-time systems), to include the capability to estimate and output performance information such as expected/estimated execution time and expected/estimated accuracy for a given set of augmented neural network configuration settings (i.e. of a modified version of an original neural network). One of ordinary skill would have been motivated to perform such a modification in order to allow for dynamic reconfiguration of neural networks to meet specific performance constraints without requiring intervening training for the specific performance constraints as described in Clemons (paragraph 0023).
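The selection step recited in claim 9 can be sketched briefly (illustrative Python only; the function, data layout, and threshold are hypothetical and not drawn from Clemons): trained models whose parameter-size difference from the untrained target is at or below a threshold are kept as references for the accuracy estimate.

```python
# Hypothetical sketch of the claim-9 selection step: keep only those trained
# models whose parameter-size difference from the untrained target model is
# equal to or smaller than a threshold value.

def select_reference_models(trained, target_params, threshold):
    """trained: list of (name, param_count, accuracy) tuples."""
    return [m for m in trained if abs(m[1] - target_params) <= threshold]

models = [("a", 1_000_000, 0.71), ("b", 5_000_000, 0.82), ("c", 20_000_000, 0.91)]
refs = select_reference_models(models, 4_000_000, 2_000_000)  # only "b" qualifies
```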
With respect to claim 11, Sikka in view of Clemons teaches all of the limitations of claim 6 as previously discussed, and Clemons further teaches wherein the accuracy estimation unit estimates again accuracy of a learning model in accordance with one of the pieces of model information that is changed when a change of the one of the pieces of model information, which is output, is received (e.g. paragraph 0023, dynamically configuring augmented neural network model to adapt to real-time changes in performance constraints; paragraph 0036, providing configuration settings, input tensors, etc. to performance estimation unit which produces a performance estimate of the augmented neural network model for the selected configuration settings, using the selected configuration settings and input tensor; performance estimation unit measures performance metrics during inference and updates the performance estimates for the configuration settings; paragraph 0037, specific configuration settings reused by augmented neural network model until performance metric/target is updated; paragraph 0039, outputting estimate of augmented neural network model performance; reduction in accuracy occurring from running augmented neural network model based on the constraints instead of the original neural network model; paragraph 0045, identifying configuration settings having highest accuracy that meets the target performance metric; paragraph 0054, tradeoff estimation used to identify configuration settings that maximize accuracy while satisfying target metric value; bins containing expected execution time, expected accuracy, and associated configuration settings; i.e. the performance constraints for the neural network model may change dynamically, in real-time, such that the cited processes including accuracy estimation will be repeated when a change in the required information for the model (such as a desired accuracy or performance) occurs).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Sikka and Clemons in front of him to have modified the teachings of Sikka (directed to techniques for creating, analyzing, and modifying neural networks), to incorporate the teachings of Clemons (directed to augmenting and dynamically configuring a neural network model for real-time systems) to include the capability to estimate and output performance information such as expected/estimated execution time and expected/estimated accuracy for a given set of augmented neural network configuration settings (i.e. of a modified version of an original neural network), where the configuration settings may change dynamically in real time, such that the estimated performance and accuracy information is updated based on updates to the configuration settings. One of ordinary skill would have been motivated to perform such a modification in order to allow for dynamic reconfiguration of neural networks to meet specific performance constraints without requiring intervening training for the specific performance constraints as described in Clemons (paragraph 0023).
Claims 7 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Sikka in view of Clemons, further in view of Guttmann (US 20180336509 A1).
With respect to claim 7, Sikka in view of Clemons teaches all of the limitations of claim 6 as previously discussed. Sikka and Clemons do not explicitly disclose wherein the accuracy estimation unit estimates the accuracy of each of the one or more first untrained models by interpolation and extrapolation using each of the plurality of trained models.
However, Guttmann teaches wherein the accuracy estimation unit estimates the accuracy of each of the one or more first untrained models by interpolation and extrapolation using each of the plurality of trained models (e.g. paragraph 0010, updating inference model based on processing resources; utilizing updated inference model; paragraphs 0153-0154, selecting inference model with best estimated performances for available resources; rules for selection of inference model with best estimated performance including selection of inference model with best estimated accuracy; performance of inference model estimated by interpolating and extrapolating the performances of the inference model when utilized with other available processing resources from past records of the performances of the inference model when utilized using other processing resources, by using a machine learning model trained to estimate performances of the inference model when utilized using different processing resources, such as based on the properties of the inference model, etc.; i.e. an updated model may be obtained (analogous to an untrained model), and the performance, including accuracy, of the model may be estimated by performing interpolation and extrapolation based on the historical (i.e. prior to update/trained) model performance).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Sikka, Clemons, and Guttmann in front of him to have modified the teachings of Sikka (directed to techniques for creating, analyzing, and modifying neural networks) and Clemons (directed to augmenting and dynamically configuring a neural network model for real-time systems), to incorporate the teachings of Guttmann (directed to use of inference models, such as in a dataset management system) to include the capability to estimate and output performance information such as expected/estimated accuracy for an untrained model (such as an updated version of a previously trained model) by using interpolation and extrapolation based on the trained model. One of ordinary skill would have been motivated to perform such a modification in order to allow for usage of inference models based on available processing resources, and performance of personalized quality assurance of inference models, as described in Guttmann (paragraphs 0010-0011).
With respect to claim 10, Sikka in view of Clemons teaches all of the limitations of claim 6 as previously discussed. Sikka and Clemons do not explicitly disclose wherein the accuracy estimation unit excludes, from the plurality of trained models used for accuracy estimation of each of the one or more first untrained models, a trained model whose change amount of performance relative to the performance before resizing is equal to or smaller than a threshold value, among trained models generated by the resizing.
However, Guttmann teaches wherein the accuracy estimation unit excludes, from the plurality of trained models used for accuracy estimation of each of the one or more first untrained models, a trained model whose change amount of performance relative to the performance before resizing is equal to or smaller than a threshold value, among trained models generated by the resizing (e.g. paragraph 0010, updating inference model based on processing resources; utilizing updated inference model; paragraphs 0153-0154, selecting inference model with best estimated performances for available resources; rules for selection of inference model with best estimated performance including selection of inference model with best estimated accuracy; performance of inference model estimated by interpolating and extrapolating the performances of the inference model when utilized with other available processing resources from past records of the performances of the inference model when utilized using other processing resources, by using a machine learning model trained to estimate performances of the inference model when utilized using different processing resources, such as based on the properties of the inference model, etc.; paragraphs 0163-0164, comparing updated inference model with inference model of step 920 to determine if the update to the updated inference model is below a selected threshold; when it is determined that the update is below the threshold, withholding and/or foregoing step 960, utilizing the updated inference model; i.e. where a first original inference model is provided, and then an updated inference model is considered for use, if the change amount between the original and updated inference models is less than a threshold amount, the updated inference model may not be utilized, and may therefore be excluded, such as from subsequent accuracy estimations used in selecting inference models for use/utilization).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Sikka, Clemons, and Guttmann in front of him to have modified the teachings of Sikka (directed to techniques for creating, analyzing, and modifying neural networks) and Clemons (directed to augmenting and dynamically configuring a neural network model for real-time systems), to incorporate the teachings of Guttmann (directed to use of inference models, such as in a dataset management system) to include the capability to exclude from use (including from future/subsequent accuracy estimations) an updated version of a model which has a change amount, when compared to an original model, which is below a threshold value. One of ordinary skill would have been motivated to perform such a modification in order to allow for usage of inference models based on available processing resources, and performance of personalized quality assurance of inference models, as described in Guttmann (paragraphs 0010-0011).
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. “The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain.” In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). Further, a reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including nonpreferred embodiments. Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989). See also Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); Celeritas Technologies Ltd. v. Rockwell International Corp., 150 F.3d 1354, 1361, 47 USPQ2d 1516, 1522-23 (Fed. Cir. 1998).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMY L STANLEY whose telephone number is (469)295-9105. The examiner can normally be reached on Monday-Friday from 9:00 AM to 5:00 PM CST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar, can be reached at telephone number (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/JEREMY L STANLEY/
Primary Examiner, Art Unit 2127