DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/10/2025 has been entered.
Status of Claims
Claims 1, 7, 11, and 20 have been amended by Applicant. Claims 3, 10, and 13 are cancelled, and new claims 23 and 24 have been added. Claims 1-2, 4-9, 11-12, and 14-24 are currently pending.
Response to Arguments
Claim Rejections under 35 U.S.C. 103
The rejection of claims 1-2, 4-9, 11-12, and 14-22 under 35 U.S.C. 103 has been withdrawn based on Applicant’s amendments to claims 1, 11, and 20. However, upon further consideration and in view of said amendments, a new ground of rejection has been made herein.
Applicant argues (on pages 12 and 14) against the amended limitations in claims 1, 11, and 20 and new claims 23 and 24 as they pertain to the teachings of the Leeman-Munk reference. Applicant’s arguments with respect to new claims 23-24 and the amended limitations of claims 1, 11, and 20 are moot because the new ground of rejection and the rejection of claims 23 and 24 do not rely on Leeman-Munk for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4, 6-7, 11-14, 16, and 18-24 are rejected under 35 U.S.C. 103 as being unpatentable over Seide et al. (US 20180336461 A1, filed Jun. 15, 2017 and published Nov. 22, 2018) in view of Hammond et al. (US 20200250583 A1, filed Jan. 26, 2017 and published Jul. 27, 2017), further in view of Leeman-Munk et al. (US 20180095632 A1, filed Oct. 3, 2017 and published Apr. 5, 2018), and further in view of Ayala et al. (US 20070094168 A1, filed Aug. 12, 2005 and published Apr. 26, 2007).
Regarding claim 1, Seide teaches a computer-implemented method for generating a neural network (Seide, Abstract, teaches methods, systems, machine-readable media, and devices which operate a neural network defined by user code.), the method comprising:
receiving, via one or more text fields that are displayed within a graphical user interface, one or more text inputs that specify a neural network definition in at least one of the program code or mathematical notation, wherein the neural network definition corresponds to the neural network (Seide, Paragraph [0061], teaches input code may be received via a user interface, wherein the input code [i.e., “input code” reading on in at least one of the program code] is user generated code that configures, organizes, specifies, trains and/or operates a neural network. The neural network operating system may present a graphical user interface to the user that includes a text input field for receiving user generated code.);
generating an architectural representation of the neural network based on the neural network definition in the at least one program code or mathematical notation, wherein the architectural representation graphically depicts one or more neural network layers included in the neural network and topological information associated with the neural network (Seide, Paragraph [0064], teaches the graph module 140 initially assembles the neural network model [i.e., architectural representation] according to the commands of the user. As indicated, the graph module 140 may include input layers, hidden layers, output layers, or others as one skilled in the art may appreciate.; Seide, Paragraph [0023], teaches in one example, a user designs and trains a neural network, using a graphical user interface, by providing user code. The user code designates input layers, interior hidden layers, and output layers. The user code may also train the neural network, apply the neural network, or perform other operations related to the neural network.; Seide, Paragraph [0061], teaches input code may be received via a user interface, wherein the input code [i.e., “input code” reading on in at least one of the program code] is user generated code that configures, organizes, specifies, trains and/or operates a neural network.);
However, Seide does not distinctly disclose:
displaying the architectural representation within the graphical user interface;
receiving a modification to the architectural representation of the neural network via the graphical user interface;
updating, based on the modification to the architectural representation of the neural network and without user interaction with the at least one of program code or mathematical notation, a portion of the at least one of program code or mathematical notation corresponding to a portion of the architectural representation of the neural network impacted by the modification to the architectural representation of the neural network to generate a modified neural network definition corresponding to the neural network;
generating an updated architectural representation of the neural network based on the modified neural network definition;
displaying, within the graphical user interface, the updated architectural representation and text specifying the modified neural network definition in at least one program code or mathematical notation; and
training the neural network based on the modified neural network definition and the one or more hyperparameters.
Nevertheless, Hammond teaches:
receiving a modification to the architectural representation of the neural network via the graphical user interface (Hammond, Paragraph [0047], teaches an “Edit” short cut in the IDE for accessing a text editor for creating or modifying a source code in a pedagogical programming language defining a mental model, and a “Design” short cut in the IDE for accessing a mental-model designer for creating or modifying a mental model; Hammond, Paragraph [0159], further teaches author-based modification of the mental model by the typing in the textual mode can automatically modify the mental model in the mental-model designer, and author-based modification of the mental model by the mouse gestures in the graphical mode can automatically modify the mental model in the text editor.; [Note: the mental model, as taught by Hammond, is understood to read on the architectural representation of the neural network, and the “via the graphical user interface” limitation is read on the IDE editor.]);
updating, based on the modification to the architectural representation of the neural network and without user interaction with the at least one of program code or mathematical notation, a portion of the at least one of program code or mathematical notation corresponding to a portion of the architectural representation of the neural network impacted by the modification to the architectural representation of the neural network to generate a modified neural network definition corresponding to the neural network (Hammond, Paragraph [0047], teaches an “Edit” short cut in the IDE for accessing a text editor for creating or modifying a source code in a pedagogical programming language defining a mental model [Note: modifying the program code that defines the mental model is understood to read on updating the program code of the portion of the architectural representation of the neural network, as claimed.]; Hammond, Paragraph [0158], further teaches a method of an AI engine includes, in some embodiments, receiving a source code, generating an assembly code, proposing a neural-network layout, building an AI model, and training the AI model. Receiving the source code can include receiving the source code through an API exposed to a GUI. The GUI can be configured to enable an author to define a mental model with a pedagogical programming language, the mental model including an input, one or more concept nodes, one or more optional stream nodes, and an output. The GUI can be further configured to enable the author to define the mental model in a textual mode, a graphical mode, or both the textual mode and the graphical mode. Generating the assembly code can include generating the assembly code from the source code with a compiler of the AI engine configured to work with the GUI. Proposing a neural-network layout can include proposing the neural-network layout including one or more neural-network layers from the assembly code with an architect AI-engine module of the AI engine. Building the AI model can include building the AI model including the one or more neural-network layers from the neural-network layout with a learner AI-engine module of the AI engine. Training the AI model can include training the AI model on the mental model with an instructor AI-engine module of the AI engine.; Hammond, Paragraph [0159], teaches the GUI can be an IDE including a text editor and a mental-model designer. The text editor can be configured to enable the author to define the mental model including one or more curriculums for training the AI model respectively on the one or more concept nodes via typing in the textual mode. The mental-model designer can be configured to enable the author to define the mental model via mouse gestures in the graphical mode. Author-based modification of the mental model by the typing in the textual mode can automatically modify the mental model in the mental-model designer, and author-based modification of the mental model by the mouse gestures in the graphical mode can automatically modify the mental model in the text editor.); and
generating an updated architectural representation of the neural network based on the modified neural network definition (Hammond, Paragraph [0037], teaches AI systems and methods provided herein enable users such as software developers to design a neural network layout or neural network topology 102, build a neural network 104, train the neural network 104 to provide a trained neural network 106, and deploy the trained neural network 106 as a deployed neural network 108 in any of a number of desired ways. For example, the trained AI model or the trained neural network 106 can be deployed in or used with a software application or a hardware-based system; Hammond, Paragraph [0049], teaches the IDE is configured such that author-based modifications of the mental model made by the typing in the textual mode are automatically replicated in the mental model as represented in the mental-model designer [Note: Hammond [0049] understood to read on the limitation as claimed].; Hammond, Paragraph [0089], further teaches the AI engine takes in a description of a problem and how one would go about teaching concepts covering aspects of the problem to be solved, and the AI engine compiles the coded description into lower-level structured data objects that a machine can more readily understand, builds a network topology of the main problem concept and sub-concepts covering aspects of the problem to be solved, trains codified instantiations of the sub-concepts and main concept, and executes a trained AI model containing one, two, or more neural networks.).
Hammond further teaches:
receiving, via the one or more text fields displayed within the graphical user interface, one or more hyperparameters associated with training of the neural network (Hammond, Paragraph [0130] teaches if the BRAIN server picks Deep Q-Learning for training a mental model, it would also pick an appropriate topology, hyper-parameters, and initial weight values for synapses.);
and training the neural network based on the modified neural network definition and the one or more hyperparameters (Hammond, Paragraph [0049] teaches the IDE is configured such that author-based modifications of the mental model made by the typing in the textual mode are automatically replicated in the mental model as represented in the mental-model designer. Likewise, author-based modifications of the mental model by the mouse gestures in the graphical mode are automatically replicated in the mental-model as represented in the text editor.; Hammond, Paragraph [0050] further teaches the IDE is also configured to enable an author to access training data from a training-data source such as through the “Data” short cut; analytical tools for analyzing aspects of training a neural network such as a graphical representation of a trained neural network's performance through the training pane 422; and tools for configuring and deploying a trained neural network through the “Deploy” short cut or the deployment configurator 416.; Hammond, [claim 4] further teaches “wherein the GUI is configured to enable an author to define a proposed model with a pedagogical programming language, the proposed model including an input, one or more concept nodes, and an output, and wherein the GUI is further configured to enable the author to provide a program annotation specifying an execution behavior for the proposed model; generate an assembly code from the source code with a compiler of an artificial intelligence (“AI”) engine; and build an executable, trained AI model based on the proposed model including a neural-network layout having one or more layers derived from the assembly code.”; Hammond, Paragraph [0130] teaches if the BRAIN server picks Deep Q-Learning for training a mental model, it would also pick an appropriate topology, hyper-parameters, and initial weight values for synapses. 
A benefit of having the heuristics available to be used programmatically is that the BRAIN server is not limited to a single choice; it can select any number of possible algorithms, topologies, etc., train a number of BRAINS in parallel, and pick the best result.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the operation of a neural network defined by user code, as taught by Seide, with the graphical user interface to an artificial intelligence engine, as taught by Hammond, because the AI engine can select any number of possible algorithms, topologies, etc., train a number of BRAINs in parallel, and pick the best result (Hammond, Paragraph [0130]).
Although the combination at least suggests the limitation displaying the architectural representation within the graphical user interface, Leeman-Munk more clearly teaches the limitation as provided below.
Leeman-Munk teaches displaying the architectural representation within the graphical user interface (Leeman-Munk, Paragraph [0055], teaches certain aspects and features of the present disclosure relate to a graphical user interface (GUI) for visualizing a deep neural network. The GUI can include a node-link diagram that visually represents nodes (neurons) in the deep neural network and connections (links) between the nodes. [i.e., architectural representation within the GUI]; Leeman-Munk, Paragraph [0155], teaches FIG. 11 is an example of a GUI 1100 for visualizing deep neural networks according to some aspects. The GUI 1100 can enable a user to (1) explore a deep neural network using one or more visualizations of the deep neural network; (2) quickly determine information about the deep neural network based on color coding; (3) flexibly control and focus on desired visual information with threshold, inspection, and tooltip operations; (4) explore, discover, and compare patterns associated with the deep neural network; or (5) any combination of these. A user may be able to analyze a deep neural network from new perspectives and uncover insights into how the deep neural network functions using the GUI 1100.; Leeman-Munk, Paragraph [0173], teaches in some examples, a user may be able to select the “Animate” tab 1134 to cause the GUI 1100 to enter an animation mode. An example of a GUI 1702 in animation mode is shown in FIG. 17. In animation mode, GUI 1702 can animate the visualization using aggregate values resulting from multiple inputs into the deep neural network.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the operation of a neural network defined by user code, as taught by Seide in view of Hammond, to further include the interactive visualizations of a neural network, as taught by Leeman-Munk, in order to provide an intuitive, easy-to-use GUI that can enable users to obtain a better understanding of how a deep neural network is operating, why the deep neural network is making certain decisions, and how the deep neural network produces final results. This may lead to a better understanding of how to train and build deep neural networks that are more efficient, robust, and accurate (Leeman-Munk, Paragraph [0058]).
Examiner believes that the combination teaches or at least suggests displaying, within the graphical user interface, the updated architectural representation and text specifying the modified neural network definition in at least one program code or mathematical notation (Leeman-Munk, Paragraph [0159], teaches in the example shown in FIG. 11, the symbols in the node-link diagram 1102 are color coded to represent how the deep neural network responded to a user input (specifically, the word “animation”). A user may be able to provide any desired input via input box 1132. A computing device can receive the user input, feed the user input into the deep neural network, and update the color coding of the node-link diagram 1102 based on the results. For example, a representation of a node in the input layer 1104 can be color coded to indicate a weight of the node in the deep neural network (e.g., in response to user input).). However, Ayala more clearly teaches the limitation as provided below.
Ayala teaches displaying, within the graphical user interface, the updated architectural representation and text specifying the modified neural network definition in at least one program code or mathematical notation (Ayala, Paragraph [0086], teaches the network topology panel 168 of the display interface window 160 is shown after further configuration steps taken by a user to establish neuron connections. At the point shown, the user may have selected a particular neuron, such as neuron N 1,1, via the mouse pointer or other graphical user interface selection mechanism, to reveal configuration details regarding the neuron or to facilitate the configuration thereof. To that end, the network editor panel 164 is modified from the view shown in FIG. 4, which is generally directed to the display of information at a network level, to instead display information regarding the selected neuron. In this exemplary embodiment, the panel 164 reveals a number of input data boxes for user specification of the neuron bias, learning rate, activation function, and one or more parameters for the activation function. The bias value may be entered directly by the user or set via selection of a random number generator button shown adjacent thereto. The activation function may be selected from a drop-down menu made available in connection with the corresponding input field. Updates to the neuron parameters via the panel 164 may then be revealed in the network data panel 174 prior to selection of a different neuron for configuration. To return to the network level view (or version) of the editor panel 164, the user may point and click the mouse pointer button at a portion of the network topology panel 168 not having a neuron or connection.; Ayala, Paragraph [0036], further teaches FIG. 8 is a simplified depiction of an exemplary window providing a network configuration tool in support of specification of a customized activation function for use in the configuration and training of the artificial neural network; [Note: Fig. 8, element 184 shows displayed mathematical functions, reading on specifying the modified neural network definition in mathematical notation]).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the operation of a neural network defined by user code, as taught by Seide in view of Hammond and Leeman-Munk, to further include other types of interactive visualizations [i.e., GUIs] of neural network architectures and updating of neural network features, as taught by Ayala, in order to improve a user’s ability to manage pattern data sets for both training and testing. Furthermore, the interactive graphical user interface to visualize neural network architectures, as taught by Ayala, does not require mastering programming language knowledge, thereby overcoming drawbacks in the prior art (Ayala, Paragraphs [0126] and [0008]).
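As an illustrative aid to the claim 1 mapping above, the recited behavior (an edit made in the graphical view regenerating the corresponding portion of the textual definition without user interaction with the code, and vice versa) can be sketched as follows. This is a hypothetical Python illustration only; none of the class, method, or layer names below are drawn from Seide, Hammond, Leeman-Munk, or Ayala.

```python
# Hypothetical sketch of two-way synchronization between a textual
# neural-network definition and its graphical (architectural)
# representation. All names are illustrative only.

class NetworkModel:
    """Single source of truth: an ordered list of layer specifications."""

    def __init__(self, layers):
        self.layers = list(layers)  # e.g. [("Dense", 128), ("Dense", 10)]

    def to_code(self):
        """Render the textual definition shown in the text fields."""
        return "\n".join(
            f"add_layer({kind!r}, units={units})" for kind, units in self.layers
        )

    def to_graph(self):
        """Render the architectural representation (nodes and edges)."""
        nodes = [f"{kind}_{i}" for i, (kind, _) in enumerate(self.layers)]
        edges = list(zip(nodes, nodes[1:]))  # simple sequential topology
        return {"nodes": nodes, "edges": edges}

    def remove_layer_via_graph(self, index):
        """A graphical gesture (e.g. deleting a layer node in the diagram)
        updates the underlying model; the code view is then regenerated
        without the user touching the text."""
        del self.layers[index]


model = NetworkModel([("Dense", 128), ("Dense", 64), ("Dense", 10)])
model.remove_layer_via_graph(1)   # gesture performed in the diagram
print(model.to_code())            # textual definition reflects the change
print(model.to_graph()["nodes"])  # graphical view reflects the change
```

The sketch keeps one shared model behind both views, so an edit in either view only mutates the model and re-renders the other view; it is offered solely to visualize the claimed interaction, not as the method of any cited reference.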
Regarding claim 2, the combination of Seide in view of Hammond, Leeman-Munk and Ayala teaches all of the limitations of claim 1, and Hammond further teaches wherein the at least one of program code or mathematical notation defines one or more neural network layers (Hammond, Paragraph [0003] teaches the GUI is further configured to enable the author to provide a program annotation indicating an execution behavior for the source code, to generate an assembly code from the source code with a compiler of an artificial intelligence (“AI”) engine configured to work with the GUI; and to build an executable, trained AI model including a neural-network layout having one or more layers derived from the assembly code.; Hammond, Paragraph [0035] teaches The architect module can be configured to propose a neural-network layout with one or more neural-network layers from the assembly code. The learner module can be configured to build the AI model with the one or more neural-network layers from the neural-network layout proposed by the architect module.).
Motivation to combine same as stated in claim 1.
Regarding claim 4, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 1, and the combination further teaches wherein the modification to the architectural representation of the neural network comprises an addition of one or more neural network layers to the architectural representation of the neural network or a removal of one or more neural network layers from the architectural representation of the neural network (Leeman-Munk, Paragraph [0059], teaches information displayed in the GUI may enable a designer of a deep neural network to optimize the deep neural network to reduce (i) the number of processing cycles executed by the deep neural network, (ii) the amount of memory consumed by the deep neural network, (iii) the amount of memory accesses performed by the deep neural network, (iv) or any combination of these. As a particular example, a designer of a deep neural network can use the GUI to determine that certain hidden layers (or nodes) produce repetitive results or are otherwise extraneous. So, the designer can remove these hidden layers (or nodes) to reduce the amount of unnecessary processing that is performed by the neural network.; Note, Hammond, Paragraph [0158], further teaches proposing a neural-network layout can include proposing the neural-network layout including one or more neural-network layers from the assembly code with an architect AI-engine module of the AI engine. Building the AI model can include building the AI model including the one or more neural-network layers from the neural-network layout with a learner AI-engine module of the AI engine.).
Motivation to combine same as stated in claim 1.
Regarding claim 6, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 1, and Leeman-Munk further teaches wherein the modification to the architectural representation of the neural network comprises a change to at least one connection between at least two neural network layers included in the architectural representation of the neural network (Leeman-Munk, Paragraph [0059], teaches information displayed in the GUI may enable a designer of a deep neural network to optimize the deep neural network to reduce (i) the number of processing cycles executed by the deep neural network, (ii) the amount of memory consumed by the deep neural network, (iii) the amount of memory accesses performed by the deep neural network, (iv) or any combination of these. As a particular example, a designer of a deep neural network can use the GUI to determine that certain hidden layers (or nodes) produce repetitive results or are otherwise extraneous. So, the designer can remove these hidden layers (or nodes) to reduce the amount of unnecessary processing that is performed by the neural network.; Leeman-Munk, Paragraph [0160], further teaches in some examples, the node-link diagram 1102 may also be updated in response to the thresholding to only display the connections between color-coded nodes; Leeman-Munk, Paragraph [0161], further teaches the GUI 1100 can include additional threshold controls 1122a-c for enabling a user to manipulate the number of connections visually displayed between layers in the node-link diagram 1102.; Leeman-Munk, Paragraph [0162], further teaches the GUI 1202 of FIG. 12A includes a node-link diagram showing all of the connections (as lines) between nodes. Limited thresholding has been applied. The GUI 1204 of FIG. 12B shows the node-link diagram after more thresholding has been applied. The number of connections visually displayed in the node-link diagram is substantially reduced as a result of the thresholding. This can enable a user to selectively inspect features of interest in the deep neural network.).
Motivation to combine same as stated above for claim 4.
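The connection-thresholding behavior of Leeman-Munk relied upon for claim 6 (Paragraphs [0160]-[0162], limiting which connections between layers are visually displayed) can be sketched as follows. This is a hypothetical illustration; the function name, node labels, and weights are not drawn from the reference.

```python
# Hypothetical sketch of thresholding which connections between layers
# are drawn in a node-link diagram (cf. Leeman-Munk [0160]-[0162]).
# All names and weight values are illustrative only.

def visible_connections(connections, threshold):
    """Keep only connections whose absolute weight meets the threshold,
    so the diagram displays fewer, stronger links as the threshold rises."""
    return [(src, dst, w) for (src, dst, w) in connections if abs(w) >= threshold]


links = [
    ("n0", "n2", 0.9),
    ("n0", "n3", 0.05),
    ("n1", "n2", -0.4),
    ("n1", "n3", 0.01),
]
print(visible_connections(links, 0.3))  # only the stronger links remain drawn
```

Raising the threshold filters out weak links, matching the reference's description of a node-link diagram whose displayed connections are substantially reduced by thresholding.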
Regarding claim 7, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 1, and the combination further teaches further comprising displaying the neural network definition in at least one of program code or mathematical notation within the graphical user interface (Ayala [0036] further teaches FIG. 8 is a simplified depiction of an exemplary window providing a network configuration tool in support of specification of a customized activation function for use in the configuration and training of the artificial neural network; [Note: Fig. 8, 184 shows displayed mathematical functions – reading on specifying the modified neural network definition in …mathematical notation]).
Motivation to combine same as stated above for claim 1.
Regarding claim 11,
Claim 11 (as amended) recites the same and/or analogous limitations as claim 1. Hence it is rejected under the same rationale and motivation as claim 1 (as amended).
Seide further teaches one or more non-transitory computer-readable media storing program instructions that, when executed by one or more processors, cause the one or more processors to generate a neural network (Seide, Abstract, teaches methods, systems, machine-readable media, and devices which operate a neural network defined by user code.).
Regarding claim 12,
Claim 12 (as amended) recites the same and/or analogous limitations as claim 2 (as amended). Hence it is rejected under the same rationale and motivation as claim 2 (as amended).
Regarding claim 14,
Claim 14 recites the same and/or analogous limitations as claim 4. Hence it is rejected under the same rationale and motivation as claim 4.
Regarding claim 16, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 11, and the combination further teaches wherein the modification to the architectural representation of the neural network comprises a change to at least one connection between at least two neural network layers included in the architectural representation of the neural network (Leeman-Munk, Paragraph [0059], teaches information displayed in the GUI may enable a designer of a deep neural network to optimize the deep neural network to reduce (i) the number of processing cycles executed by the deep neural network, (ii) the amount of memory consumed by the deep neural network, (iii) the amount of memory accesses performed by the deep neural network, (iv) or any combination of these. As a particular example, a designer of a deep neural network can use the GUI to determine that certain hidden layers (or nodes) produce repetitive results or are otherwise extraneous. So, the designer can remove these hidden layers (or nodes) to reduce the amount of unnecessary processing that is performed by the neural network.; Leeman-Munk, Paragraph [0160], further teaches in some examples, the node-link diagram 1102 may also be updated in response to the thresholding to only display the connections between color-coded nodes; Leeman-Munk, Paragraph [0161], further teaches the GUI 1100 can include additional threshold controls 1122a-c for enabling a user to manipulate the number of connections visually displayed between layers in the node-link diagram 1102.; Leeman-Munk, Paragraph [0162], further teaches the GUI 1202 of FIG. 12A includes a node-link diagram showing all of the connections (as lines) between nodes. Limited thresholding has been applied. The GUI 1204 of FIG. 12B shows the node-link diagram after more thresholding has been applied. The number of connections visually displayed in the node-link diagram is substantially reduced as a result of the thresholding. This can enable a user to selectively inspect features of interest in the deep neural network.).
Motivation to combine same as stated above for claim 4.
Regarding claim 20, the claim (as amended) recites the same and/or analogous limitations as claim 1. Hence it is rejected under the same rationale and motivation as claim 1 (as amended).
Seide further teaches a system, comprising: one or more memories storing a software application; and one or more processors that, when executing the software application, are configured to perform the steps (Seide, Abstract, teaches methods, systems, machine-readable media, and devices which operate a neural network defined by user code.; memory and hardware processor of the system taught in Paragraph [0115]).
Regarding claim 21, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 1, and the combination further teaches wherein the modification to the architectural representation of the neural network comprises removal of a neural network layer included in the one or more neural network layers, and wherein updating the portion of the at least one of program code or mathematical notation comprises automatically removing, from the portion of the at least one of program code or mathematical notation, one or more lines corresponding to the neural network layer (Leeman-Munk, Paragraph [0059] teaches information displayed in the GUI may enable a designer of a deep neural network to optimize the deep neural network to reduce (i) the number of processing cycles executed by the deep neural network, (ii) the amount of memory consumed by the deep neural network, (iii) the amount of memory accesses performed by the deep neural network, (iv) or any combination of these. As a particular example, a designer of a deep neural network can use the GUI to determine that certain hidden layers (or nodes) produce repetitive results or are otherwise extraneous. So, the designer can remove these hidden layers (or nodes) to reduce the amount of unnecessary processing that is performed by the neural network.; Leeman-Munk, Paragraph [0006] teaches The program code can cause the processing device to generate a matrix of symbols to be positioned in a graphical user interface. Each symbol in the matrix can indicate a feature-map value that represents a likelihood of a particular feature being present or absent at a location in an input to a convolutional neural network. Each column in the matrix can have feature-map values generated by convolving the input to the convolutional neural network with a respective filter for identifying a specific feature in the input.
The program code can cause the processing device to generate a node-link diagram to be positioned in the graphical user interface. The node-link diagram can represent a feed forward neural network that forms part of the convolutional neural network. The node-link diagram can include a first row of symbols representing an input layer to the feed forward neural network.; Leeman-Munk further teaches the GUI 1100 of FIG. 11 includes four layers (an input layer 1104, hidden layers 1106-1108, and an output layer 1110). But other deep neural networks can include tens or hundreds of layers having hundreds or thousands of nodes each. Some examples can include features to enable the GUI 1100 to scale for different-sized deep neural networks.; Leeman-Munk, Paragraph [0180] further teaches the GUI 1100 can provide the ability to interactively “activate” or “deactivate” [i.e., “add” or “remove”] individual layers or groups of layers.).
Motivation to combine same as stated in claim 1.
Regarding claim 22, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 1 and the combination further teaches wherein receiving the modification to the architectural representation of the neural network comprises: receiving, via the graphical user interface, a selection of a portion of the architectural representation; and receiving, via the graphical user interface, one or more modifications to one or more parameters associated with the portion of the architectural representation (Leeman-Munk, Paragraph [0217] teaches in block 3508, the processing device causes the display device to display an updated version of the matrix in which the single row or the single column is expanded. The processing device can cause the display device to display the updated version of the matrix based on detecting the interaction in block 3506. An example of an expanded row is shown in FIG. 30 and an example of expanded columns is shown in FIG. 28.; Leeman-Munk Paragraph [0218] teaches In block 3510, the processing device detects another interaction with a symbol in the matrix (e.g., a cell in the matrix of cells 2506). Examples of the interaction can include a selecting or hovering over the symbol. The processing device can detect the interaction based on input signals from a user input device.; Leeman-Munk Paragraph [0219] teaches In block 3512, the processing device modifies the graphical user interface to show one or more additional matrices (e.g., additional matrices 2604 of FIG. 26). The processing device can modify the graphical user interface based on detecting the interaction in block 3510.; Leeman-Munk, Paragraph [0159] teaches a computing device can receive the user input, feed the user input into the deep neural network, and update the color coding of the node-link diagram 1102 based on the results. 
For example, a representation of a node in the input layer 1104 can be color coded to indicate a weight of the node in the deep neural network (e.g., in response to user input). A representation of a connection between a node in the input layer 1104 and another node in the hidden layer 1106 can be color coded to indicate the result of multiplying a first weight of the connection (in the deep neural network) by a second weight of the node from the input layer 1104. A representation of a node in a hidden layer 1106, 1108 can be color coded to indicate a value determined by summing the weights of all of the connections to the node and passing the result through a rectified linear unit function. A representation of a connection between a node in the hidden layer 1108 and another node in the output layer 1110 can be color coded to indicate the result of multiplying a first weight of the connection by a second weight of the node from the hidden layer 1108. A representation of a node in the output layer 1110 can be color coded to indicate a value determined by summing the weights of all the connections coming into the node and then normalizing the result to represent the probability. The node-link diagram 1102 can be color coded to represent any number and combination of information, which can be generated in response to any number and combination of inputs to the deep neural network. [Note: weights are being understood as a parameter of the neural network architecture]).
Motivation to combine same as stated in claim 1.
Regarding claim 23, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 1, and the combination further teaches wherein the architectural representation that is displayed within the graphical user interface comprises one or more first shapes associated with a first neural network layer included in the neural network and one or more second shapes associated with a second neural network layer included in the neural network, and wherein the one or more first shapes differ in size from the one or more second shapes to indicate that the first neural network layer differs in type from the second neural network layer (Ayala, Paragraph [0081] teaches upon the identification of each layer, graphical representations of the neurons are depicted in a network topology panel 168 that generally forms a pallet for graphical configuration of the network topology or structure. The network topology panel 168 generally provides the user with an interface to graphically configure the network topology. For example, and as shown in FIG. 4, a connection has been established between neurons N1,1 and N2,1. A user may then employ the mouse pointer to connect the neurons by clicking on one neuron and dragging the mouse pointer toward another neuron. A form of "drag-and-drop" operation then forms the connection. Neurons may alternatively or additionally be automatically connected in bulk fashion via a section 170 of the network editor panel 164, which provides an operators tab to reveal an "Execute Operator" option (via a button or the like). Implementing this option adds new connections to the network topology in every possible feedforward configuration.; See also, Figure 7, 168 [Note: Examiner is interpreting the first and second shapes as the first and second neural network layers. This interpretation is consistent with Applicant’s remarks pointing to Fig. 4 of Applicant’s disclosure as support for the amendments.
Ayala displays first and second “shapes” [i.e., layers] in the neural network architectures displayed in Figs. 4 and 7]).
Regarding claim 24, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 1, and the combination further teaches further comprising displaying, within the graphical user interface, a natural language description of the modified neural network definition (Ayala, Paragraph [0082] teaches the configuration display interface window 160 for configuring feedforward networks further includes a panel 172 for providing resource and size information for the neural network being configured. More specifically, a resources tab may be selected to reveal a table showing information directed to maximum, used, and available resources for support of the configuration. Selection of a sizes tab within the panel 172 provides user customizable parameters for the network topology panel 168.; Ayala, Paragraph [0085] teaches the aforementioned panels of the configuration display interface 160 support configuration of the artificial neural network being configured, which in this case is a feedforward network having a user-customizable topology or structure along with user-selectable neuron, connection and other parameters. The panels of the display interface window 160 provide such functionality in a user-friendly, convenient manner via graphical user interface tools and selection mechanisms. The user is thus enabled to quickly configure and set up a feedforward network without having to use script or other programming language commands to define the layers, connections and other network details. As described below, the disclosed programming tool additionally provides functionality for facilitating the further configuration or customization steps (e.g., customizing the activation functions), as well as the management and analysis of the training and testing pattern data sets. [Note: the screenshots of the GUI, as taught by Ayala, show that the user is able to edit and see data regarding the neural network architecture in natural language – See exemplary Figures 7-8 and 11-13]).
16. Claims 5, 15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Seide in view of Hammond, Leeman-Munk, and Ayala (as applied to claim 1), and further in view of Feng et al. (US 20190205728 A1, filed Dec. 28, 2017 and published Jul. 4, 2019).
Regarding claim 5, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 1; however, the combination does not distinctly disclose wherein the modification to the architectural representation of the neural network comprises a change to at least one dimension associated with at least one neural network layer included in the architectural representation of the neural network.
Nevertheless, Feng teaches wherein the modification to the architectural representation of the neural network comprises a change to at least one dimension associated with at least one neural network layer included in the architectural representation of the neural network (Feng, Paragraph [0035] teaches flexible dimension which may be any user-selected value, wherein the flexible dimension can be manipulated within a visualization tool to allow a user to get a more customizable view.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the methods and systems which operate a neural network defined by user code, as taught by Seide in view of Hammond, Leeman-Munk, and Ayala, to further include the program code to generate a graphical visualization of a neural network, including the graphical visualization of neural network layers, as taught by Feng, in order to advantageously provide a more intuitive understanding of data transformations, computation complexities, and parameter sizes. Furthermore, the method and system taught by Feng may also demonstrate how network architecture and/or topology influences neural performance and therefore be useful in determining an optimal system or network architecture or topology for training and/or executing a given neural network. (Feng, Paragraphs [0006] and [0007]).
Regarding claim 15, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 11; however, the combination does not distinctly disclose wherein the modification to the architectural representation of the neural network comprises a change to at least one dimension associated with at least one neural network layer included in the architectural representation of the neural network.
Nevertheless, Feng teaches wherein the modification to the architectural representation of the neural network comprises a change to at least one dimension associated with at least one neural network layer included in the architectural representation of the neural network (Feng, Paragraph [0035] teaches flexible dimension which may be any user-selected value, wherein the flexible dimension can be manipulated within a visualization tool to allow a user to get a more customizable view.).
Motivation to combine same as stated for claim 5.
Regarding claim 19, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 11; however, the combination does not distinctly disclose further comprising the step of displaying the modified neural network definition via the one or more text fields that are displayed within the graphical user interface.
Nevertheless, Feng teaches further comprising the step of displaying the modified neural network definition via the one or more text fields that are displayed within the graphical user interface (Feng, Abstract, teaches The method also includes displaying the graphical visualization of the neural network to the user.).
Motivation to combine same as stated for claim 5.
18. Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Seide in view of Hammond, Leeman-Munk, and Ayala, as applied to claim 1, and further in view of Zeiler et al. (US 20180089592 A1, filed Sep. 26, 2017 and published Mar. 29, 2018).
Regarding claim 8, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 1; however, the combination does not distinctly disclose wherein the neural network is encompassed within a first agent that is coupled to a second agent, wherein the second agent does not encompass any neural networks and includes additional program code that, when executed, processes an output of the neural network.
Nevertheless, Zeiler teaches wherein the neural network is encompassed within a first agent that is coupled to a second agent, wherein the second agent does not encompass any neural networks and includes additional program code that, when executed, processes an output of the neural network (Zeiler, Abstract, Paragraph [0037], and Figure 3A teach model representations may comprise a first machine learning model coupled to a non-machine learning model wherein the non-ML model processes the output of the model.; Zeiler, Paragraph [0024] teaches machine learning models such as neural networks).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the methods and systems which operate a neural network defined by user code, as taught by Seide in view of Hammond, Leeman-Munk, and Ayala, to further include the user-selectable/connectable model representations, as taught by Zeiler, in order to overcome drawbacks in the prior art, where artificial intelligence development is a slow process, by facilitating collaborative collection/development of prediction models, prediction-model-incorporated software applications, related data, or other aspects. (Zeiler, Paragraphs [0003] and [0019]).
Regarding claim 9, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 1; however, the combination does not distinctly teach further comprising storing the neural network definition and the architectural representation of the neural network as a selectable agent that comprises an element within an artificial intelligence model.
Nevertheless, Zeiler teaches further comprising storing the neural network definition and the architectural representation of the neural network as a selectable agent that comprises an element within an artificial intelligence model (Zeiler, Abstract teaches “user-selectable/connectable model representations may be provided via a user interface to facilitate artificial intelligence development”; Zeiler, Paragraph [0024] further teaches “service interface subsystem 112 may provide a service platform that enables a developer to develop one or more machine learning models (e.g., neural networks or other machine learning models)”).
Motivation to combine same as stated for claim 8.
19. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Seide in view of Hammond, Leeman-Munk, and Ayala, as applied to claims 1 and 11, and further in view of Venkataramani et al. (US 20180136912 A1, filed Nov. 17, 2017 and published May 17, 2018).
Regarding claim 17, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 11; however, the combination does not distinctly disclose wherein the modification to the architectural representation of the neural network comprises a change to a layer type associated with at least one neural network layer included in the architectural representation of the neural network.
Nevertheless, Venkataramani teaches wherein the modification to the architectural representation of the neural network comprises a change to a layer type associated with at least one neural network layer included in the architectural representation of the neural network (Venkataramani, Paragraph [0045] teaches implementing different types of layers of a deep learning (DL) network via framework which may be implemented at least in part through a class hierarchy that may include a base class for a DL network layer and sub-classes that implement different types of layers of a DL network. The subclass types may represent and may be used to implement different types of layers of the deep learning network.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the methods and systems which operate a neural network defined by user code, as taught by Seide in view of Hammond, Leeman-Munk, and Ayala, to further include the developing and implementing of different neural network layer types, as taught by Venkataramani, in order to overcome drawbacks in the prior art where new layer types need to be developed, added, and fully optimized for multiple different end platforms. (Venkataramani, Paragraph [0039]).
20. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Seide in view of Hammond, Leeman-Munk, and Ayala, as applied to claims 1 and 11, and further in view of Fernando et al. (US 20200293899 A1, filed Apr. 27, 2020 and published Sep. 17, 2020).
Regarding claim 18, the combination of Seide in view of Hammond, Leeman-Munk, and Ayala teaches all of the limitations of claim 11; however, the combination does not distinctly disclose further comprising executing the program code to cause the neural network to perform an inference operation.
Nevertheless, Fernando teaches further comprising executing the program code to cause the neural network to perform an inference operation (Fernando, Paragraph [0026] teaches the method may further include designing a neural network according to the determined neural network architecture if any further design work is necessary, and/or constructing a neural network with the architecture… The method may further include using the neural network for training and/or inference; or making the neural network available for use, for example for training and/or inference, via an API (application programming interface).).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the methods and systems which operate a neural network defined by user code, as taught by Seide in view of Hammond, Leeman-Munk, and Ayala, to further include the computer-implemented method for automatically determining a neural network architecture, as taught by Fernando, in order to save in computational resources (e.g., reduced processing time used by a processor unit with a given processing rate) compared to known algorithms for performing architecture searches. (Fernando, Paragraph [0030]).
Conclusion
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
KUDRITSKIY (US 20130254138 A1) – disclosing [0020] In an example embodiment, the method further includes: responsive to user interaction with the graphical representation in the first portion, modifying attributes of one or more components of the neural network, where the modification affects in real-time the graphical effects displayed in the second portion of the graphical user interface.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEATRIZ RAMIREZ BRAVO whose telephone number is 571-272-2156. The examiner can normally be reached Mon. - Fri., 7:30 a.m. - 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, USMAAN SAEED can be reached at 571-272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.R.B./Examiner, Art Unit 2146
/USMAAN SAEED/Supervisory Patent Examiner, Art Unit 2146