Prosecution Insights
Last updated: April 19, 2026
Application No. 16/982,589

INFORMATION PROCESSING METHOD, INFORMATION PROCESSING APPARATUS, AND PROGRAM

Status: Final Rejection (§103)
Filed: Sep 21, 2020
Examiner: RAMIREZ BRAVO, BEATRIZ A
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: Sony Corporation
OA Round: 6 (Final)
Grant Probability: 63% (Moderate)
OA Rounds: 7-8
Time to Grant: 4y 7m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 63% of resolved cases (61 granted / 97 resolved; +7.9% vs TC avg)
Interview Lift: +28.9% on resolved cases with interview
Typical Timeline: 4y 7m average prosecution; 18 applications currently pending
Career History: 115 total applications across all art units

Statute-Specific Performance

§101: 19.9% (-20.1% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§103: 53.5% (+13.5% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)

Deltas are relative to the Tech Center average estimate; based on career data from 97 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1, 4, 5, 7, 9, 11, 18, and 19 have been amended by Applicant. Claims 3 and 12 are cancelled, and new claims 20-21 have been added. Claims 1-2, 4-11, and 13-21 are currently pending.

Response to Arguments

The rejection of claims 1, 2, 5, 7-11, 13-14, and 18-19 under 35 U.S.C. 103 has been withdrawn in view of Applicant's amendments to independent claims 1, 18, and 19. However, upon further consideration and in view of said amendments, a new ground of rejection has been made herein. The rejection of claim 3 under 35 U.S.C. 103 has been rendered moot by the cancellation of said claim. The rejections of claims 4, 6, and 15-17 under 35 U.S.C. 103 have likewise been withdrawn in view of Applicant's amendments to independent claims 1, 18, and 19; however, upon further consideration and in view of said amendments, new grounds of rejection have been made herein.

Applicant's arguments with respect to claims 1, 18, and 19 (as amended), and claims dependent therefrom, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C.
102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 2, 5, 7-11, 13-14, and 18-19 (as amended) are rejected under 35 U.S.C. 103 as being unpatentable over Dholakiya et al., "Expresso: A user-friendly GUI for designing, training, and using Convolutional Neural Networks" (2015), in view of Konertz et al. (US 20150262061 A1, filed Sep. 12, 2014 and published Sep. 17, 2015), Birdwell et al. (US 20150106311 A1, filed Oct. 14, 2014 and published Apr. 16, 2015), and JP2003-241965A (filed Feb. 8, 2002 and published Aug. 29, 2003).

Regarding claim 1, Dholakiya teaches an information processing method comprising: providing, by a processor, a form for creating a program to build a neural network based on arrangement of components and properties that are set in the components, each of the components representing a layer of the neural network (Dholakiya, Abstract, teaches providing a convenient wizard-like graphical interface which guides the user through various common scenarios: data import, construction, and training of deep neural networks; Dholakiya, page 2, section 3.2.1, further teaches that once the option is chosen to construct the network from scratch, the layers which are associated with data are added, and as each layer is completed, the net design so far can be viewed; Figs. 2 and 3 are understood to read on each of the components representing a layer of the neural network; Dholakiya, Section 4, teaches that Expresso [i.e., the graphical interface tool] is written in Python and has been developed on machines running Linux Ubuntu 14.04.); ..., wherein the providing a form further includes providing on the form ... a unit formed of a plurality of components, the unit representing a plurality of layers of the neural network, providing the unit such that the unit can be arranged like the one of the components on the form, ... (Dholakiya, page 2, col.
1, teaches that the modular nature of GUI subcomponents and intercomponent interfaces has been designed with ease of extensibility in mind [Note: the modular GUI in Dholakiya is understood to read on "providing on the form"]; Dholakiya, Fig. 3 and Section 3.2.2, teaches allowing a net configuration to be saved by selecting different types of layers, where a net configuration is understood to refer to a unit as claimed.).

However, Dholakiya does not distinctly disclose: wherein providing a form further includes providing, on the form, a function of defining a unit formed of a plurality of components; presenting statistical information on the neural network on the form; permitting the unit to be a nested unit which includes both at least one unit and the components therein in a nested manner, the at least one unit being reproduced without modification in a first other unit to form a nested unit and a second other unit to form a second nested unit; providing, on the form, a function of defining an argument that is commonly usable among a plurality of components that form the unit; performing control such that the defined argument is presented as part of properties of the unit arranged on the form; and performing control to enable a value of the defined argument that is commonly usable among the plurality of components that form the unit to be set, using the form, for the unit.

Nevertheless, Konertz teaches presenting statistical information on the neural network on the form (Konertz, Paragraph [0083], teaches "the context panel may provide one or more forms of contextual information. In some aspects, the context panel may provide contextual information related to the dynamics and/or statistics of a model." [Note: the context panel in Konertz is an interactive displayed context panel reading on "on the form"; see paragraphs [0007] and [0082] of Konertz]).
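The claimed "form" and its layer components can be pictured concretely. Below is a minimal illustrative sketch in Python (the language the record says Expresso is written in); every class, field, and layer name here is hypothetical and chosen only for illustration, not taken from the application or the cited references.

```python
# Illustrative sketch only: a data model for a "form" on which components,
# each representing one neural-network layer, are arranged with properties.
from dataclasses import dataclass, field

@dataclass
class Component:
    """One layer of the network, placed on the form with editable properties."""
    kind: str                                 # e.g. "Input", "Convolution", "ReLU"
    properties: dict = field(default_factory=dict)

@dataclass
class Form:
    """The editing surface: an ordered arrangement of components."""
    components: list = field(default_factory=list)

    def add(self, component: Component) -> Component:
        self.components.append(component)
        return component

# Arrange three layer components, setting properties on each.
form = Form()
form.add(Component("Input", {"shape": (1, 28, 28)}))
form.add(Component("Convolution", {"out_channels": 16, "kernel": 5}))
form.add(Component("ReLU"))

assert [c.kind for c in form.components] == ["Input", "Convolution", "ReLU"]
```

A "unit" in the claim's sense would then be a named grouping of several such components, arrangeable on the form like a single component.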
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the graphical interface for constructing neural networks, as taught by Dholakiya, to further include the context panel providing statistical information of a model, as taught by Konertz, given that with this information, the population of neurons may be modified either by manipulating the visualization or by updating the program code section to improve system or model efficiency (Konertz, Paragraph [0093]).

However, the combination does not distinctly disclose the limitations: permitting the unit to be a nested unit which includes both at least one unit and the components therein in a nested manner, the at least one unit being reproduced without modification in a first other unit to form a nested unit and a second other unit to form a second nested unit; providing, on the form, a function of defining an argument that is commonly usable among a plurality of components that form the unit; performing control such that the defined argument is presented as part of properties of the unit arranged on the form; and performing control to enable a value of the defined argument that is commonly usable among the plurality of components that form the unit to be set, using the form, for the unit.
Nevertheless, Birdwell teaches permitting the unit to be a nested unit which includes both at least one unit and the components therein in a nested manner, the at least one unit being reproduced without modification in a first other unit to form a nested unit and a second other unit to form a second nested unit (Birdwell, Abstract, teaches a method and apparatus for constructing a neuroscience-inspired artificial neural network (NIDA) or a dynamic adaptive neural network array (DANNA), or combinations of substructures thereof, comprising one of constructing a substructure of an artificial neural network for performing a subtask of the task of the artificial neural network or extracting a useful substructure based on one of activity, causality path, behavior, and inputs and outputs. The method and apparatus supports constructing, using, and reusing components and structures of a neuroscience-inspired artificial neural network dynamic architecture in software and a dynamic adaptive neural network array [Note: "reusing" components and structures of an ANN is understood to read on "the at least one unit being reproduced without modification in a first other unit to form a nested unit and a second other unit to form a second nested unit", as claimed].

Birdwell, Paragraph [0003], teaches that the technical field relates to a method and apparatus for constructing a NIDA or a DANNA, or combinations of substructures thereof, and in particular to the method and apparatus for constructing, using, and reusing components and structures to support a neuroscience-inspired artificial neural network dynamic architecture in software or a DANNA, or combinations of structures and substructures thereof, or from artificial neural networks (ANNs).

Birdwell, Paragraph [0285], further teaches that the concept of inclusion of sub-networks, either as building blocks of the greater network or as additions to existing networks, may be applied to DANNAs [i.e., sub-networks of a DANNA, as disclosed in Birdwell, are understood as "in a nested manner"]. Such sub-networks can be parameterized, and the parameters can be selected or tuned using the methods of evolutionary optimization or other optimization methods such as gradient search and Newton's method. In addition to adapting parameters of single elements in the network or of the network as a whole, parameters of entire sub-networks within the greater network may also be adapted. Sub-networks of one or more DANNAs can have sub-networks, resulting in sub-sub-networks of a higher-level network; in this manner, a hierarchical description of a network's structure can be maintained [i.e., further reading on "in a nested manner"].

Birdwell, Paragraph [0296], further teaches that FIG. 26 shows possible uses and reuses for affective and multiple interacting networks 2626, and that there are multiple complex affective system types 2616, 2608 that may be constructed, used, and reused [Note: Birdwell [0296]'s "reusing" for multiple interacting networks is further understood to read on "the at least one unit being reproduced without modification in a first other unit to form a nested unit and a second other unit to form a second nested unit", as claimed].

Birdwell, Paragraph [0259], further teaches that NIDA (and DANNA) networks may be designed for a particular task within one of control, detection, and classification applications using evolutionary optimization (for example, FIGS. 7A and 7B) and other optimization processes discussed therein. The design process determines the structure of the network (the number and placement of the neurons and synapses), the parameters of the network (such as the thresholds of the neurons and weights of the synapses), and the dynamics of the network (the delays of the synapses). Many of the network structures produced by evolutionary optimization may have equivalent behavior; a superficial example is that the same network rotated or translated in three-dimensional space will behave exactly the same way as the original network. This is one reason a visualization tool to explore the behavior of NIDA (or DANNA) networks is important. See also Figs. 13, 19, and 26, illustrating networks and sub-networks in "a nested manner" and also illustrating a plurality of networks and sub-networks [i.e., reading on first and second units] reproduced without modification.)
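The nesting-and-reuse idea the examiner maps to Birdwell's sub-networks can be sketched as a small recursive data structure: one unit definition is instantiated, unchanged, inside two different parent units, and units may themselves contain units. This is a hypothetical illustration only; the names and structure are not from the record.

```python
# Hypothetical sketch of a "nested unit": a unit's members may be leaf
# components (here plain strings) or other units, and the same unit object
# is reproduced without modification inside two different parents.
from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    members: list = field(default_factory=list)   # components and/or Units

    def flatten(self) -> list:
        """Depth-first list of leaf components, recursing into nested units."""
        out = []
        for m in self.members:
            out.extend(m.flatten() if isinstance(m, Unit) else [m])
        return out

# One reusable building block ...
conv_block = Unit("ConvBlock", ["Convolution", "BatchNorm", "ReLU"])

# ... reproduced without modification inside two different parent units,
# each of which thereby becomes a nested unit.
encoder = Unit("Encoder", [conv_block, conv_block, "MaxPooling"])
decoder = Unit("Decoder", [conv_block, "Unpooling"])

assert encoder.flatten() == ["Convolution", "BatchNorm", "ReLU"] * 2 + ["MaxPooling"]
assert decoder.flatten() == ["Convolution", "BatchNorm", "ReLU", "Unpooling"]
```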
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the graphical interface for constructing neural networks, as taught by Dholakiya in view of Konertz, to further include the method and apparatus for constructing a NIDA or DANNA, or combinations of substructures thereof, comprising constructing a substructure of an artificial neural network, as taught by Birdwell, because the newly constructed network does not have to relearn how to build the simple components; it can take advantage of those simple components that are pre-built, and learning or training time can thus be reduced (Birdwell, Paragraph [0275]; see also Paragraph [0319]).

However, the combination does not distinctly disclose: providing, on the form, a function of defining an argument that is commonly usable among a plurality of components that form the unit; performing control such that the defined argument is presented as part of properties of the unit arranged on the form; and performing control to enable a value of the defined argument that is commonly usable among the plurality of components that form the unit to be set, using the form, for the unit.

Nevertheless, JP2003-241965A teaches providing, on the form, a function of defining an argument that is commonly usable among a plurality of components that form the unit (JP2003-241965A, paragraph [0008], teaches a program creation support method in which a group of software parts, classified into data parts having a field in which data is set as a terminal and link parts having an argument as a terminal, is a group of element parts of a program configuration; JP2003-241965A, paragraph [0015], teaches that the function (B3) is generally a multi-valued function with n inputs and m outputs and transfers a value from the input side to the output side, either as-is or after conversion, and (B5) each field of the data part and each argument of the function and the predicate are collectively called a "terminal", where a terminal has a direction in which it can transmit a value.); performing control such that the defined argument is presented as part of properties of the unit arranged on the form (JP2003-241965A, paragraph [0088], teaches that, utilizing the software componentization technique of expressing the connecting means of a software component as a link component, a group of software parts classified into data parts having a field to which data is set as a terminal and link parts having an argument as a terminal is defined as a group of element parts of the program configuration, and a static structure of the program, in which the static connection structure of the element parts is determined, is generated. By then setting the user-specified execution order for each user-specified element part of the element part group in the static structure of the program, the program structure is separated into two layers, which makes each layer easy to create and grasp and assists the software developer in easily creating the program.); and performing control to enable a value of the defined argument that is commonly usable among the plurality of components that form the unit to be set, using the form, for the unit (JP2003-241965A, paragraph [0015], teaches that the function (B3) transfers a value from the input side to the output side as-is or after conversion; JP2003-241965A, claim 1, further teaches that in the program code generation step, when the searched part is a data part, a first method fills all fields of the data part, and, for all link components connected to an output terminal of the data part, when a value is given to all the input terminals of a link component, a second method transmits the value to the output terminal of that link component.).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the graphical interface for constructing neural networks, as taught by Dholakiya in view of Konertz and Birdwell, to further include the program creation graphical user interface with user-specified functions, as taught by JP2003-241965A, in order to facilitate and make it easy for a software developer to create the program.
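The "commonly usable argument" limitation can also be sketched concretely: the value is defined once on the unit, presented among the unit's properties, and every member component that declares the argument resolves it from there. All names below are hypothetical illustrations, not drawn from the claims or the cited art.

```python
# Hypothetical sketch of a unit-level argument shared by member components.
class Arg:
    """Placeholder naming an argument defined on the enclosing unit."""
    def __init__(self, name):
        self.name = name

class Component:
    def __init__(self, kind, **props):
        self.kind = kind
        self.props = props          # values may be Arg placeholders

    def resolve(self, unit_args):
        """Replace Arg placeholders with the unit-level values."""
        return {k: unit_args.get(v.name, v) if isinstance(v, Arg) else v
                for k, v in self.props.items()}

class Unit:
    def __init__(self, components, **args):
        self.components = components
        self.args = args            # presented as part of the unit's properties

# Two components share one argument, "n_maps", set once for the unit.
unit = Unit(
    [Component("Convolution", out_channels=Arg("n_maps"), kernel=3),
     Component("Deconvolution", in_channels=Arg("n_maps"))],
    n_maps=32,                      # the value set, using the form, for the unit
)

resolved = [c.resolve(unit.args) for c in unit.components]
assert resolved[0] == {"out_channels": 32, "kernel": 3}
assert resolved[1] == {"in_channels": 32}
```

Changing `n_maps` once on the unit would re-resolve every member component, which is the convenience the limitation describes.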
(JP2003-241965A, Paragraph [0088]).

Regarding claim 2, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the limitations of claim 1, and the combination further teaches wherein the providing a form further includes providing the plurality of units such that the units are reusable (Birdwell, Abstract, teaches a method and apparatus for constructing a NIDA or DANNA, or combinations of substructures thereof, comprising one of constructing a substructure of an artificial neural network for performing a subtask of the task of the artificial neural network or extracting a useful substructure based on one of activity, causality path, behavior, and inputs and outputs, and supporting constructing, using, and reusing components and structures of a neuroscience-inspired artificial neural network dynamic architecture in software and a dynamic adaptive neural network array [Note: "reusing" components and structures of an ANN is understood to read on the units being reusable, as claimed]; Birdwell, Paragraph [0003], similarly teaches constructing, using, and reusing components and structures to support such an architecture in software or a DANNA, or combinations of structures and substructures thereof, or from artificial neural networks (ANNs); Birdwell, Paragraph [0285], further teaches the inclusion of sub-networks, either as building blocks of the greater network or as additions to existing networks, where such sub-networks can be parameterized and can themselves have sub-networks, maintaining a hierarchical description of a network's structure; Birdwell, Paragraph [0296], further teaches that FIG. 26 shows possible uses and reuses for affective and multiple interacting networks 2626, with multiple complex affective system types 2616, 2608 that may be constructed, used, and reused; see also Figs. 13, 19, and 26.). Motivation to combine same as stated in claim 1.
Regarding claim 4, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the limitations of claim 1, and the combination further teaches wherein the providing the function of defining the argument further includes controlling the argument based on an argument component corresponding to the argument, which is an object distinct from the components representing the layers, that is arranged on the form and properties that are set in the argument component (JP2003-241965A, paragraph [0008], teaches a program creation support method in which a group of software parts, classified into data parts having a field in which data is set as a terminal and link parts having an argument as a terminal, is a group of element parts of a program configuration; JP2003-241965A, paragraph [0015], teaches that the function (B3) is generally a multi-valued function with n inputs and m outputs and transfers a value from the input side to the output side, either as-is or after conversion, and (B5) each field of the data part and each argument of the function and the predicate are collectively called a "terminal", where a terminal has a direction in which it can transmit a value.). Motivation to combine same as stated above for claim 1.
Regarding claim 5, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the limitations of claim 1, and the combination further teaches wherein the providing a form further includes displaying, on the form, the unit using a predetermined visual representation not depending on a type of the components representing the layers of the neural network that form the unit and not depending on a number of the plurality of components that form the unit (Dholakiya, Abstract, teaches providing a convenient wizard-like graphical interface which guides the user through various common scenarios: data import, construction, and training of deep neural networks, performing various experiments, and analyzing and visualizing the results of these experiments; Dholakiya, page 2, section 3.2.1, further teaches that once the option is chosen to construct the network from scratch, the layers which are associated with data are added, and as each layer is completed, the net design so far can be viewed; JP2003-241965A teaches that when the component selection unit 2 in the program editor 2111 detects that the user has performed an operation of selecting an element part from the parts list on the data parts list area 61 or the link parts list area 62 using the mouse (step S1), the selected element part is moved onto the program definition area 63 while being displayed on the program creation GUI screen by the component display unit 212 according to the drag operation following the user's selection operation (step S3). Here, the selected element part is represented by an icon, a type name of the part, and an arrow indicating the transmission direction of the value between the terminals. The icon to be displayed is determined by the following procedure: (A1) if an icon is set for the selected part, that icon is used; (A2) if there is no icon setting and there is a setting for the inherited component, the icon setting of the upper component is recursively searched and used; (A3) if the icon information cannot be obtained from the inheritance information, a predetermined implicit icon is used.). Motivation to combine same as stated above for claim 1.

Regarding claim 7, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the limitations of claim 1, and the combination further teaches wherein the presenting statistical information further includes presenting, on the form and in response to the arrangement of the components, the statistical information on the whole neural network including the plurality of components that form the unit, and the statistical information includes at least one of an output neuron size, an amount of memory used, and an amount of computation (Konertz, Paragraph [0083], teaches that in some aspects the context panel [i.e., the form] may provide contextual information related to the dynamics and/or statistics of a model; Paragraph [0093] teaches that the context panel may provide statistical information and performance metrics correlated to the hardware used to implement the model [i.e., understood to teach "in response to the arrangement of the components"], and may further provide performance estimates. In one example, the context panel may provide visualization related to the power consumption of a population of neurons or a portion thereof; in a further example, it may provide visualization related to the computational load due to a population of neurons [i.e., further understood to teach "in response to the arrangement of the components"].). Motivation to combine same as stated for claim 1 above.
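The claim-7 statistics (output neuron size, memory, computation) are the kind of quantities a form could derive directly from the arranged components. A hedged sketch under simplifying assumptions (fully connected "Affine" layers only, parameter count as a proxy for memory; all names hypothetical):

```python
# Hypothetical sketch: per-layer statistics computed from component
# properties and aggregated for the whole network.
def layer_stats(kind, props, in_size):
    """Return (output size, parameter count) for one layer component."""
    if kind == "Affine":
        out = props["units"]
        params = in_size * out + out          # weights + biases
    else:                                     # activations preserve shape
        out, params = in_size, 0
    return out, params

layers = [("Affine", {"units": 128}), ("ReLU", {}), ("Affine", {"units": 10})]

size, total_params = 784, 0                   # e.g. a flattened 28x28 input
for kind, props in layers:
    size, params = layer_stats(kind, props, size)
    total_params += params

assert size == 10                             # output neuron size
assert total_params == 784*128 + 128 + 128*10 + 10
```

Re-running the aggregation whenever a component is added or edited would give the "in response to the arrangement of the components" behavior the claim recites.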
Regarding claim 8, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the limitations of claim 7, and the combination further teaches wherein the presenting statistical information further includes presenting the statistical information on each unit (Konertz, Paragraph [0093], teaches that the context panel may provide statistical information and performance metrics correlated to the hardware used to implement the model; in one example, the context panel may provide visualization related to the power consumption of a population of neurons or a portion thereof.). Motivation to combine same as stated for claim 1 above.

Regarding claim 9, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the limitations of claim 7, and the combination further teaches wherein the presenting statistical information further includes presenting the statistical information on the whole neural network and the statistical information on the unit in comparison with each other on the form (Konertz, Paragraph [0083], teaches that in some aspects the context panel [i.e., "the form"] may provide contextual information related to the dynamics and/or statistics of a model; Konertz, Paragraph [0093], teaches that the context panel may provide statistical information and performance metrics correlated to the hardware used to implement the model; in one example, the context panel may provide visualization related to the power consumption of a population of neurons or a portion thereof.). Motivation to combine same as stated in claim 1.
Regarding claim 10, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the limitations of claim 1, and the combination further teaches receiving an operation performed by the user to select an element contained in the statistical information and presenting a value of the element for each component and each unit by comparison (Konertz, Paragraph [0077], teaches that in some aspects the context panel may be a user interface provided along with a code editor, configured to display real-time visualization and test results as a user enters program code describing (to create) the neuromorphic model; Konertz, Paragraph [0093], teaches that the context panel may provide statistical information and performance metrics correlated to the hardware used to implement the model, and that with this information the population may be modified either by manipulating the visualization or by updating the program code section to improve system or model efficiency.). Motivation to combine same as stated in claim 1.

Regarding claim 11, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the limitations of claim 10, and the combination further teaches wherein the value of the element for each component or each unit and an indicator representing a magnitude of the value of the element are presented in association with the component or the unit that is arranged on the form (Dholakiya teaches that an important feature in the Train view is the ability to stop a particular training task, which is particularly useful if the training is not proceeding satisfactorily (e.g., too slowly or with an unacceptable loss function magnitude)).
Regarding claim 13, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the elements of claim 1, and the combination further teaches further comprising outputting a source code of the neural network based on the arrangement of the components and the unit and the properties that are set (Dholakiya, Section 4, teaches BSD-licensed source code for Expresso; Konertz, Paragraph [0009] teaches the program code includes program code to generate contextual feedback in a neuromorphic model comprising one or more asset to be monitored during development of the neuromorphic model. The program code further includes program code to display an interactive context panel to show a representation based on the contextual feedback.; Konertz, Paragraph [0039] further teaches the processing unit 202 and its output connections may also be emulated by a software code). Motivation to combine same as stated in claim 1.

Regarding claim 14, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the limitations of claim 13, and the combination further teaches wherein the outputting a source code further includes generating the source code that maintains a configuration of the unit (Konertz, Paragraph [0077] teaches the present disclosure is directed to a context panel that provides real-time information during all stages of the neuromorphic model development process. In some aspects, the context panel may be a user interface that is provided along with a code editor. The context panel may be configured to display real-time visualization and test results as a user enters program code describing (to create) the neuromorphic model.; Konertz, Paragraph [0116] teaches the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in hardware, an example hardware configuration may comprise a processing system in a device.). Motivation to combine same as stated for claim 1.

Regarding claim 18 (as amended), claim 18 recites the same and/or analogous limitations as claim 1 (as amended). Therefore, claim 18 is rejected under the same rationale and motivation as claim 1 (as amended). Dholakiya further teaches an information processing apparatus comprising: a processor configured to provide a form for creating a program to build a neural network based on arrangement of components and properties that are set in the components, each of the components representing a layer of the neural network (Dholakiya, Abstract, teaches providing a convenient wizard-like graphical interface which guides the user through various common scenarios – data import, construction, and training of deep neural networks; Dholakiya, page 2, section 3.2.1 further teaches once the option is chosen to construct the network from scratch, the layers which are associated with data are added…As each layer is completed, the net design so far can be viewed.; see Fig. 2 and Fig. 3, understood to read on each of the components representing a layer of the neural network; Dholakiya, Section 4 teaches Expresso [i.e., the graphical interface tool] is written in Python and has been developed on machines running Linux Ubuntu 14.04.).

Regarding claim 19 (as amended), claim 19 recites the same and/or analogous limitations as claim 1 (as amended). Therefore, claim 19 is rejected under the same rationale and motivation as claim 1. Dholakiya further teaches a processor configured to provide a form for creating a program to build a neural network based on arrangement of components and properties that are set in the components, each of the components representing a layer of the neural network (Dholakiya, Abstract, teaches providing a convenient wizard-like graphical interface which guides the user through various common scenarios – data import, construction, and training of deep neural networks; Dholakiya, page 2, section 3.2.1 further teaches once the option is chosen to construct the network from scratch, the layers which are associated with data are added…As each layer is completed, the net design so far can be viewed.; see Fig. 2 and Fig. 3, understood to read on each of the components representing a layer of the neural network; Dholakiya, Section 4 teaches Expresso [i.e., the graphical interface] is written in Python and has been developed on machines running Linux Ubuntu 14.04.). Konertz further teaches a non-transitory computer-readable medium storing a program for causing a computer to function as an information processing apparatus (Konertz, paragraph [0008] teaches the computer program product includes a non-transitory computer readable medium having encoded thereon program code… The program code further includes program code to display an interactive context panel; Konertz, paragraph [0016] further teaches FIG. 5 illustrates an example implementation of designing a neural network using a general-purpose processor in accordance with certain aspects of the present disclosure.).
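The concept the rejection maps across claims 13, 18, and 19 above (layer components with set properties arranged on a form, from which source code is then emitted, with each component's properties passed as arguments of the corresponding function, as in claim 20) can be sketched minimally as follows. Every identifier here is hypothetical and for illustration only; it is not code from Dholakiya, Konertz, or any other reference of record.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One layer component arranged on the form (hypothetical structure)."""
    kind: str                                 # e.g. "Dense", "ReLU"
    properties: dict = field(default_factory=dict)

def emit_source(components):
    """Walk the arranged components in order and emit source code in which
    each component's properties are passed as arguments of its function."""
    lines = ["x = input_layer()"]
    for comp in components:
        args = ", ".join(f"{k}={v!r}" for k, v in comp.properties.items())
        call = f"{comp.kind.lower()}(x, {args})" if args else f"{comp.kind.lower()}(x)"
        lines.append(f"x = {call}")
    return "\n".join(lines)

net = [Component("Dense", {"units": 128}), Component("ReLU")]
print(emit_source(net))
# x = input_layer()
# x = dense(x, units=128)
# x = relu(x)
```

The point of the sketch is only the data flow at issue: arrangement plus per-component properties in, generated source code out.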
Regarding claim 20, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the limitations of claim 13, and the combination further teaches wherein the outputting a source code further includes generating the source code in which the properties of the unit are passed as an argument of a function of the neural network corresponding to the unit (JP2003-241965A, paragraph [0015] teaches the data component definition information 111 includes field information of each data component prepared in advance. The function definition information 112a is, for each function prepared in advance and each newly created function, the name and input/output direction of each input/output terminal, the inherited part name, the icon data name, and the presence/absence of an inverse function definition [Note: here the properties of the unit (i.e., the data component prepared in advance) would be the inherited part name and the icon name – these are all part of the function definition information 112a which are passed as an argument of the function of the neural network]…The program editor 21 also causes the CPU 5 to be a connection display unit 214 that displays the connection state between the terminals designated by the user on the display device 3, and a user-specified component part of the component part group that constitutes the program. A sequence number setting unit 215 sets a number indicating the execution sequence when a program designated by the user is executed, and program elements cause each function means of the program code generation unit 216 to function. The program code generation unit 216 generates the code of the program being created. [Note: program code generation here reading on outputting source code as claimed]; Paragraph [0015] further teaches (B5) each field of the data part and each argument of the function and the predicate are collectively called a “terminal”.; Paragraph [0088] further teaches a data component having a field to which data is set as a terminal is utilized by utilizing the software componentization technique of expressing the connecting means of the software component as a link component. A group of software parts classified into a link part having an argument as a terminal is defined as a group of element parts of the program configuration). Motivation to combine same as stated above for claim 1.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Dholakiya in view of Konertz, Birdwell, and JP2003-241965A, as applied to claim 1 above, and further in view of Xia et al. (U.S. Patent No. 10810491).

Regarding claim 6, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the limitations of claim 5. Examiner believes that Dholakiya teaches or at least suggests the limitation of wherein the providing a form further includes displaying a parameter that is changed by processing the plurality of components that form the unit in association with the predetermined visual representation corresponding to the unit (Dholakiya, Figure 2 – using the intelli-sense editor for creating and modifying layers of a deep net; Dholakiya, Section 3.2 further teaches the net view is utilized for constructing deep neural network architectures (nets) – either from scratch or by modifying existing nets). However, Xia more clearly teaches the limitation as provided below.

Xia teaches wherein the providing a form further includes displaying a parameter that is changed by processing the plurality of components that form the unit in association with the predetermined visual representation corresponding to the unit (Xia, Abstract, teaches a visualization tool for machine learning models; Xia, Col. 4, lines 22-59 teaches the visualization manager may process and correlate the model metadata collected from various nodes. Metrics which can be used to compare different concurrently trained model variants may be generated and displayed using a dynamically updated easy-to-understand visualization interface…A number of different programmatic controls may be provided to clients enabling them to drill down into the details of selected internal model layers, to select specific model variants whose metrics are to be compared visually, to replay changes that have occurred during successive training iterations, and so on. [Note: the “metrics” understood to read on the displayed parameter that changes by processing the plurality of components that form the unit, as claimed]).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the graphical interface for constructing neural networks, as taught by Dholakiya in view of Konertz, Birdwell, and JP2003-241965A, to further include the machine learning model visualization features, as taught by Xia. By comparing metrics and parameters corresponding to the Nth iteration of two different models, it may become easier to determine whether both models are worth training further using their current parameters, or whether it may make sense to modify the parameters of one or both of the models and/or restart the training phase. (Xia, Col. 4, lines 53-59).

Claims 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Dholakiya in view of Konertz, Birdwell, and JP2003-241965A, as applied to claim 1 above, and further in view of JP3631443B2 and Fieldsend et al., “Pareto Evolutionary Neural Networks”, IEEE, 2005.

Regarding claim 15, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the limitations of claim 1; however, the combination does not distinctly disclose further comprising: generating another neural network having a different network structure from a neural network that has been evaluated; acquiring a result of evaluating the generated neural network; and updating a Pareto optimum solution of the neural network having been evaluated based on the result of evaluating the generated neural network, wherein the generating further includes generating the another neural network having a different network structure from the neural network of the Pareto optimum solution. Nevertheless, JP3631443B2 teaches generating another neural network having a different network structure from a neural network that has been evaluated and acquiring a result of evaluating the generated neural network (JP3631443B2, Paragraph [0012] teaches generating a new neural network; Paragraph [0024] further teaches each of the neural networks stored in advance in the solution candidate neural network set storage unit 101 is evaluated by the same method as the evaluation performed on the new neural network in step S4, and it is assumed that the evaluation result (for example, an evaluation function value) is stored in association with each neural network.).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the graphical interface for constructing neural networks, as taught by Dholakiya in view of Konertz, Birdwell, and JP2003-241965A, to further include the additional neural network(s), as taught by JP3631443B2. According to the present invention, it is possible to optimize the neural network coupling weight based on a general model evaluation function, which is better than the conventional optimization. (JP3631443B2, Paragraph [0075]).

However, the combination in view of JP3631443B2 does not distinctly disclose updating a Pareto optimum solution of the neural network having been evaluated based on the result of evaluating the generated neural network, wherein the generating further includes generating the another neural network having a different network structure from the neural network of the Pareto optimum solution. Nevertheless, Fieldsend teaches updating a Pareto optimum solution of the neural network having been evaluated based on the result of evaluating the generated neural network, wherein the generating further includes generating the another neural network having a different network structure from the neural network of the Pareto optimum solution (Fieldsend, Abstract, teaches a novel methodology for implementing multiobjective optimization within the evolutionary neural network (ENN) domain. This methodology enables the parallel evolution of a population of ENN models which exhibit estimated Pareto optimality with respect to multiple error measures. A new method is derived from this framework, the Pareto evolutionary neural network (Pareto-ENN). The Pareto-ENN evolves a population of models that may be heterogeneous in their topologies, inputs, and degree of connectivity, and maintains a set of the Pareto optimal ENNs that it discovers).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the graphical interface for constructing neural networks, as taught by Dholakiya in view of Konertz, Birdwell, JP2003-241965A, and JP3631443B2, to further include the parallel evolution of a population of ENN models which exhibit estimated Pareto optimality with respect to multiple error measures, as taught by Fieldsend. “Once a set of MOENNs has been generated, that lie upon an estimate of the Pareto surface in the error space, a practitioner not only gains knowledge with respect to the error interactions of their problem, but they also have an opportunity to select an individual model that represents their error tradeoff preferences, or a group of models if so desired.” (Fieldsend, page 352, Col. 1, ¶ 2).

Regarding claim 16, the combination of Dholakiya in view of Konertz, Birdwell, JP2003-241965A, JP3631443B2, and Fieldsend teaches all of the limitations of claim 15, and the combination further teaches wherein the generating further includes determining whether to change the network structure of the unit based on structure search permissibility that is set in the unit (JP3631443B2, paragraph [0024] teaches the initial neural network generation method may be in accordance with a conventional method (e.g., the structure of the neural network is created or selected by the user, and the connection weight is randomly generated by a predetermined algorithm).). Motivation to combine same as stated for claim 15 above.

Regarding claim 17, the combination of Dholakiya in view of Konertz, Birdwell, JP2003-241965A, JP3631443B2, and Fieldsend teaches all of the limitations of claim 15, and the combination further teaches wherein the generating further includes determining whether to change a value of an argument that is used in the unit based on structure search permissibility that is set in the argument (Fieldsend, page 341, Col. 1, ¶ 4, teaches topography and input feature selection is implemented within the Pareto-ENN model by bit mutation of the section of the decision vector representing the ENN architecture…Manipulation of structure is stochastic…By bit flipping the genes in the subsequent binary section of the decision vector the hidden ENN topography is manipulated.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the graphical interface for constructing neural networks, as taught by Dholakiya in view of Konertz, Birdwell, JP2003-241965A, and JP3631443B2, to further include the parallel evolution of a population of ENN models which exhibit estimated Pareto optimality with respect to multiple error measures, as taught by Fieldsend. “Once a set of MOENNs has been generated, that lie upon an estimate of the Pareto surface in the error space, a practitioner not only gains knowledge with respect to the error interactions of their problem, but they also have an opportunity to select an individual model that represents their error tradeoff preferences, or a group of models if so desired.” (Fieldsend, page 352, Col. 1, ¶ 2).

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Dholakiya in view of Konertz, Birdwell, and JP2003-241965A, as applied to claim 1 above, and further in view of Hachiya (US 20180005106 A1, filed Jun. 28, 2017 and published Jan. 4, 2018).

Regarding claim 21, the combination of Dholakiya in view of Konertz, Birdwell, and JP2003-241965A teaches all of the limitations of claim 1; however, the combination does not distinctly disclose wherein the presenting statistical information further includes presenting a change in a value of the statistical information on the neural network in response to a change in the value of the defined argument that is commonly usable among the plurality of components that form the unit.
Nevertheless, Hachiya teaches wherein the presenting statistical information further includes presenting a change in a value of the statistical information on the neural network in response to a change in the value of the defined argument that is commonly usable among the plurality of components that form the unit (Hachiya, [0013] teaches an information processing method performed by an information processing apparatus, comprising: performing a first calculation to obtain an output value of a first neural network for input data in correspondence with each category; performing a second calculation to obtain an output value of a second neural network for the input data in correspondence with each category, the second neural network being generated by changing a designated unit in the first neural network; performing a third calculation to obtain, for each category, change information representing a change between the output value obtained by the first calculation and the output value obtained by the second calculation; and outputting information representing contribution of the designated unit to a display device based on the change information obtained by the third calculation. [Note: JP2003-241965A has already been shown to teach controlling the defined argument for the unit (see JP2003-241965A, at [0015] and [claim 1], as stated above in the rejection of claim 1). Furthermore, Konertz has already been shown to teach, in the rejection of claim 1 above, “presenting statistical information on the neural network on the form” – see Konertz at [0007], [0082], and [0083].]).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the graphical interface for constructing neural networks, as taught by Dholakiya in view of Konertz, Birdwell, and JP2003-241965A, with the displayed changes in the output (i.e., statistical information) as a result of changes on a specific parameter (i.e., the defined argument), as taught by Hachiya.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEATRIZ RAMIREZ BRAVO, whose telephone number is 571-272-2156. The examiner can normally be reached Mon.-Fri., 7:30 a.m.-5:00 p.m. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, USMAAN SAEED, can be reached at 571-272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.R.B./
Examiner, Art Unit 2146

/USMAAN SAEED/
Supervisory Patent Examiner, Art Unit 2146

Prosecution Timeline

Sep 21, 2020 — Application Filed
May 11, 2023 — Non-Final Rejection — §103
Jul 27, 2023 — Response Filed
Oct 26, 2023 — Final Rejection — §103
Jan 12, 2024 — Response after Non-Final Action
Apr 15, 2024 — Request for Continued Examination
Apr 17, 2024 — Response after Non-Final Action
Jul 05, 2024 — Non-Final Rejection — §103
Oct 07, 2024 — Response Filed
Feb 18, 2025 — Final Rejection — §103
Jun 26, 2025 — Request for Continued Examination
Jul 02, 2025 — Response after Non-Final Action
Oct 31, 2025 — Non-Final Rejection — §103
Jan 28, 2026 — Response Filed
Mar 17, 2026 — Final Rejection — §103
Apr 06, 2026 — Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586348
FEATURE FUSION FOR MULTI-MODAL MACHINE LEARNING ANALYSIS
2y 5m to grant • Granted Mar 24, 2026
Patent 12579417
METHODS AND SYSTEMS OF OPERATING A NEURAL CIRCUIT IN A NON-VOLATILE MEMORY BASED NEURAL-ARRAY
2y 5m to grant • Granted Mar 17, 2026
Patent 12536420
Low Power Generative Adversarial Network Accelerator and Mixed-signal Time-domain MAC Array
2y 5m to grant • Granted Jan 27, 2026
Patent 12536405
METHODS AND SYSTEMS FOR NEURAL AND COGNITIVE PROCESSING
2y 5m to grant • Granted Jan 27, 2026
Patent 12530570
METHODS AND SYSTEMS OF OPERATING A NEURAL CIRCUIT IN A NON-VOLATILE MEMORY BASED NEURAL-ARRAY
2y 5m to grant • Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 63%
Grant Probability with Interview (+28.9%): 92%
Median Time to Grant: 4y 7m
PTA Risk: High
Based on 97 resolved cases by this examiner. Grant probability derived from career allow rate.
