Prosecution Insights
Last updated: April 19, 2026
Application No. 17/791,373

CONVERSION METHOD AND APPARATUS FOR DEEP LEARNING MODEL, SERVER, AND STORAGE MEDIUM

Final Rejection (§101, §103)
Filed: Dec 21, 2022
Examiner: MARU, MATIYAS T
Art Unit: 2148
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Shenzhen Corerain Technologies Co. Ltd.
OA Round: 2 (Final)

Grant Probability: 58% (Moderate)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 4y 6m
Grant Probability with Interview: 70%

Examiner Intelligence

Career Allow Rate: 58% of resolved cases (23 granted / 40 resolved; +2.5% vs TC avg)
Interview Lift: +12.5% (moderate lift) among resolved cases with interview
Typical Timeline: 4y 6m average prosecution; 39 applications currently pending
Career History: 79 total applications across all art units

Statute-Specific Performance

§101: 35.9% (-4.1% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 1.9% (-38.1% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
Tech Center averages are estimates; based on career data from 40 resolved cases.

Office Action

Rejections: §101, §103
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12/19/2025 ("Remarks") have been fully considered, but they are not persuasive.

In the Remarks (pp. 8-9), Applicant contends: "The Examiner's characterization does not fully account for the technical problem addressed by the present application or the specific technical solution recited in the claims. Substantial differences exist between a data flow architecture and an instruction set architecture, particularly in terms of data representation and operator granularity. In a data flow architecture, operator granularity is much larger, and the order of calculation modules must be predetermined based on data dependencies. Due to these architectural differences, a model trained under an instruction set architecture cannot be directly deployed on a data flow architecture, significantly impeding practical deployment of deep learning applications. Accordingly, the technical problem addressed is how to convert a deep learning model designed for an instruction set architecture into a model suitable for a data flow architecture so that it can process operators within the data flow environment. The steps of amended claim 1 implement a concrete technical solution:…"

Regarding the above argument, the Examiner respectfully disagrees. The asserted improvement is not supported by sufficient technical detail in the claim language: the claim recites parsing, converting, adjusting, and obtaining various computation graphs without specifying how these operations are technically implemented or how they improve the underlying computer technology.
In particular, the amended limitations describe parsing, converting, adjusting, and obtaining computation graphs only at a high functional level, without specifying the underlying algorithms, data structures, architectural constraints, or performance optimizations. As a result, the claim lacks technical detail demonstrating how the recited steps are implemented or how they provide an improvement to computer technology. The claim therefore describes a sequence of abstract transformations at a high level of generality.

In the Remarks (pg. 11), Applicant contends: "Regarding feature-by-feature distinctions in amended claim 1: Feature (1) - Parsing a target deep learning model into an instruction set computation graph. The Examiner cites paragraph [0037] of Seung-so as disclosing this feature. However, according to the present application, the target deep learning model is parsed into an intermediate representation of an instruction set computation graph, wherein operator types and operation rules are explicitly parsed for conversion and fusion. In contrast, Seung-so's task manager 240 splits tasks among hardware components, optimizes hardware latency, and monitors performance, but does not parse operator types or operation rules into an instruction set computation graph. Therefore, feature (1) is not disclosed by Seung-so."

Regarding the above argument, Applicant's remarks with respect to the amended claims have been considered but are moot, because they are directed to amended claim limitations that were not previously examined. The rejections set forth in the current Office action address the amended limitations.

In the Remarks (pg. 11), Applicant further contends: "Feature (2) - Converting the instruction set computation graph into a data flow computation graph with operator fusion. The Examiner cites Figure 4 of Lingzhi.
The present application, however, reconstructs the instruction set computation graph according to operator granularity in a data flow architecture, fusing small-granularity operators into larger-granularity operators. In Lingzhi, Figure 4 only generates a first intermediate representation (IR) from model files, mapping feature maps and computational operations into a graph. Lingzhi does not disclose or suggest fusing small-granularity operators into larger operators. Therefore, feature (2) is not disclosed."

Regarding the above argument, the Examiner respectfully disagrees. Lingzhi, ¶[0011], discloses "merging the computing operations" by setting a subgraph template, obtaining a subgraph matching scheme for a computing graph, and reconstructing the graph into a second intermediate representation after the operations are merged. This describes combining multiple operations within a matched subgraph into a reconstructed representation based on hardware attributes. Such merging of computing operations via subgraph templates reasonably corresponds to fusing smaller-granularity operations into larger composite operations.

In the Remarks (pg. 12), Applicant contends: "Feature (3) - Adjusting the data flow computation graph to a customized architecture. The Examiner cites paragraph [0050] of Seung-so. According to the present application, the intermediate representation of the data flow computation graph is adjusted to a customized architecture, representing operators and their connection relationships for a target data flow network model. In contrast, Seung-so's model parser 210 and model builder 220 only generate a visual neural network graph and do not reconstruct or rewrite operators according to a customized architecture. Therefore, feature (3) is not disclosed."

Regarding the above argument, the Examiner respectfully disagrees.
Seung-so, ¶[0036], discloses that the model optimizer can replace, merge, or divide and adjust the respective hardware operations so that each sub-model and the hardware operation devices can correspond. This indicates modification of operations within the neural network model to align with differing hardware operation devices, which amounts to rewriting or reconstructing operators according to hardware-specific requirements.

As to the remaining dependent claims, Applicant argues that they are allowable due to their respective direct and indirect dependencies upon one of the aforementioned independent claims. The Examiner respectfully disagrees: the independent claims are not allowable, as stated above in this "Response to Arguments" section.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4 and 7-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

In Step 1 of the §101 analysis set forth in MPEP 2106, the examiner has determined that the claims recite subject matter that, under the broadest reasonable interpretation, falls within one of the statutory categories (a process).
In Step 2A, Prong 1, of the §101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a mental process but for the recitation of generic computer components.

Regarding claim 1:

"Parsing [ ] a target deep learning model into an intermediate representation of an instruction set computation graph" - under the broadest reasonable interpretation, this limitation recites an abstract idea (mental process): it involves analyzing a model and breaking it down into a structured representation that captures its components and operation. See MPEP 2106.04.

"converting [ ] the intermediate representation of the instruction set computation graph into an intermediate representation of a data flow computation graph" - under the broadest reasonable interpretation, this limitation recites an abstract idea (mental process): it involves converting an instruction set computation graph into a data flow graph by observing the relationships between nodes and edges. See MPEP 2106.04.

"adjusting, [ ], the intermediate representation of the data flow computation graph to an intermediate representation of a customized architecture" - under the broadest reasonable interpretation, this limitation recites an abstract idea (mental process): it involves reviewing a representation of a computation graph and evaluating the requirements of a target architecture. See MPEP 2106.04.

If the claim limitations, under their broadest reasonable interpretation, cover performance as a mental process but for the recitation of generic computer components, then they fall within the mental-processes grouping. Accordingly, the claim recites an abstract idea.
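For readers tracing the recited method, the claimed parse → convert/fuse → adjust sequence can be sketched as a toy pipeline. All names, operator types, and fusion rules below are hypothetical illustrations, not the applicant's implementation or code from any cited reference:

```python
# Hypothetical sketch of the claimed flow: parse a model description into a
# fine-grained "instruction set" computation graph, fuse small-granularity
# operators into larger data-flow operators, then emit operator types and
# connection relationships for a customized architecture. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Op:
    name: str
    op_type: str                        # e.g. "conv", "bias_add", "relu"
    inputs: list = field(default_factory=list)

def parse_to_instruction_graph(layers):
    """Step 1: break a model description into fine-grained operators."""
    ops, prev = [], None
    for i, op_type in enumerate(layers):
        op = Op(f"{op_type}_{i}", op_type, [prev] if prev else [])
        ops.append(op)
        prev = op.name
    return ops

# A fusion rule maps a run of fine-grained op types to one coarse operator.
FUSION_RULES = {("conv", "bias_add", "relu"): "fused_conv_block"}

def convert_to_dataflow_graph(ops):
    """Step 2: fuse small-granularity operators into larger-granularity ones."""
    fused, i = [], 0
    types = [op.op_type for op in ops]
    while i < len(types):
        for pattern, fused_type in FUSION_RULES.items():
            if tuple(types[i:i + len(pattern)]) == pattern:
                fused.append(Op(f"{fused_type}_{i}", fused_type))
                i += len(pattern)
                break
        else:
            fused.append(ops[i])
            i += 1
    return fused

def adjust_to_customized_architecture(ops):
    """Step 3: record operator types and connection relationships."""
    return [(op.op_type, list(op.inputs)) for op in ops]

if __name__ == "__main__":
    ir = parse_to_instruction_graph(["conv", "bias_add", "relu", "pool"])
    df = convert_to_dataflow_graph(ir)
    print([op.op_type for op in df])    # → ['fused_conv_block', 'pool']
```

The sketch makes concrete why granularity matters in the dispute above: the fine-grained graph has four operators, while the data-flow graph has two, one of which is a coarse composite.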
In Step 2A, Prong 2, of the §101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate the judicial exception into a practical application:

(I) "A computer-implemented conversion method for a deep learning model on a computer comprising a processor, the method comprises" - Deemed insufficient to transform the judicial exception into a patent-eligible invention because the limitation amounts to no more than a recitation of the words "apply it" (or an equivalent), i.e., mere instructions to implement an abstract idea on a computer. See MPEP 2106.05(f).

(II) "… using the processor, …" - Deemed insufficient for the same reason: mere instructions to implement an abstract idea on a computer. See MPEP 2106.05(f).

(III) "wherein the instruction set computation graph defines types of operator and an operation rule between operators of the target deep learning model" - Deemed insufficient because the additional limitation simply links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).
(IV) "wherein the operator in the intermediate representation of the instruction set computation graph is the first operator, and an operator in the intermediate representation of the data flow computation graph is a second operator, and the first operator in the intermediate representation of the instruction set computation graph is fused into the second operator in the intermediate representation of the data flow computation graph according to the operator granularity of a data flow" - Deemed insufficient because the additional limitation simply links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).

(V) "obtaining [ ] a converted target data flow network model corresponding to the target deep learning model according to the intermediate representation of the customized architecture" - Deemed insufficient because the limitation is directed to mere data gathering, which is insignificant extra-solution activity. See MPEP 2106.05(g).

(VI) "wherein the intermediate representation of the customized architecture comprises types of operator and a connection relationship between operators of the target data flow network model" - Deemed insufficient because the additional limitation simply links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).
In Step 2B of the §101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements sufficient to amount to significantly more than the judicial exception:

Limitations (I) and (II) recite mere instructions to implement an abstract idea on a computer and are deemed insufficient to transform the judicial exception into a patent-eligible invention because they generally apply a generic computer and/or process to the judicial exception. See MPEP 2106.05(f).

Limitation (V), considered extra-/post-solution activity as analyzed above, is activity the courts have recognized as well-understood, routine, and conventional computer functions, e.g., receiving or transmitting data over a network: Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). See MPEP 2106.05(d)(II).

Limitations (III), (IV), and (VI) are deemed insufficient to transform the judicial exception into a patent-eligible invention because they generally link the judicial exception to a technological environment. See MPEP 2106.05(h).
As analyzed above, the additional elements do not integrate the noted judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Regarding claim 7: the remaining limitations recite similar subject matter as claim 1 and are rejected under the same rationale. As to the additional elements:

"A conversion apparatus for a deep learning model, comprising: a storage apparatus configured to store one or more programs" - Deemed insufficient to transform the judicial exception into a patent-eligible invention because the limitation is directed to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, amounting to adding the words "apply it" (or an equivalent) to the judicial exception. See MPEP 2106.05(f).

"wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement:" - Deemed insufficient for the same reason. See MPEP 2106.05(f).

In Step 2B of the §101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements sufficient to amount to significantly more than the judicial exception: limitations (I) and (II) recite mere instructions to implement an abstract idea on a computer and generally apply a generic computer and/or process to the judicial exception. See MPEP 2106.05(f).
Regarding claim 9: the remaining limitations recite similar subject matter as claim 1 and are rejected under the same rationale. As to the additional elements:

"A server, comprising: one or more processors, and a storage apparatus configured to store one or more programs;" - Deemed insufficient to transform the judicial exception into a patent-eligible invention because the limitation is directed to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, amounting to adding the words "apply it" (or an equivalent) to the judicial exception. See MPEP 2106.05(f).

"wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a conversion method for a deep learning model" - Deemed insufficient for the same reason. See MPEP 2106.05(f).

In Step 2B of the §101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements sufficient to amount to significantly more than the judicial exception: limitations (I) and (II) recite mere instructions to implement an abstract idea on a computer and generally apply a generic computer and/or process to the judicial exception. See MPEP 2106.05(f).

Regarding claim 2: the claim depends from claim 1 and fails to resolve the deficiencies identified above, either by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception.
The claim recites: "wherein the target deep learning model comprises a first operator granularity, the intermediate representation of the instruction set computation graph comprises a second operator granularity, and the intermediate representation of the data flow computation graph comprises a third operator granularity." This additional limitation simply links the judicial exception to a field of use and/or technological environment (see MPEP 2106.05(h)); limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Claim 8 recites similar subject matter as claim 2 and is rejected under the same rationale.

Regarding claim 3: the claim depends from claim 2 and fails to resolve the deficiencies identified above, either by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites: "wherein the first operator granularity is the same as the second operator granularity." This additional limitation simply links the judicial exception to a field of use and/or technological environment (see MPEP 2106.05(h)) and cannot integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Claim 11 recites similar subject matter as claim 3 and is rejected under the same rationale.

Regarding claim 4: the claim depends from claim 2 and fails to resolve the deficiencies identified above, either by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites: "wherein the second operator granularity is less than the third operator granularity." This additional limitation simply links the judicial exception to a field of use and/or technological environment. See MPEP 2106.05(h).
Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Claim 12 recites similar subject matter as claim 4 and is rejected under the same rationale.

Regarding claim 10: the claim depends from claim 1 and fails to resolve the deficiencies identified above, either by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites: "A non-transitory computer-readable storage medium storing a computer program." This limitation is deemed insufficient to transform the judicial exception into a patent-eligible invention because it is directed to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, amounting to adding the words "apply it" (or an equivalent) to the judicial exception (see MPEP 2106.05(f)); as analyzed above, it fails to integrate the judicial exception into a practical application at Step 2A or to provide an inventive concept at Step 2B. The claim further recites: "wherein the computer program when executed by a processor, implements the conversion method for a deep learning model according to claim 1." This additional limitation simply links the judicial exception to a field of use and/or technological environment (see MPEP 2106.05(h)) and cannot integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 7, and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Seung-so, Pub. No. KR20210023401A (Google translation), in view of Lingzhi et al., Pub. No. CN110764744A (Google translation), Reference – A, Pub. No. CN111160551A (Google translation), and Potharaju et al., Pub. No. US20200034185A1.

Regarding claim 1, Seung-so teaches:

A computer-implemented conversion method for a deep learning model, on a computer comprising a processor, the method comprises: (Seung-so, "[0028] Meanwhile, the computer system (1000) can execute various types of applications, and the applications can request the deep learning framework (200) to perform operations of the same or different hardware computing devices (300).
At this time, the deep learning framework (200) [a computer-implemented conversion method for a deep learning model, on a computer comprising a processor] can be performed in a non-blocking mode so that pipelining can be performed so that the different hardware computing devices (300) can perform operations in parallel and simultaneously, and even in the non-blocking mode, the computing path and the hardware computing processing time (latency) of each hardware computing device (300) can be changed so as to increase the hardware utilization of the different hardware computing devices (300) and reduce the overall hardware computing processing time (latency).")

Parsing, using the processor, a target deep learning model into an intermediate representation of an instruction set computation graph; (Seung-so, "[0037] The task manager (240) can divide neural network (NN) models into multiple sub-models [parsing a target deep learning model into an intermediate representation of an instruction set computation graph], and the divided sub-models are allocated to each hardware operation unit (300) so that each hardware operation unit (300) [using the processor] performs pipelining.")

adjusting, using the processor, the intermediate representation of the data flow computation graph to an intermediate representation of a customized architecture; and (Seung-so, "[0036] The model optimizer (230) can adjust the neural network (NN) model in which the graph structure is generated. Depending on the embodiment, since the operations required for each hidden layer may be different for each sub-model including each hidden layer in the neural network (NN) model, the operations required for each sub-model may also be different. Accordingly, each sub-model may be operated by different hardware operation devices (300) [using the processor] with different operations.
The model optimizer (230) can replace, merge, or divide and adjust the respective hardware operations so that each sub-model and the hardware operation devices (300) can correspond [adjusting, [ ], the intermediate representation of the data flow computation graph to an intermediate representation of a customized architecture]. Depending on the adjustment, the hardware operation processing time may change, and accordingly, the total hardware operation processing time for operating the entire model may be measured, and the minimum value among the measured total hardware operation processing times may be obtained.")

obtaining, using the processor, a converted target data flow network model corresponding to the target deep learning model (Seung-so, "[0050] The model parser (210) can read a neural network (NN) model file to obtain and parse information about the neural network (NN) model [obtaining a converted target data flow network model corresponding to the target deep learning model]. The obtained information can be transmitted to the model builder (220), and the obtained information can be used to create a graph structure of the neural network (NN) model [using the processor].")

according to the intermediate representation of the customized architecture (Seung-so, "[0008] According to some embodiments for achieving the above technical problem, a neural network operation system includes a model parser that reads a neural network model file to obtain information of a neural network model, a model builder that generates a graph structure of a neural network model using the information of the neural network model, a model optimizer that adjusts the graph structure of the neural network model to correspond to each operation of a first hardware operation device and a second hardware operation device whose operation is different from the first hardware operation device, and a task manager that divides the neural network model into a first sub-model and a second sub-model, pipelines the first and second sub-models [according to the intermediate representation of the customized architecture] by assigning them to the first and second hardware operation devices, respectively, and detects a minimum value among the total hardware operation processing times obtained through a change in the hardware operation processing time of at least one of the first and second sub-models.")

Seung-so does not teach:

converting, using the processor, the intermediate representation of the instruction set computation graph into an intermediate representation of a data flow computation graph;

wherein the instruction set computation graph defines types of operator and an operation rule between operators of the target deep learning model;

the first operator in the intermediate representation of the instruction set computation graph is fused into the second operator in the intermediate representation of the data flow computation graph according to the operator granularity of a data flow;

wherein the operator in the intermediate representation of the instruction set computation graph is the first operator, and an operator in the intermediate representation of the data flow computation graph is a second operator;

wherein the intermediate representation of the customized architecture comprises types of operator and a connection relationship between operators of the target data flow network model.

Lingzhi teaches:

Converting, using the processor, the intermediate representation of the instruction set computation graph into an intermediate representation of a data flow computation graph; (Lingzhi, "[0037] – [0042] FIG. 4 shows a flow diagram of an intermediate representation generation [converting, using the processor, the intermediate representation of the instruction set computation graph] method according to one embodiment of the invention. In step S410, the input model file is parsed to acquire topology information of the neural network.
In step S420, feature graph information and calculation operation information in the topology information are used as nodes and edges, respectively, to generate a first intermediate representation in the form of a graph [into an intermediate representation of a data flow computation graph].")

Lingzhi and Seung-so are related to the same field of endeavor (i.e., deep learning frameworks). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Lingzhi with the teachings of Seung-so to introduce intermediate representations that enable optimization and portability across frameworks and hardware, allowing a system to not only minimize hardware latency but also adapt efficiently to diverse models and platforms (Lingzhi, Abstract).

Seung-so in view of Lingzhi does not teach:

wherein the instruction set computation graph defines types of operator and an operation rule between operators of the target deep learning model;

the first operator in the intermediate representation of the instruction set computation graph is fused into the second operator in the intermediate representation of the data flow computation graph according to the operator granularity of a data flow;

wherein the operator in the intermediate representation of the instruction set computation graph is the first operator, and an operator in the intermediate representation of the data flow computation graph is a second operator;

wherein the intermediate representation of the customized architecture comprises types of operator and a connection relationship between operators of the target data flow network model.

Reference – A teaches:

wherein the instruction set computation graph defines types of operator and an operation rule between operators of the target deep learning model; (Reference – A, "[0104] Step 2): The general-purpose processor checks the operators in the original subgraph according to the rules of the operators [wherein the instruction set computation graph defines types of operator and an operation rule between operators] in the learning library of the artificial intelligence processor, and performs a second division of the original subgraph based on the check results to obtain the target subgraph [of the target deep learning model].")

the first operator in the intermediate representation of the instruction set computation graph is fused into the second operator in the intermediate representation of the data flow computation graph according to the operator granularity of a data flow; (Reference – A, "[0105] In practice, according to the rules of the operators in the learning library of the artificial intelligence processor, the framework also needs to perform operator boundary checks on the operators in each original subgraph obtained in Step 1 [the first operator in the intermediate representation of the instruction set computation graph] (i.e., rules of operation in the learning library of the artificial intelligence processor, which reflect processor-specific execution units (instruction-level or hardware-defined granularity)), and divide the continuously executable operators into one subgraph, which is then compiled into FusionOp. This is the set of operators [the second operator in the intermediate representation of the data flow computation graph] that can truly be fused into one operator [is fused into [ ] according to the operator granularity of a data flow]. The FusionOp is stored in the Cache. When executing the optimized computation graph with the fusion operator, when running the fusion operator, it does not perform layer-by-layer execution, but directly retrieves the pre-compiled FusionOp from the Cache.")

Reference – A, Seung-so, and Lingzhi are related to the same field of endeavor (i.e., deep learning frameworks).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Reference – A with the teachings of Seung-so and Lingzhi to add generating executable binary instructions for an artificial intelligence processor based on the fusion operator’s operation instructions (Reference – A, Abstract).

Seung-so in view of Lingzhi and Reference – A do not teach:

wherein the operator in the intermediate representation of the instruction set computation graph is the first operator, and an operator in the intermediate representation of the data flow computation graph is a second operator;

wherein the intermediate representation of the customized architecture comprises types of operator and a connection relationship between operators of the target data flow network model.

Potharaju teaches: wherein the operator in the intermediate representation of the instruction set computation graph is the first operator, and an operator in the intermediate representation of the data flow computation graph is a second operator, (Potharaju, “[0054] When the sink operators within the intermediate dataflow execution graph [wherein the operator in the intermediate representation of the instruction set computation graph is the first operator] complete execution of the control messages, the operators each report the completion to the controller 210. When the controller 210 confirms that all sink operators in the dataflow execution graph [graph is a second operator] have executed the control messages (act 605), the controller 210 can understand that the intermediate dataflow execution graph [and an operator in the intermediate representation of the data flow computation] has now taken the form of the new dataflow execution graph, and that the data stream(s) is/are being fed into and processed by that new dataflow execution graph.”)

Potharaju further teaches: wherein the intermediate representation of the customized architecture comprises types of operator and a connection relationship between operators of the target data flow network model. (Potharaju, “[0058] Eventually, the intermediate dataflow execution graph 800A will collapse into the new dataflow execution graph 700 once all operators have completed processing the control message. Recall that for each operator of the intermediate dataflow execution graph 800A [wherein the intermediate representation of the customized architecture comprises types of operator and] that is not part of the new dataflow execution graph 700 (which includes all of the original operators 501, 502, 503 and 504), that operator will shut down after executing the control message received on all of its input edges. Also, for each operator of the intermediate dataflow execution graph 800A that is not part of the original dataflow execution graph 500 (which includes all of the operators 501′, 502′, 503′ and 504′) [a connection relationship between operators of the target data flow network model], that operator will begin processing data messages after executing the control message received on each of its input edges.”)

Potharaju, Seung-so, Lingzhi and Reference – A are related to the same field of endeavor (i.e.: deep learning framework).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Potharaju with the teachings of Seung-so, Lingzhi and Reference – A to add automatic tuning of the dataflow execution graph by monitoring performance parameters against service objectives and dynamically reconfiguring the graph based on the results (Potharaju, Abstract).

Regarding claim 7, Seung-so teaches: A conversion apparatus for a deep learning model, comprising: a storage apparatus configured to store one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement: (Seung-so, “[0010] According to some embodiments for achieving the above technical problem, a computing system includes a processor for controlling the overall operation of the system, a memory for storing data capable of controlling the system [A conversion apparatus for a deep learning model, comprising: a storage apparatus configured to store one or more programs;], a deep learning framework controlled by the processor [wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement], and a plurality of hardware computing devices controlled by the deep learning framework, wherein the deep learning framework includes a model parser for reading a neural network model file to obtain information of the neural network model, a model builder for generating a graph structure of the neural network model using the information of the neural network model, a model optimizer for adjusting the graph structure of the neural network model so that it corresponds to each operation of a first hardware computing device and a second hardware computing device whose operation is different from the first hardware computing device, a task manager for dividing the neural network model into a first sub-model and a second sub-model, allocating the first and second sub-models to the first and second hardware computing devices, respectively, and performing pipelining, and detecting a minimum value among the total hardware computing processing times obtained through a change in the hardware computing processing time of at least one of the first and second sub-models.”) The rest of the limitations are analogous to claim 1, so are rejected under similar rationale.

Regarding claim 9, Seung-so teaches: A server, comprising: one or more processors, and a storage apparatus configured to store one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a conversion method for a deep learning model (Seung-so, “[0010] According to some embodiments for achieving the above technical problem, a computing system includes a processor for controlling the overall operation of the system, a memory for storing data capable of controlling the system [and a storage apparatus configured to store one or more programs;], a deep learning framework controlled by the processor [A server, comprising: one or more processors, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a conversion method for a deep learning model], and a plurality of hardware computing devices controlled by the deep learning framework, wherein the deep learning framework includes a model parser for reading a neural network model file to obtain information of the neural network model, a model builder for generating a graph structure of the neural network model using the information of the neural network model, a model optimizer for adjusting the graph structure of the neural network model so that it corresponds to each operation of a first hardware computing device and a second hardware computing device whose operation is different from the first hardware computing device, a task manager for dividing the neural network model into a first sub-model and a second sub-model, allocating the first and second sub-models to the first and second hardware computing devices, respectively, and performing pipelining, and detecting a minimum value among the total hardware computing processing times obtained through a change in the hardware computing processing time of at least one of the first and second sub-models.”) The rest of the limitations are analogous to claim 1, so are rejected under similar rationale.

Regarding claim 10, Seung-so in view of Lingzhi, Reference – A and Potharaju teach the method of claim 1. Seung-so further teaches: A non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the conversion method for a deep learning model according to claim 1. (Seung-so, “[0017] Referring to FIG. 1, a computer system (1000) includes a processor (100) [wherein the computer program when executed by a processor, implements the conversion method for a deep learning model according to claim 1], a deep learning framework (200), a hardware computing device (300), RAM (400) (Random Access Memory) and memory (500) [A non-transitory computer-readable storage medium storing a computer program,], and according to an embodiment, at least some of the components of the computer system (1000) can be mounted on a single semiconductor chip.”)

Claim(s) 2 – 3, 8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Seung-so in view of Lingzhi, Reference – A and Potharaju, and in further view of Du et al., Pub. No.: US20200089535A1.

Regarding claim 2, Seung-so in view of Lingzhi, Reference – A and Potharaju teach the method of claim 1.
Seung-so in view of Lingzhi, Reference – A and Potharaju do not teach: wherein the target deep learning model comprises a first operator granularity, the intermediate representation of the instruction set computation graph comprises a second operator granularity, and the intermediate representation of the data flow computation graph comprises a third operator granularity.

Du teaches: wherein the target deep learning model comprises a first operator granularity, the intermediate representation of the instruction set computation graph comprises a second operator granularity, and the intermediate representation of the data flow computation graph comprises a third operator granularity (Du, “[0010] In some embodiment, the granularity task segmentation unit includes at least one of the following units: a first granularity task segmentation unit configured to take the whole task as one of the subtasks; [a first operator granularity, the intermediate representation of the instruction set computation graph] a second granularity task segmentation unit configured to divide sample data associated with the task into one or more subset of sample data [comprises a second operator granularity, and the intermediate representation of the data flow computation graph], and identify a computation of each subset of sample data as one of the subtasks; a third granularity task segmentation unit configured to segment the task according to layer types of a neural network [comprises a third operator granularity], where computation for layers of the same layer type is identified as one of the subtasks; a fourth granularity task segmentation unit configured to segment the task according to an interlayer structure of the neural network, wherein computation for multiple adjacent layers is identified as one of the subtasks; and a fifth granularity task segmentation unit configured to segment the task according to intra-layer structures of the neural network to segment computation types in each of the layers of the neural network into subtasks.”)

Du, Seung-so, Lingzhi, Reference – A and Potharaju are related to the same field of endeavor (i.e.: deep learning framework). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Du with the teachings of Seung-so, Lingzhi, Reference – A and Potharaju to improve processing performance and minimize overhead by aligning sub-model partitioning with fine-grained task segmentation and hardware configuration (Du, Abstract).

Claim 8 recites an analogous limitation as claim 2, so is rejected under the same rationale.

Regarding claim 3, Seung-so in view of Lingzhi, Reference – A, Potharaju and Du teach the method of claim 2. Du further teaches: wherein the first operator granularity is the same as the second operator granularity. (Du, “[0010] In some embodiment, the granularity task segmentation unit includes at least one of the following units: a first granularity task segmentation unit [wherein the first operator granularity] configured to take the whole task as one of the subtasks; a second granularity task segmentation unit [as the second operator granularity] configured to divide sample data associated with the task into one or more subset of sample data, and identify a computation of each subset of sample data as one of the subtasks; a third granularity task segmentation unit configured to segment the task according to layer types of a neural network, where computation for layers of the same layer type [is the same as] is identified as one of the subtasks; a fourth granularity task segmentation unit configured to segment the task according to an interlayer structure of the neural network, wherein computation for multiple adjacent layers is identified as one of the subtasks; and a fifth granularity task segmentation unit configured to segment the task according to intra-layer structures of the neural network to segment computation types in each of the layers of the neural network into subtasks.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Du with the teachings of Seung-so, Lingzhi, Reference – A and Potharaju for the same reasons disclosed for claim 2.

Claim 11 recites an analogous limitation as claim 3, so is rejected under the same rationale.

Claim(s) 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Seung-so in view of Lingzhi, Reference – A, Potharaju and Du, and in further view of Cohen et al., Pub. No.: US20190102671A1.

Regarding claim 4, Seung-so in view of Lingzhi, Reference – A, Potharaju and Du teach the method of claim 2. Seung-so in view of Lingzhi, Reference – A, Potharaju and Du do not teach: wherein the second operator granularity is less than the third operator granularity.

Cohen teaches: wherein the second operator granularity is less than the third operator granularity (Cohen, “[0327] There are further disclosed one or more tangible, non-transitory computer-readable mediums, wherein the two or more intermediate operators are Op1, Op2, and Op3, wherein the output operator is assigned a first value if Op1 [wherein the second operator granularity] (i.e.: Op1 corresponds to the second operator granularity) is less than Op3 [the third operator granularity] (i.e.: Op3 corresponds to the third operator granularity), a second value if Op1 is between Op3 and Op2, and a third value otherwise.”)

Cohen, Seung-so, Lingzhi, Reference – A, Potharaju and Du are related to the same field of endeavor (i.e.: deep learning framework).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Cohen with the teachings of Seung-so, Lingzhi, Reference – A, Potharaju and Du to pipeline sub-models across devices to achieve faster and more efficient CNN computations within each hardware unit (Cohen, Abstract).

Claim 12 recites an analogous limitation as claim 4, so is rejected under the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Rotem, et al., "Glow: Graph lowering compiler techniques for neural networks." (2018). Rotem provides a useful compiler toolkit that allows hardware developers to focus on implementing efficient acceleration hardware, each of which likely differs in capabilities, and to use Glow for automating compilation tasks such as instruction selection, memory allocation and graph scheduling. Chadha, et al., "Performance Analysis of Accelerated Linear Algebra Compiler for TensorFlow." (2017). Chadha analyzes the performance of the XLA compilation tool on machine learning algorithms such as Convolutional Neural Networks, Long Short-Term Memory, and custom control flow graphs.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATIYAS T MARU, whose telephone number is (571) 270-0902, or via email: matiyas.maru@uspto.gov. The examiner can normally be reached Monday - Friday (8:00am - 4:00pm) EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.T.M./Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148
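Editor's note: the rejection repeatedly relies on graph-form intermediate representations, e.g. Lingzhi's step S420, in which feature graph information becomes nodes and calculation operations become edges. A minimal, purely illustrative sketch of such a representation follows; the class and method names (GraphIR, add_op) are hypothetical and do not come from the application or any cited reference.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class GraphIR:
    """Graph-form IR: feature maps are nodes, operations are edges."""
    nodes: List[str] = field(default_factory=list)
    # Each edge is (source feature map, operation name, result feature map).
    edges: List[Tuple[str, str, str]] = field(default_factory=list)

    def add_op(self, src: str, op: str, dst: str) -> None:
        # Register any feature maps we have not seen yet as nodes,
        # then record the calculation operation as a directed edge.
        for n in (src, dst):
            if n not in self.nodes:
                self.nodes.append(n)
        self.edges.append((src, op, dst))


# Build a two-operation model fragment: input -conv2d-> fmap1 -relu-> fmap2.
ir = GraphIR()
ir.add_op("input", "conv2d", "fmap1")
ir.add_op("fmap1", "relu", "fmap2")
print(ir.nodes)  # prints: ['input', 'fmap1', 'fmap2']
print(ir.edges)  # prints: [('input', 'conv2d', 'fmap1'), ('fmap1', 'relu', 'fmap2')]
```

Because the edges encode data dependencies explicitly, the execution order of operators can be derived from the graph itself, which is the property the Applicant's argument attributes to data flow architectures.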

Prosecution Timeline

Dec 21, 2022: Application Filed
Sep 15, 2025: Non-Final Rejection (§101, §103)
Dec 19, 2025: Response Filed
Mar 04, 2026: Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586114
GENERATING DIGITAL RECOMMENDATIONS UTILIZING COLLABORATIVE FILTERING, REINFORCEMENT LEARNING, AND INCLUSIVE SETS OF NEGATIVE FEEDBACK
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12572796
METHODS AND SYSTEMS FOR GENERATING RECOMMENDATIONS FOR COUNTERFACTUAL EXPLANATIONS OF COMPUTER ALERTS THAT ARE AUTOMATICALLY DETECTED BY A MACHINE LEARNING ALGORITHM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12567004
METHOD OF MACHINE LEARNING TRAINING FOR DATA AUGMENTATION
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561588
Methods and Systems for Generating Example-Based Explanations of Link Prediction Models in Knowledge Graphs
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12561584
TEACHING DATA PREPARATION DEVICE, TEACHING DATA PREPARATION METHOD, AND PROGRAM
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 70% (+12.5%)
Median Time to Grant: 4y 6m
PTA Risk: Moderate

Based on 40 resolved cases by this examiner. Grant probability derived from career allow rate.
