Prosecution Insights
Last updated: April 19, 2026
Application No. 18/595,911

METHOD AND SYSTEM FOR COMPILING APPLICATIONS

Non-Final OA: §103, §DP
Filed: Mar 05, 2024
Examiner: CHEN, QING
Art Unit: 2191
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Moreh Corp.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (542 granted / 678 resolved; +24.9% vs TC avg, above average)
Interview Lift: +51.9% (strong; measured over resolved cases with interview)
Typical Timeline: 2y 10m average prosecution
Career History: 706 total applications across all art units, 28 currently pending
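As a sanity check, the headline figures above are internally consistent. The short Python snippet below uses only numbers taken from this report; reading the "+24.9% vs TC avg" delta as percentage points is an assumption, since the report does not say whether the delta is absolute or relative:

```python
# Career allowance rate reported above: 542 granted out of 678 resolved cases.
granted, resolved = 542, 678
allow_rate = granted / resolved
print(f"{allow_rate:.0%}")  # rounds to the 80% career allow rate shown above

# Assumption: "+24.9% vs TC avg" is a percentage-point gap, which would put
# the Tech Center average near 55%.
tc_avg_estimate = allow_rate - 0.249
print(f"{tc_avg_estimate:.0%}")
```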

Statute-Specific Performance

§101: 18.1% (-21.9% vs TC avg)
§103: 39.2% (-0.8% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 23.1% (-16.9% vs TC avg)
Tech Center averages are estimates; based on career data from 678 resolved cases.

Office Action

Grounds: §103, §DP
DETAILED ACTION

This is the initial Office action based on the application submitted on March 5, 2024. Claims 1-8 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: METHOD AND SYSTEM FOR DETERMINING AN EXPECTED EXECUTION TIME FOR A COMPILED GRAPH USING A COST MODEL.

Claim Objections

Claims 1, 2, 4-6, and 8 are objected to because of the following informalities:
- Claims 1, 2, 4, and 8 recite “the profiling information.” It should read -- the profiling information of the system --.
- Claims 1, 5, 6, and 8 recite “the intermediate representation.” It should read -- the intermediate representation of at least the portion of the application program --.
- Claims 1 and 8 recite “each of the plurality of compiled graphs.” It should read -- each compiled graph of the plurality of compiled graphs --.
- Claim 2 contains a typographical error: the whitespace character before the period (.) should be deleted.
- Claims 2 and 4 recite “each of a plurality of types of operations.” It should read -- each type of operation of a plurality of types of operations --.
Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b). Claims 1, 7, and 8 are provisionally rejected on the ground of non-statutory obviousness-type double patenting as being unpatentable over Claims 1, 8, and 9 of co-pending Application No. 18/595,962 (hereinafter “‘962”) in view of US 2007/0061286 (hereinafter “Liu”) and US 11,392,356 (hereinafter “Leopoldseder”). 
Examiner respectfully submits the relevant sections of MPEP §§ 804(II)(B)(1) and 804(II)(B)(1)(a) with emphasis added for purposes of convenience in discussion and illustration:

MPEP § 804(II)(B)(1) Obviousness-Type

>A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); and In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985).<

Any obviousness-type double patenting rejection should make clear: (A) The differences between the inventions defined by the conflicting claims — a claim in the patent compared to a claim in the application; and (B) The reasons why a person of ordinary skill in the art would conclude that the invention defined in the claim at issue >is anticipated by, or< would have been an obvious variation of >,< the invention defined in a claim in the patent.

MPEP § 804(II)(B)(1)(a) One-Way Obviousness

If the application at issue is the later filed application or both are filed on the same day, only a one-way determination of obviousness is needed in resolving the issue of double patenting, i.e., whether the invention defined in a claim in the application would have been >anticipated by, or< an obvious variation of >,< the invention defined in a claim in the patent. See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998) (the court applied a one-way test where both applications were filed the same day).
If a claimed invention in the application would have been obvious over a claimed invention in the patent, there would be an unjustified timewise extension of the patent and an obvious-type double patenting rejection is proper. Unless a claimed invention in the application would have been >anticipated by, or< obvious over a claimed invention in the patent, no double patenting rejection of the obvious-type should be made, but this does not necessarily preclude a rejection based on another type of nonstatutory double patenting (see MPEP § 804, paragraph II.B.2. below). Similarly, even if the application at issue is the earlier filed application, only a one-way determination of obviousness is needed to support a double patenting rejection in the absence of a finding: (A) of administrative delay on the part of the Office causing delay in prosecution of the earlier filed application; and (B) that applicant could not have filed the conflicting claims in a single (i.e., the earlier filed) application. See MPEP § 804, paragraph II.B.1.(b) below. It is noted that the instant application is a co-pending application of ‘962 with the same filing date. It is also noted that both ‘962 and the instant application were filed by the same inventive entity and by a common assignee/owner. Claims 1, 8, and 9 of ‘962 recite almost all the limitations of Claims 1, 7, and 8 of the instant application. 
However, Claim 1 of the instant application, for example, recites the further limitations “acquiring profiling information of a system on which an application program is to be executed,” “generating, based on the profiling information, a cost model,” and “determining, using the cost model, an expected execution time for each of the plurality of compiled graphs.” As per Claim 1 of the instant application, for example, Liu discloses: acquiring profiling information of a system on which an application program is to be executed (paragraph [0018], “The framework 100, which may be implemented within a compiler, comprises, in one embodiment, a profiler 102 and a throughput-guided aggregation and mapping (TGAM) phase 104. The TGAM 104 may be configured to partition an application by aggregating functions into tasks (or aggregates) and map tasks to processors on the chip (emphasis added).”; paragraph [0020], “As shown in FIG. 1A, the TGAM phase 104 follows the profiler 102. The profiler 102, in one embodiment, provides runtime statistics (e.g. frequency of each packet processing function, utilization of communication channels, etc.) [acquiring profiling information of a system on which an application program is to be executed]. In the TGAM phase 104, multiple aggregates are generated and each aggregate is mapped to a suitable processor.”); generating, based on the profiling information, a cost model (paragraph [0020], “As shown in FIG. 1A, the TGAM phase 104 follows the profiler 102. The profiler 102, in one embodiment, provides runtime statistics (e.g. frequency of each packet processing function, utilization of communication channels, etc.). 
In the TGAM phase 104, multiple aggregates are generated and each aggregate is mapped to a suitable processor.”; paragraph [0021], “The TGAM phase 104, in one embodiment, comprises a code size model 110, a throughput-driven cost model 112 […] The throughput-driven cost model 112, in one embodiment, models throughput as well as other factors that have a critical effect on throughput (e.g. communication cost, memory access latency, CPU execution time, and code size, and synchronization cost) [generating, based on the profiling information, a cost model]. The cost model 112 is used by the aggregation and mapping component 114 to improve system throughput (emphasis added).”); and using the cost model (paragraph [0045], “Referring to FIG. 4, at operation 404, execution time of each aggregate is computed and the aggregates are sorted by their respective execution times at operation 406. In one embodiment, execution time of an aggregate is computed utilizing the cost model (H) described above (emphasis added).”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of ‘962 to incorporate the teaching of Liu into ‘962 to include “acquiring profiling information of a system on which an application program is to be executed; generating, based on the profiling information, a cost model; and using the cost model.” The modification would be obvious because one of ordinary skill in the art would be motivated to use a cost model in order to improve system throughput (Liu, paragraph [0021]). And Leopoldseder discloses: determining […] an expected execution time for each of the plurality of compiled graphs (col. 5 lines 3-11, “The execution engine (146) includes functionality to obtain the values of one or more performance metrics while executing a compilation graph. A performance metric measures an aspect of the execution of the compilation graph. 
Examples of performance metrics include: time spent executing a function, time spent executing a loop in a function, number of memory operations in a function, number of memory operations in a loop in a function, number of nodes added or removed from the compilation graph, etc. (emphasis added)”; col. 7 lines 66 and 67, “In Step 206, the versions of the initial compilation graph are executed to obtain values of a performance metric [determining {…} an expected execution time for each of the plurality of compiled graphs].”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of ‘962 to incorporate the teaching of Leopoldseder into ‘962 to include “determining […] an expected execution time for each of the plurality of compiled graphs.” The modification would be obvious because one of ordinary skill in the art would be motivated to obtain a performance metric that measures an aspect of the execution of a compilation graph (Leopoldseder, col. 5 lines 3-11). Thus, Claims 1, 7, and 8 of the instant application are obvious over Claims 1, 8, and 9 of ‘962 and as such are unpatentable for obviousness-type double patenting. Claim 1 of ‘962, as shown below, recites almost all the limitations of Claim 1 of the instant application. The further limitations recited in Claim 1 of the instant application are marked with >...< for the Applicant’s convenience. Claims 8 and 9 of ‘962 are not shown with Claims 7 and 8 of the instant application for the purpose of brevity.

Co-Pending Application No. 18/595,962, Claim 1:
A method performed by at least one first processor, the method comprising:
acquiring a first intermediate representation for a first portion of an application program;
applying compiler passes to the first intermediate representation;
generating, based on the applying compiler passes to the first intermediate representation, a plurality of first candidate compiled graphs associated with the first intermediate representation;
based on an expected execution time for each of the plurality of first candidate compiled graphs, selecting one first sub-optimal graph from among the plurality of first candidate compiled graphs; and
transmitting the first sub-optimal graph to a second processor,
wherein the first sub-optimal graph is executed by the second processor.

Instant Application No. 18/595,911, Claim 1:
A method performed by at least one first processor, the method comprising:
>acquiring profiling information of a system on which an application program is to be executed;<
>generating, based on the profiling information, a cost model;<
acquiring an intermediate representation of at least a portion of the application program;
applying compiler passes to the intermediate representation;
generating, based on the applying the compiler passes, a plurality of compiled graphs associated with the intermediate representation;
>determining, using the cost model, an expected execution time for each of the plurality of compiled graphs;<
selecting a compiled graph of the plurality of compiled graphs based on the expected execution time for each of the plurality of compiled graphs; and
transmitting the selected compiled graph to a second processor,
wherein the selected compiled graph is executed by the second processor.

This is a provisional non-statutory double patenting rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1 and 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over US 11,392,356 (hereinafter “Leopoldseder”) in view of US 2007/0061286 (hereinafter “Liu”). [Examiner’s Remarks: In order for a reference to be proper for use in an obviousness rejection under 35 U.S.C. 103, the reference must be analogous art to the claimed invention. In re Bigio, 381 F.3d 1320, 1325, 72 USPQ2d 1209, 1212 (Fed. Cir. 2004). A reference is analogous art to the claimed invention if: (1) the reference is from the same field of endeavor as the claimed invention (even if it addresses a different problem); or (2) the reference is reasonably pertinent to the problem faced by the inventor (even if it is not in the same field of endeavor as the claimed invention). Note that the claimed invention is generally directed to generating a cost model based on profiling information of the system and determining an expected execution time for a compiled graph using the cost model (specification, page 1, lines 10-12). As for the “same field of endeavor” test, Leopoldseder is generally directed to optimizing a compilation graph (Leopoldseder, col. 2 lines 44 and 45). And as for the “reasonably pertinent” test, Liu is generally directed to partitioning an application utilizing a throughput-driven aggregation and mapping approach (Liu, paragraph [0002]). 
Thus, Leopoldseder and Liu are both analogous art to the claimed invention (even if they address different problems or are not in the same field of endeavor as the claimed invention). See MPEP § 2141.01(a)(I).] As per Claim 1, Leopoldseder discloses: A method performed by at least one first processor (Figure 1A: 106; col. 1 lines 27-29, “In general, in one aspect, one or more embodiments relate to a method […].”), the method comprising: acquiring an intermediate representation of at least a portion of [an] application program (col. 3 lines 48-60, “An initial compilation graph (114) is a graph representation of the syntactic structure of a function (112). The initial compilation graph (114) may further represent semantics of the function (112). For example, executing the initial compilation graph (114) may yield the same result as executing the textual representation of the function (112) in the source code (110). The initial compilation graph (114) may be generated from a function (112) by a compiler (e.g., dynamic compiler (104)). The initial compilation graph (114) includes nodes corresponding to the syntactic constructs of the function (112). The syntactic constructs may include variables, constants, operators, expressions, statements, function invocations, classes, declarations, identifiers, etc. (emphasis added)”; col. 7 lines 19-22, “Initially, in Step 202, a feature is extracted from an initial compilation graph and for an optimization parameter. The dynamic compiler may generate the initial compilation graph from a function in source code [acquiring an intermediate representation of at least a portion of {an} application program] (emphasis added).”); [Examiner’s Remarks: Note that the Applicant’s specification expressly states that “an ‘intermediate representation’ may refer to a graph that is generated to efficiently execute a program and has the same meaning as a program and/or information associated therewith.
The intermediate representation may include one or more nodes and/or one or more edges” (page 8, lines 9-12, emphasis added). Thus, under the broadest reasonable interpretation (BRI), the plain meaning of the limitation “an intermediate representation” includes a graph, which is consistent with the specification. Thus, the limitation “an intermediate representation,” given its plain meaning consistent with the specification, is mapped to Leopoldseder’s initial compilation graph. See MPEP § 2173.01(I).] applying compiler passes to the intermediate representation (col. 3 lines 61-67 to col. 4 lines 1-15, “The dynamic compiler (104) includes functionality to generate an optimized compilation graph (122A) by applying an optimization to the initial compilation graph (114) [applying compiler passes to the intermediate representation]. The optimization is a modification to the initial compilation graph (114) that attempts to improve the performance of the initial compilation graph (114). Although the optimization modifies the syntax of the initial compilation graph (114), executing the initial compilation graph (114) may continue to yield the same result as executing the textual representation of the function (112) in the source code (110). The modification may add, delete, and/or modify one or more nodes of the initial compilation graph (114). For example, the optimization may attempt to increase the execution speed and/or reduce the size of the initial compilation graph (114). Examples of optimizations may include: loop peeling, loop unrolling, constant folding, conditional elimination, escape analysis, scalar replacement, read elimination, strength reduction, etc. 
In one or more embodiments, the optimizations may be obtained from a library of compiler optimizations (e.g., specific to a programming language and/or compiler) [compiler passes] (emphasis added).”); generating, based on the applying the compiler passes, a plurality of compiled graphs associated with the intermediate representation (col. 4 lines 52-58, “The dynamic compiler (104) includes functionality to generate a new optimized compilation graph by applying an optimization to an existing optimized compilation graph. The dynamic compiler (104) includes functionality to generate a series of optimized compilation graphs (122A, 122N) by applying a series of optimizations according to a series of optimization parameters (120A, 120N) [based on the applying the compiler passes].”; col. 7 lines 38-40, “In Step 204, values of the optimization parameter are applied to the initial compilation graph to generate versions of the initial compilation graph [generating, based on the applying the compiler passes, a plurality of compiled graphs associated with the intermediate representation].”); determining […] an expected execution time for each of the plurality of compiled graphs (col. 5 lines 3-11, “The execution engine (146) includes functionality to obtain the values of one or more performance metrics while executing a compilation graph. A performance metric measures an aspect of the execution of the compilation graph. Examples of performance metrics include: time spent executing a function, time spent executing a loop in a function, number of memory operations in a function, number of memory operations in a loop in a function, number of nodes added or removed from the compilation graph, etc. (emphasis added)”; col. 
7 lines 66 and 67, “In Step 206, the versions of the initial compilation graph are executed to obtain values of a performance metric [determining {…} an expected execution time for each of the plurality of compiled graphs].”); selecting a compiled graph of the plurality of compiled graphs based on the expected execution time for each of the plurality of compiled graphs (col. 8 lines 40-45, “In Step 208, a version of the initial compilation graph is selected as an optimized compilation graph using the values of the performance metric. The dynamic compiler may select the version of the initial compilation graph corresponding to the “best” values of the performance metrics obtained in Step 206 above (emphasis added).”); and transmitting the selected compiled graph to a second processor (col. 3 lines 19-22, “As shown in FIG. 1A, the computer system (100) includes a repository (102), a dynamic compiler (104), and computer processor(s) (106) [a second processor].”; col. 9 lines 4-10, “The dynamic compiler may re-execute the process of FIG. 2 for the optimized compilation graph and another optimization parameter to generate another optimized compilation graph, and so on, resulting in a series of one or more optimized compilation graphs each generated by applying an optimization parameter to the previous optimized compilation graph in the series.”), [Examiner’s Remarks: Note that Leopoldseder discloses computer processor(s) and that the dynamic compiler may re-execute the optimized compilation graph. Thus, one of ordinary skill in the art would readily comprehend that the optimized compilation graph is transmitted to one of the computer processors for re-execution.] wherein the selected compiled graph is executed by the second processor (col. 3 lines 19-22, “As shown in FIG. 1A, the computer system (100) includes a repository (102), a dynamic compiler (104), and computer processor(s) (106) [a second processor].”; col.
9 lines 4-10, “The dynamic compiler may re-execute the process of FIG. 2 for the optimized compilation graph and another optimization parameter to generate another optimized compilation graph, and so on, resulting in a series of one or more optimized compilation graphs each generated by applying an optimization parameter to the previous optimized compilation graph in the series.”). [Examiner’s Remarks: Note that Leopoldseder discloses computer processor(s) and that the dynamic compiler may re-execute the optimized compilation graph. Thus, one of ordinary skill in the art would readily comprehend that the optimized compilation graph is executed by one of the computer processors.] Leopoldseder does not explicitly disclose: acquiring profiling information of a system on which an application program is to be executed; generating, based on the profiling information, a cost model; and using the cost model. However, Liu discloses: acquiring profiling information of a system on which an application program is to be executed (paragraph [0018], “The framework 100, which may be implemented within a compiler, comprises, in one embodiment, a profiler 102 and a throughput-guided aggregation and mapping (TGAM) phase 104. The TGAM 104 may be configured to partition an application by aggregating functions into tasks (or aggregates) and map tasks to processors on the chip (emphasis added).”; paragraph [0020], “As shown in FIG. 1A, the TGAM phase 104 follows the profiler 102. The profiler 102, in one embodiment, provides runtime statistics (e.g. frequency of each packet processing function, utilization of communication channels, etc.) [acquiring profiling information of a system on which an application program is to be executed]. In the TGAM phase 104, multiple aggregates are generated and each aggregate is mapped to a suitable processor (emphasis added).”); generating, based on the profiling information, a cost model (paragraph [0020], “As shown in FIG. 
1A, the TGAM phase 104 follows the profiler 102. The profiler 102, in one embodiment, provides runtime statistics (e.g. frequency of each packet processing function, utilization of communication channels, etc.). In the TGAM phase 104, multiple aggregates are generated and each aggregate is mapped to a suitable processor.”; paragraph [0021], “The TGAM phase 104, in one embodiment, comprises a code size model 110, a throughput-driven cost model 112 […] The throughput-driven cost model 112, in one embodiment, models throughput as well as other factors that have a critical effect on throughput (e.g. communication cost, memory access latency, CPU execution time, and code size, and synchronization cost) [generating, based on the profiling information, a cost model]. The cost model 112 is used by the aggregation and mapping component 114 to improve system throughput (emphasis added).”); and using the cost model (paragraph [0045], “Referring to FIG. 4, at operation 404, execution time of each aggregate is computed and the aggregates are sorted by their respective execution times at operation 406. In one embodiment, execution time of an aggregate is computed utilizing the cost model (H) described above (emphasis added).”). As pointed out hereinabove, Leopoldseder and Liu are both analogous art to the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Liu into the teaching of Leopoldseder to include “acquiring profiling information of a system on which an application program is to be executed; generating, based on the profiling information, a cost model; and using the cost model.” The modification would be obvious because one of ordinary skill in the art would be motivated to use a cost model in order to improve system throughput (Liu, paragraph [0021]). 
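For readers less familiar with the subject matter, the Claim 1 pipeline mapped above (acquiring profiling information, generating a cost model from it, producing a plurality of compiled graphs from an intermediate representation, determining a per-graph expected execution time, selecting a graph, and dispatching it) can be sketched in minimal Python. Every name, number, and the toy cost function below is a hypothetical illustration, not taken from the application or the cited references:

```python
from dataclasses import dataclass

@dataclass
class CompiledGraph:
    """One candidate result of applying a combination of compiler passes."""
    passes: tuple[str, ...]
    op_counts: dict[str, int]  # operation type -> count (toy stand-in for a graph)

def acquire_profiling_info(system: str) -> dict[str, float]:
    # Profiling information of the target system: per-operation-type timings.
    # A real profiler would measure these; the numbers here are invented.
    return {"matmul": 5.0, "add": 0.5, "load": 1.0}

def build_cost_model(profile: dict[str, float]):
    # Cost model generated from the profiling information: here, simply a
    # weighted sum of operation counts by profiled per-operation time.
    def expected_time(graph: CompiledGraph) -> float:
        return sum(profile[op] * n for op, n in graph.op_counts.items())
    return expected_time

def compile_candidates(ir: str) -> list[CompiledGraph]:
    # Applying different compiler-pass combinations to the intermediate
    # representation yields a plurality of compiled graphs (toy candidates).
    return [
        CompiledGraph(("fuse",), {"matmul": 2, "add": 1}),
        CompiledGraph(("unroll",), {"matmul": 2, "add": 4, "load": 1}),
    ]

def select_graph(ir: str, system: str) -> CompiledGraph:
    profile = acquire_profiling_info(system)  # acquiring profiling information
    cost = build_cost_model(profile)          # generating a cost model
    candidates = compile_candidates(ir)       # plurality of compiled graphs
    return min(candidates, key=cost)          # select by expected execution time

best = select_graph("toy_ir", "target_system")
print(best.passes)  # the candidate with the lowest modeled execution time
```

In this sketch, the final selection step corresponds to what the examiner maps to Leopoldseder's choice of the "best" version by performance metric, while the profiling-derived cost model corresponds to the further limitation mapped to Liu.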
As per Claim 5, the rejection of Claim 1 is incorporated; and Leopoldseder further discloses: wherein the generating the plurality of compiled graphs comprises generating the plurality of compiled graphs for the intermediate representation using a plurality of combinations of compiler options (col. 4 lines 8-15, “Examples of optimizations may include: loop peeling, loop unrolling, constant folding, conditional elimination, escape analysis, scalar replacement, read elimination, strength reduction, etc. In one or more embodiments, the optimizations may be obtained from a library of compiler optimizations (e.g., specific to a programming language and/or compiler).” and lines 52-58, “The dynamic compiler (104) includes functionality to generate a new optimized compilation graph by applying an optimization to an existing optimized compilation graph. The dynamic compiler (104) includes functionality to generate a series of optimized compilation graphs (122A, 122N) by applying a series of optimizations according to a series of optimization parameters (120A, 120N).”). As per Claim 6, the rejection of Claim 5 is incorporated; and Leopoldseder further discloses: wherein the generating the plurality of compiled graphs comprises: generating, based on the intermediate representation and a number of second processors on which the selected compiled graph can be executed, the plurality of combinations of compiler options (col. 3 lines 19-22, “As shown in FIG. 1A, the computer system (100) includes a repository (102), a dynamic compiler (104), and computer processor(s) (106).”; col. 4 lines 8-15, “Examples of optimizations may include: loop peeling, loop unrolling, constant folding, conditional elimination, escape analysis, scalar replacement, read elimination, strength reduction, etc. 
In one or more embodiments, the optimizations may be obtained from a library of compiler optimizations (e.g., specific to a programming language and/or compiler).” and 52-58, “The dynamic compiler (104) includes functionality to generate a new optimized compilation graph by applying an optimization to an existing optimized compilation graph. The dynamic compiler (104) includes functionality to generate a series of optimized compilation graphs (122A, 122N) by applying a series of optimizations according to a series of optimization parameters (120A, 120N).”). As per Claim 7, the rejection of Claim 1 is incorporated; and Leopoldseder further discloses: [a] non-transitory computer-readable recording medium storing instructions that, when executed by one or more processors, cause performance of the method according to claim 1 (col. 11 lines 15-21, “Software instructions in the form of computer readable program code to perform embodiments disclosed herein may be stored, in whole or in part, temporarily or permanently, on a non-transitory computer readable medium such as a CD, DVD, storage device, a diskette, a tape, flash memory, physical memory, or any other computer readable storage medium.”). Claim 8 is an information processing system claim corresponding to the method claim hereinabove (Claim 1). Therefore, Claim 8 is rejected for the same reason set forth in the rejection of Claim 1. Claims 2 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Leopoldseder in view of Liu as applied to Claim 1 above, and further in view of US 2020/0342286 (hereinafter “Zhang”). [Examiner’s Remarks: In order for a reference to be proper for use in an obviousness rejection under 35 U.S.C. 103, the reference must be analogous art to the claimed invention. In re Bigio, 381 F.3d 1320, 1325, 72 USPQ2d 1209, 1212 (Fed. Cir. 2004). 
A reference is analogous art to the claimed invention if: (1) the reference is from the same field of endeavor as the claimed invention (even if it addresses a different problem); or (2) the reference is reasonably pertinent to the problem faced by the inventor (even if it is not in the same field of endeavor as the claimed invention). Note that the claimed invention is generally directed to generating a cost model based on profiling information of the system and determining an expected execution time for a compiled graph using the cost model (specification, page 1, lines 10-12). And as for the “reasonably pertinent” test, Zhang is generally directed to scheduling a computation graph on heterogeneous computing resources (Zhang, paragraph [0002]). Thus, Zhang is an analogous art to the claimed invention (even if it is not in the same field of endeavor as the claimed invention). See MPEP § 2141.01(a)(I).] As per Claim 2, the rejection of Claim 1 is incorporated; and the combination of Leopoldseder and Liu does not explicitly disclose: wherein the acquiring the profiling information comprises determining, for each of a plurality of types of operations, an execution time in the system according to a plurality of input data sizes. However, Zhang discloses: wherein the acquiring the profiling information comprises determining, for each of a plurality of types of operations, an execution time in the system according to a plurality of input data sizes (paragraph [0049], “The execution time for an operation or a group of operations can be estimated by statically modelling the cost, dynamically profiling the cost from execution experiments or simulations, or using execution history records based on the sizes of data structures, operation type, computing throughput, or memory bandwidth of the system.”). As pointed out hereinabove, Zhang is an analogous art to the claimed invention. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Zhang into the combined teachings of Leopoldseder and Liu to include “wherein the acquiring the profiling information comprises determining, for each of a plurality of types of operations, an execution time in the system according to a plurality of input data sizes.” The modification would be obvious because one of ordinary skill in the art would be motivated to cluster operations together or break a large operation into smaller operations to achieve optimal execution performance and efficiency (Zhang, paragraph [0018]).

As per Claim 4, the rejection of Claim 1 is incorporated; and the combination of Leopoldseder and Liu does not explicitly disclose: wherein the generating the cost model comprises estimating, based on the profiling information, a function that outputs the expected execution time, based on an input data size, for each of a plurality of types of operations.

However, Zhang discloses: wherein the generating the cost model comprises estimating, based on the profiling information, a function that outputs the expected execution time, based on an input data size, for each of a plurality of types of operations (paragraph [0049], “The execution time for an operation or a group of operations can be estimated by statically modelling the cost, dynamically profiling the cost from execution experiments or simulations, or using execution history records based on the sizes of data structures, operation type, computing throughput, or memory bandwidth of the system.”). As pointed out hereinabove, Zhang is analogous art to the claimed invention.
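The cost-model mechanism at issue in Claims 2 and 4 (profile each type of operation at several input data sizes, then estimate a function that maps an input data size to an expected execution time) can be illustrated with a minimal sketch. All function names and the choice of a linear least-squares fit are hypothetical illustrations for exposition only; nothing below is taken from the claims, the specification, or the cited references:

```python
# Hypothetical sketch: build a per-operation-type cost model by profiling
# wall-clock execution times at several input sizes, then fitting a simple
# linear function t ≈ a*n + b that predicts expected execution time.
import time

def profile_op(op, input_sizes, make_input):
    """Measure execution time of `op` once for each input size."""
    samples = []
    for n in input_sizes:
        data = make_input(n)
        start = time.perf_counter()
        op(data)
        samples.append((n, time.perf_counter() - start))
    return samples

def fit_linear_cost(samples):
    """Least-squares fit of t ≈ a*n + b; returns a predictor function."""
    k = len(samples)
    sx = sum(n for n, _ in samples)
    sy = sum(t for _, t in samples)
    sxx = sum(n * n for n, _ in samples)
    sxy = sum(n * t for n, t in samples)
    a = (k * sxy - sx * sy) / (k * sxx - sx * sx)
    b = (sy - a * sx) / k
    return lambda n: a * n + b

# One fitted cost function per operation type: the "cost model".
ops = {"sum": sum, "sorted": sorted}
cost_model = {
    name: fit_linear_cost(
        profile_op(op, [1000, 10000, 100000], lambda n: list(range(n)))
    )
    for name, op in ops.items()
}
# cost_model["sorted"](50000) -> predicted execution time in seconds
```

A production cost model of the kind Zhang's paragraph [0049] describes would likely profile device kernels and account for richer features (memory bandwidth, operation shape, communication); the linear fit above is only the simplest instance of the idea.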
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Zhang into the combined teachings of Leopoldseder and Liu to include “wherein the generating the cost model comprises estimating, based on the profiling information, a function that outputs the expected execution time, based on an input data size, for each of a plurality of types of operations.” The modification would be obvious because one of ordinary skill in the art would be motivated to cluster operations together or break a large operation into smaller operations to achieve optimal execution performance and efficiency (Zhang, paragraph [0018]).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Leopoldseder in view of Liu as applied to Claim 1 above, and further in view of US 2010/0005272 (hereinafter “Vuletic”).

[Examiner’s Remarks: In order for a reference to be proper for use in an obviousness rejection under 35 U.S.C. 103, the reference must be analogous art to the claimed invention. In re Bigio, 381 F.3d 1320, 1325, 72 USPQ2d 1209, 1212 (Fed. Cir. 2004). A reference is analogous art to the claimed invention if: (1) the reference is from the same field of endeavor as the claimed invention (even if it addresses a different problem); or (2) the reference is reasonably pertinent to the problem faced by the inventor (even if it is not in the same field of endeavor as the claimed invention). Note that the claimed invention is generally directed to generating a cost model based on profiling information of the system and determining an expected execution time for a compiled graph using the cost model (specification, page 1, lines 10-12). As for the “reasonably pertinent” test, Vuletic is generally directed to accessing a user virtual memory through a virtual memory window (Vuletic, Abstract).
Thus, Vuletic is analogous art to the claimed invention (even if it is not in the same field of endeavor as the claimed invention). See MPEP § 2141.01(a)(I).]

As per Claim 3, the rejection of Claim 1 is incorporated; and the combination of Leopoldseder and Liu does not explicitly disclose: wherein the acquiring the profiling information comprises: determining a memory copy time according to a plurality of data sizes; and determining at least one of inter-processor communication time or inter-node communication time according to a plurality of data sizes.

However, Vuletic discloses: wherein the acquiring the profiling information comprises: determining a memory copy time according to a plurality of data sizes (paragraph [0062], “For the VMW-based versions, three components of the execution time are measured: (1) hardware execution time-time spent in the coprocessor and in the WMU, required for computation, memory accesses, and virtual memory translations; (2) software execution time for window memory copying-time spent in transferring data from/to user-space memory; and (3) software execution time for the WMU management-time spent in checking which address has generated the fault, selecting a page for eviction, and updating the translation table.” and “Programming is made easier (both in C and VHDL) because no explicit reference to the dual-port memory is required: It is important to stress that all of the experiments are performed by simply changing the input data size, without the need of modifying neither the application code, nor the coprocessor design.”); and determining at least one of inter-processor communication time or inter-node communication time according to a plurality of data sizes (paragraph [0061], “FIG. 7 shows the execution times of the benchmarks. The IDEA results are shown for pure software, for a typical coprocessor (without OS), and for a VMW-based version of the benchmark, with different input data sizes.
The complex IDEA coprocessor core runs at 6 MHz and has 3 pipeline stages.”). As pointed out hereinabove, Vuletic is analogous art to the claimed invention.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Vuletic into the combined teachings of Leopoldseder and Liu to include “wherein the acquiring the profiling information comprises: determining a memory copy time according to a plurality of data sizes; and determining at least one of inter-processor communication time or inter-node communication time according to a plurality of data sizes.” The modification would be obvious because one of ordinary skill in the art would be motivated to add a virtual memory window for virtual memory accesses to a reconfigurable computing platform (Vuletic, paragraph [0077]).

Conclusion

The following prior art, made of record and not relied upon, is considered pertinent to the Applicant’s disclosure: US 2022/0012028 (hereinafter “Yount”) discloses performing automatic compiler optimization to enable streaming-store generation for unaligned contiguous write access.

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Qing Chen, whose telephone number is 571-270-1071. The Examiner can normally be reached on Monday through Friday from 9:00 AM to 5:00 PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, the Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at https://www.uspto.gov/interviewpractice.

If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, Wei Mui, can be reached at 571-272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for more information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO customer service representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Qing Chen/
Primary Examiner, Art Unit 2191

Prosecution Timeline

Mar 05, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591415
INTELLIGENT AND PREDICTIVE MODULES FOR SOFTWARE DEVELOPMENT AND CODING USING ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
2y 5m to grant Granted Mar 31, 2026
Patent 12591416
INTELLIGENT AND PREDICTIVE MODULES FOR SOFTWARE DEVELOPMENT AND CODING USING ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
2y 5m to grant Granted Mar 31, 2026
Patent 12585460
SOFTWARE OBFUSCATION METHOD USING AN OPAQUE PREDICATE BASED ON MULTIPLYING MIXED BOOLEAN-ARITHMETIC EXPRESSIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12572348
Secure Application Acceleration System and Apparatus
2y 5m to grant Granted Mar 10, 2026
Patent 12572339
ACCELERATE INFERENCE PERFORMANCE ON ARTIFICIAL INTELLIGENCE ACCELERATORS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+51.9%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 678 resolved cases by this examiner. Grant probability derived from career allow rate.
