Prosecution Insights
Last updated: April 19, 2026
Application No. 18/117,204

METHOD AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM AND APPARATUS FOR ANALYZING ALGORITHMS DESIGNED FOR RUNNING ON NETWORK PROCESSING UNIT

Final Rejection (§101, §103)
Filed: Mar 03, 2023
Examiner: HU, SELINA ELISA
Art Unit: 2193
Tech Center: 2100 — Computer Architecture & Software
Assignee: Airoha Technology (Suzhou) Limited
OA Round: 2 (Final)
Grant Probability: 67% (Favorable)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67%, above average (2 granted / 3 resolved; +11.7% vs TC avg)
Interview Lift: strong, +100.0% (resolved cases with interview)
Avg Prosecution: 3y 3m (32 currently pending)
Total Applications: 35 (across all art units)

Statute-Specific Performance

§101: 24.4% (-15.6% vs TC avg)
§103: 53.5% (+13.5% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)
Tech Center averages are estimates; based on career data from 3 resolved cases.

Office Action

Rejections under §101 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to applicant's amendment filed on 12/10/2025. Claims 1-20 are pending and examined.

Response to Arguments

Applicant's arguments filed 12/10/2025 with respect to 35 U.S.C. 101 have been fully considered but they are not persuasive. Applicant argued that claims 1, 8, and 14 are directed to a “specific, unconventional and technological way to improve an algorithm analysis method for a Network Processing Unit (NPU) to predict the performance of the algorithm in advance based on the simulated execution of instructions.” Examiner respectfully disagrees; see the 35 U.S.C. 101 rejections below for a detailed analysis. The amendments to the claim reflect the modification of the limitation “generating, by the processing unit, an execution-cost statistics table according to the instruction classification table and an instruction cost table to predict performance of the algorithm running on the NPU, thereby enabling the algorithm to be optimized according to content of the execution-cost statistics table”; however, a specific, unconventional and technological improvement is not presented in the claim language or in cited specification paragraph 22. The amended claim language does not specify how the execution-cost statistics table, instruction classification table, and/or instruction cost table predict the performance of the algorithm running on the NPU, as the tables themselves do not optimize the algorithm. Therefore, the 35 U.S.C. 101 rejections of claims 1-20 are maintained.

Applicant's arguments filed 12/10/2025 with respect to 35 U.S.C. 103 have been fully considered but they are not persuasive.
Applicant argued that “Wang does not teach or suggest that the NPU is associated with the virtual machine in any means or element.” Examiner respectfully disagrees; see the 35 U.S.C. 103 rejections below for a detailed analysis. Wang teaches in paragraph 113, as cited below, “… any compute instance may be any one or a combination of computing resources of different granularities such as a virtual machine, a container, a thread, and a process, or may be processors such as a CPU, a GPU, and an NPU…” Therefore, the compute instance, which can be any one or a combination of a virtual machine and an NPU that processes some processing tasks of the app, correlates to loading and executing an executable program file comprising an algorithm that can be executed by the NPU on a virtual machine.

Applicant further argued that “Chiao does not and cannot teach or suggest the operation of generating, by the processing unit, an instruction classification table during an execution of the executable program file on the virtual machine, wherein the instruction classification table stores information about a plurality of instructions that have been executed on the virtual machine, and which instruction category each instruction is related to, as recited in claims 1, 8, and 14.” Examiner has explained in the detailed analysis below that Chiao alone does not explicitly teach that the instruction classification table is generated during an execution of the executable program file on a virtual machine, that the instructions are executed by a virtual machine, or that the instruction classification table and instruction cost table are used to predict performance of the algorithm running on the NPU. However, generating instruction tables during the execution of an executable program file is a popular method of generating instruction tables, as evidenced by Morris below. Additionally, virtual machines are a popular method of executing program instructions and files, as evidenced by Morris below.
Lastly, using cost tables to predict the performance of algorithms is a popular method of performance optimization, as evidenced by Morris below, and running algorithms on an NPU is a popular method of algorithm execution, as evidenced by Wang below.

Applicant further argued that “Chiao does not and cannot teach or suggest the operation of generating, by the processing unit, the execution-cost statistics table according to the instruction classification table and the instruction cost table to predict performance of the algorithm running on the NPU, thereby enabling the algorithm to be optimized according to content of the execution-cost statistics table, wherein the instruction cost table stores a plurality of costs, in which each cost is related to a designated instruction category, as recited in claims 1, 8, and 14” and that “Morris cannot overcome Chiao's deficiency of the execution-cost statistics table storing a summarized cost of executed instructions for each instruction category, as recited in claims 1, 8, and 14.” Examiner interprets the one or more instructions as a portion of the source code of Morris as an instruction category, and the data store containing metadata which stores a load cost for a specified portion of the source code of Morris as the execution-cost statistics table storing a summarized cost for each category. Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to combine Chiao with Morris because determining performance costs across various operating systems or resources can be used to reduce the cost of executing one or more instructions and to identify candidate operating environments based on their accessibility, availability, and compatibility with the instructions.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claims 1, 8 and 14 recite: A method for analyzing an algorithm designed for running on a network processing unit (NPU), performed by a processing unit, comprising: loading and executing, by the processing unit, an executable program file on a virtual machine, wherein the executable program file comprises the algorithm that can be executed by the NPU; generating, by the processing unit, an instruction classification table during an execution of the executable program file on the virtual machine, wherein the instruction classification table stores information about a plurality of instructions that have been executed on the virtual machine, and which instruction category each instruction is related to; and generating, by the processing unit, an execution-cost statistics table according to the instruction classification table and an instruction cost table to predict performance of the algorithm running on the NPU, thereby enabling the algorithm to be optimized according to content of the execution-cost statistics table, wherein the instruction cost table stores a plurality of costs, in which each cost is related to a designated instruction category, wherein the execution-cost statistics table stores a summarized cost of executed instructions for each instruction category.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes. Claim 1 is a process. Claim 8 is a manufacture. Claim 14 is a machine.

Step 2A, Prong I: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes: an abstract idea.
The ‘generating’ limitation in #2 above, as claimed and under broadest reasonable interpretation (BRI), is a mental process that covers performance of the limitation in the mind. The limitation “generating” in the context of this claim encompasses a person analyzing, evaluating, or determining an instruction classification table, including comparison or judgement. The ‘generating’ limitation in #3 above, as claimed and under broadest reasonable interpretation (BRI), is a mental process that covers performance of the limitation in the mind. The limitation “generating” in the context of this claim encompasses a person analyzing, evaluating, or determining an execution-cost statistics table according to the instruction classification table and instruction cost table, including comparison or judgement.

Step 2A, Prong II: Does the claim recite additional elements that integrate the judicial exception into a practical application? No.

The ‘loading’ limitation in #1 above, as claimed and under broadest reasonable interpretation (BRI), is an additional element as “apply it” that is mere instructions to apply an exception. The limitation “loading” in the context of this claim encompasses merely loading and executing an executable program file. See MPEP 2106.05(f).

Additionally, one or more of the claims recite the following additional elements: program code (Claim 8); processing unit (Claims 1 and 8); network processing unit (Claims 1, 8 and 14). These additional elements are recited at a high level of generality (i.e., as generic computer components) such that they amount to no more than components comprising mere instructions to apply the exception. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No.
As discussed above with respect to integration of the abstract idea into a practical application, the aforementioned additional elements amount to no more than components for obtaining or gathering data and comprising mere instructions to apply the exception, as seen in MPEP 2106.05(f). Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.

Claims 2, 9 and 15 merely further describe the virtual machine of Claims 1, 8 and 14 respectively. Claims 3, 10 and 16 merely further describe the ONU router of Claims 2, 9 and 15 respectively. Claims 5, 12 and 18 merely further describe the cost of Claims 1, 8 and 14 respectively. Claims 6, 13 and 19 merely further describe the summarized cost of executed instructions of Claims 5, 12 and 18 respectively. Claims 7 and 20 merely further describe the instruction categories of Claims 1 and 14 respectively. None of these claims includes additional elements that integrate the abstract idea into a practical application or that are sufficient to amount to significantly more than the judicial exception.

Therefore, Claims 1-3, 5-10, 12-16 and 18-19 are directed to an abstract idea without significantly more.
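As a reading aid, the three-table flow recited in independent Claims 1, 8 and 14 above, together with the per-category summation later recited for Claims 6, 13 and 19 (totalCost#i = Cnt#i * Cost#i), can be sketched in a few lines of Python. This is an illustrative sketch only; the function name, instruction categories, and cycle costs are hypothetical and do not come from the application or the cited art.

```python
from collections import Counter

def execution_cost_statistics(classification_table, instruction_cost_table):
    """Summarize per-category execution cost: totalCost#i = Cnt#i * Cost#i.

    classification_table: list of (instruction, category) pairs recorded
        while the executable program file runs on the virtual machine.
    instruction_cost_table: dict mapping category -> theoretical cost
        (e.g., clock cycles) of one instruction in that category.
    Returns a dict mapping category -> summarized cost.
    """
    # Cnt#i: count executed instructions per category.
    counts = Counter(category for _, category in classification_table)
    # totalCost#i = Cnt#i * Cost#i for each category i.
    return {
        category: counts[category] * instruction_cost_table[category]
        for category in counts
    }

# Hypothetical trace: instruction names, categories, and costs are invented.
trace = [("ld r1", "memory"), ("add r2", "alu"), ("ld r3", "memory")]
costs = {"memory": 4, "alu": 1}
stats = execution_cost_statistics(trace, costs)
# stats == {"memory": 8, "alu": 1}
```

Under this reading, the instruction classification table is the recorded (instruction, category) trace, the instruction cost table is the per-category cost map, and the execution-cost statistics table is the returned summary.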
Claims 4, 11 and 17 recite: wherein the algorithm runs on the NPU to repeatedly receive messages through an input port of the ONU router and transmit the messages out to a target equipment through an output port of the ONU router.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes. Claim 4 is a process. Claim 11 is a manufacture. Claim 17 is a machine.

Step 2A, Prong II: Does the claim recite additional elements that integrate the judicial exception into a practical application? No.

The ‘receiving’ limitation in #4 above, as claimed and under broadest reasonable interpretation (BRI), is an additional element that is insignificant extra-solution activity. The limitation “receiving” in the context of this claim encompasses mere data gathering. See MPEP 2106.05(g). The ‘transmitting’ limitation in #5 above, as claimed and under broadest reasonable interpretation (BRI), is an additional element as “apply it” that is mere instructions to apply an exception. The limitation “transmitting” in the context of this claim encompasses merely transmitting messages to a target equipment. See MPEP 2106.05(f).

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. As discussed above with respect to integration of the abstract idea into a practical application, the aforementioned additional elements amount to no more than components for obtaining or gathering data and comprising mere instructions to apply the exception, as seen in MPEP 2106.05(g) and (f). Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.

Therefore, Claims 4, 11 and 17 are directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 5-6, 8, 12-14, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Chiao et al. (U.S. Patent No. US 9672041 B2), hereinafter “Chiao,” in view of Wang et al. (U.S. Patent Application Publication No. US 20230236896 A1), hereinafter “Wang,” and Morris et al. (U.S. Patent Application Publication No. US 20160154673 A1), hereinafter “Morris.”

With regards to Claim 1, Chiao teaches: generating, by the processing unit, an instruction classification table for the executable program file, wherein the instruction classification table stores information about a plurality of instructions (Fig. 1-2, col. 2, lines 41-45 and 59-66, col. 3, lines 65-67 and col. 4, line 1, “The aforementioned method puts long-length frequently used instruction groups into an instruction table. Each of the instruction groups may include one or more instructions in sequential order in a program code to be executed by the aforementioned processor… At step 105, analyze a program code to find one or more instruction groups in the program code according to a preset condition. In this embodiment, the preset condition is that the count of occurrences in the program code of each of the instruction groups must be larger than or equal to a first threshold value and the size (in bits) of each of the instruction groups must be smaller than or equal to a second threshold value… For example, FIG.
2 shows the instruction table 230 generated based on the program code 210. The numbers on the left side of the instruction table 230 are the indices of the entries of the instruction table 230.” The instruction table generated based on program code, which includes the count of occurrences of each instruction group, correlates to generating an instruction classification table for the executable program file which stores information about the plurality of instructions), and which instruction category each instruction is related to (Fig. 1-2, col. 4, lines 16-18, “It can be seen from FIG. 1 and FIG. 2 that the method in FIG. 1 puts the first X of the instruction groups sorted in step 110 into the instruction table.” The instruction groups in the instruction table correlate to the instruction categories each instruction is related to); and generating, by the processing unit, an execution-cost statistics table according to the instruction classification table and an instruction cost table (Col. 3, lines 41-60, “At step 125, check whether the instruction table is already full or not, and check whether the instruction list is empty or not. The flow terminates when the instruction table is full or the instruction list is empty. The flow proceeds to step 130 when the instruction table still has vacancy and the instruction list is not empty. At step 130, get the first instruction group G from the instruction list. At step 135, check whether the value of the cost function of the instruction group G is larger than a third threshold value or not. In this embodiment, the third threshold value is 0. The third threshold value may be any other integer value in the other embodiments of the present invention. The flow terminates when the value of the cost function of the instruction group G is smaller than or equal to the third threshold value. The flow proceeds to step 140 when the value of the cost function of the instruction group G is larger than the third threshold value. At step 140, put the instruction group G into entry I of the instruction table.” The instruction list and instruction table, being used and updated respectively to produce an updated instruction table, correlate to generating an execution-cost statistics table according to the instruction classification table and instruction cost table), wherein the instruction cost table stores a plurality of costs, in which each cost is related to a designated instruction category (Col. 3, lines 23-24, 35-40 and 47-51, “Next, at step 110, sort the instruction groups found in step 105 in descending order of the cost function of each instruction group… Next, at step 115, construct an instruction list based on the result of the aforementioned sorting. The instruction list includes all of the instruction groups and the instruction groups in the instruction list retain their sorted order. Therefore, the first instruction group of the instruction list is the instruction group whose cost function value is the largest… At step 130, get the first instruction group G from the instruction list. At step 135, check whether the value of the cost function of the instruction group G is larger than a third threshold value or not.” The cost function being calculated for each group and added to an instruction list which sorts the instruction groups by cost correlates to the instruction cost table storing costs related to a designated instruction category).

Chiao does not explicitly teach that the instruction classification table is generated during an execution of the executable program file on a virtual machine, that the instructions are executed by a virtual machine, or that the instruction classification table and instruction cost table are used to predict performance of the algorithm running on the NPU. However, generating instruction tables during the execution of an executable program file is a popular method of generating instruction tables, as evidenced by Morris (Fig.
20, paragraphs 214-215, “In an aspect, a translator may include logic that generates a symbol table and/or otherwise identifies references that may be included in generating a symbol table as described in more detail below. For example, representation identifier 1902 may receive a name of a file including a translation of source code, such as illustrated by source code 2000 in FIG. 20. The translation may include object code generated by a compiler. The compiler may provide a symbol table in the file with the object and/or as separate associated data. FIG. 20 illustrates one possible symbol table 2002 that may be generated from source code 2000… In FIG. 20, symbol table 2002 includes a “draw” symbol row 2004 generated based on statement 2006 in source code 2000. Statement 2006 and symbol row 2004 identify an external symbol that an executable translation of source code 2000 is linked to in order to send presentation data to a presentation space of an operating environment executing the executable translation.” The symbol table generated based on the execution of the executable translation of the source code, which is generated by the compiler, correlates to generating an instruction table during the execution of the executable program file). 
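For orientation, the Chiao table-construction flow quoted in the Claim 1 mapping above (steps 105 through 140, with the cost function CC_K*(L_K-N)-M and a third threshold of 0) can be sketched roughly as follows. This is an illustrative reading of the quoted passages only; group discovery is elided, and the function name and sample values are invented.

```python
def build_instruction_table(groups, first_thr, second_thr, eit_len, table_size):
    """Rough sketch of Chiao's flow: filter candidate groups (step 105),
    sort them by cost function (steps 110-115), then fill the table while
    it has vacancy and the cost exceeds the third threshold of 0
    (steps 125-140).

    groups: list of (occurrences CC_K, length_in_bits L_K) candidates.
    """
    # Step 105: occurrences >= first threshold, size <= second threshold.
    candidates = [(cc, lk) for cc, lk in groups
                  if cc >= first_thr and lk <= second_thr]

    # Cost function as quoted: CC_K * (L_K - N) - M, where N is the EIT
    # instruction length and M is the second threshold value.
    def cost(group):
        cc, lk = group
        return cc * (lk - eit_len) - second_thr

    # Steps 110-115: instruction list in descending order of cost.
    instruction_list = sorted(candidates, key=cost, reverse=True)

    # Steps 125-140: fill while the table has vacancy and cost > 0.
    table = []
    for group in instruction_list:
        if len(table) >= table_size or cost(group) <= 0:
            break
        table.append(group)
    return table
```

With hypothetical inputs, build_instruction_table([(5, 48), (2, 40), (9, 16)], 2, 64, 16, 2) keeps only the first group, since the second and third have non-positive cost after sorting.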
Additionally, virtual machines are a popular method of executing program instructions and files, as evidenced by Morris (Paragraph 113, “As used in the present disclosure the term “operating environment resource” (OER) with respect to a particular operating environment refers to any entity in the operating environment that includes data, logic, and/or hardware that is accessed, directly and/or indirectly, in executing an instruction encoded in logic in the operating environment… Exemplary OERs include… a virtual machine.” The OER, which includes a virtual machine that is used to execute an instruction encoded in logic in the operating environment, correlates to executing instructions and executable program files with a virtual machine). Lastly, using cost tables to predict the performance of algorithms is a popular method of performance optimization, as evidenced by Morris below (Paragraphs 207, 390 and 392), and running algorithms on an NPU is a popular method of algorithm execution, as evidenced by Wang below (Paragraph 113).

Chiao does not explicitly teach: A method for analyzing an algorithm designed for running on a network processing unit (NPU), performed by a processing unit, comprising: loading and executing, by the processing unit, an executable program file on a virtual machine, wherein the executable program file comprises the algorithm that can be executed by the NPU; thereby enabling the algorithm to be optimized according to content of the execution-cost statistics table; wherein the execution-cost statistics table stores a summarized cost of executed instructions for each instruction category.
However, Wang teaches: A method for analyzing an algorithm designed for running on a network processing unit (NPU), performed by a processing unit (Paragraph 113, “In this embodiment of this application, the plurality of compute instances are deployed on at least one site managed by the cloud resource scheduling system, and any compute instance may be used to process some processing tasks of the APP. In addition, any compute instance may be any one or a combination of computing resources of different granularities such as a virtual machine, a container, a thread, and a process, or may be processors such as a CPU, a GPU, and an NPU.” The compute instance which includes an NPU that processes some processing tasks of the app correlates to a method for analyzing an algorithm designed for running on an NPU), comprising: loading and executing, by the processing unit, an executable program file on a virtual machine, wherein the executable program file comprises the algorithm that can be executed by the NPU (Paragraph 113, “In this embodiment of this application, the plurality of compute instances are deployed on at least one site managed by the cloud resource scheduling system, and any compute instance may be used to process some processing tasks of the APP. In addition, any compute instance may be any one or a combination of computing resources of different granularities such as a virtual machine, a container, a thread, and a process, or may be processors such as a CPU, a GPU, and an NPU.” The processing tasks in the app correlate to the algorithm to be executed by the NPU, and the app correlates to the executable program file on the virtual machine. 
The compute instance, which includes a virtual machine with an NPU that processes some processing tasks of the app, correlates to loading and executing an executable program file comprising an algorithm that can be executed by the NPU on a virtual machine); Morris further teaches: thereby enabling the algorithm to be optimized according to content of the execution-cost statistics table (Paragraphs 207, 390 and 392, “For example, a criterion may be based on a cost of loading an object code translation of a code library accessed in performing the one or more instructions. Each candidate may have metadata that identifies a load cost, which may be measured in time to load, a measure of power and/or energy used in loading, and/or a measure of risk associated with a provider of the respective candidates. One or more candidates may meet the criterion. Metadata may be stored in a data store, as illustrated by operating environment data store 1107. Operating environment data store 1107 may be included in a persistent memory, such as a disk drive, and/or in a processor memory… Selecting one of the operating environments to perform a particular operation may include determining that an operating environment that meets the selection criterion is at least one of a best selection, a random selection, and a next selection based on, for example, a specified order of multiple operating environments that meet the selection criterion… Whether a selection criterion is met may be based on a cost for accessing an OER included in performing the one or more instructions specified in the source code.” Selecting one of the operating environments to perform a particular operation correlates to the algorithm.
Selecting a particular operating environment to perform an operation based on the cost, which is stored in the metadata of the data store, correlates to optimizing the algorithm according to the content of the execution-cost statistics table) wherein the execution-cost statistics table stores a summarized cost of executed instructions for each instruction category (Paragraphs 203 and 207, “The one or more instructions may be specified by a line of one or more lines of code or a portion thereof in the source code… For example, a criterion may be based on a cost of loading an object code translation of a code library accessed in performing the one or more instructions. Each candidate may have metadata that identifies a load cost, which may be measured in time to load, a measure of power and/or energy used in loading, and/or a measure of risk associated with a provider of the respective candidates. One or more candidates may meet the criterion. Metadata may be stored in a data store, as illustrated by operating environment data store 1107. Operating environment data store 1107 may be included in a persistent memory, such as a disk drive, and/or in a processor memory.” The one or more instructions as a portion of the source code correlates to an instruction category. The data store containing metadata which stores a load cost for a specified portion of the source code correlates to the execution-cost statistics table storing a summarized cost for each category). 
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to combine Chiao with a method for analyzing an algorithm designed for running on a network processing unit (NPU), performed by a processing unit, comprising loading and executing an executable program file on a virtual machine, wherein the executable program file comprises the algorithm that can be executed by the NPU, as taught by Wang, because varied compute instances used to process processing tasks of different granularities can adhere to resource requirement information from the client. This allows resource scheduling systems to allocate an appropriate compute instance for implementing an app or a combination of computing resources (Wang: paragraphs 112-113).

Additionally, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to combine Chiao with thereby enabling the algorithm to be optimized according to content of the execution-cost statistics table, and wherein the execution-cost statistics table stores a summarized cost of executed instructions for each instruction category, as taught by Morris, because determining performance costs across various operating systems or resources can be used to reduce the cost of executing one or more instructions and to identify candidate operating environments based on their accessibility, availability, and compatibility with the instructions (Morris: paragraph 203).

With regards to Claims 8 and 14, the method of Claim 1 performs the same steps as the manufacture and machine of Claims 8 and 14 respectively, and Claims 8 and 14 are therefore rejected using the same rationale set forth above in the rejection of Claim 1.

With regards to Claim 5, Chiao in view of Wang and Morris teaches the method of Claim 1 above.
Morris further teaches: wherein the cost is expressed as a total number of clock cycles (Paragraph 389, “Whether a selection criterion is met may be based on a cost for accessing an OER included in performing the one or more instructions specified in the source code… A cost may be based on a measure of monetary cost, heat, OER utilization such as memory utilization, CPU cycles…” The cost of performing the one or more instructions being based on a measure of CPU cycles correlates to the cost being expressed as a total number of clock cycles). Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Chiao with wherein the cost is expressed as a total number of clock cycles as taught by Morris because the performance cost being based on a measure of power quality and/or energy efficiency increases the flexibility in calculating cost to fit specific criteria to find a match (Morris: paragraphs 389-390). With regards to Claims 12 and 18, the method of Claim 5 performs the same steps as the manufacture and machine of Claims 12 and 18 respectively, and Claims 12 and 18 are therefore rejected using the same rationale set forth above in the rejection of Claim 5. With regards to Claim 6, Chiao in view of Wang, and Morris teaches the method of Claim 5 above. Chiao further teaches: wherein the summarized cost of executed instructions for each instruction category is calculated by a formula as follows: totalCost#i=Cnt#i*Cost#i totalCost#i represents the summarized cost of executed instructions for ith instruction category (Col. 
3, lines 25-26, “In this embodiment, the cost function of each instruction group K is defined as “CC.sub.K*(L.sub.K−N)−M”” The cost function of each instruction group K correlates to summarized cost of executed instructions related to the ith instruction category), Cnt#i represents a total number of executed instructions related to the ith instruction category (Col. 3, lines 25-28, “In this embodiment, the cost function of each instruction group K is defined as “CC.sub.K*(L.sub.K−N)−M”. CC.sub.K is the count of occurrences of the instruction group K in the program code.” The value CC.sub.K representing the count of occurrences of instructions in instruction group K correlates to the total number of executed instructions related to the ith instruction category), Cost#i represents a theoretical cost of the ith instruction category (Col. 3, lines 25-26 and 28-34, “In this embodiment, the cost function of each instruction group K is defined as “CC.sub.K*(L.sub.K−N)−M”… L.sub.K is the length (in bits) of the instruction group K. N is the length (in bits) of the EIT instruction, which is also the length of the shortest instruction set of the processor. M is the aforementioned second threshold value. The cost function means the number of bits saved by replacing an instruction group with its corresponding EIT function.” The value (L.sub.K−N)−M represents the cost as the overall cost function means the number of bits saved by replacing an instruction group, and it is multiplied by the number of occurrences. Therefore, (L.sub.K−N)−M corelates to the theoretical cost of the ith instruction category), i is an integer greater than zero, and less than or equal to N, N represents a total number of instruction categories (Col. 
3, lines 25-26, “In this embodiment, the cost function of each instruction group K is defined as “CC.sub.K*(L.sub.K−N)−M”” The instruction groups being labeled up to K correlates to i being an integer greater than 0 and less than or equal to N, which represents the total number of instruction categories). With regards to Claims 13 and 19, the method of Claim 6 performs the same steps as the manufacture and machine of Claims 13 and 19 respectively, and Claims 13 and 19 are therefore rejected using the same rationale set forth above in the rejection of Claim 6. Claim(s) 2-4, 9-11, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Chiao in view of Wang, Morris and Ghazisaidi et al. (U.S. Patent Application Publication No. US 20130022356 A1), hereinafter “Ghazisaidi.” With regards to Claim 2, Chiao in view of Wang and Morris teaches the method of Claim 1 above. Chiao in view of Wang and Morris does not explicitly teach: wherein the virtual machine creates a virtual environment for simulating hardware components in an Optical Network Unit (ONU) router. However, Ghazisaidi teaches: wherein the method creates an environment for simulating hardware components in an Optical Network Unit (ONU) router (Paragraph 9, “The embodiments of the invention include a network element implementing an optical network unit (ONU) that is configured to improve efficiency in a passive optical network (PON)… The ONU comprises an ingress module, egress module, alternate connection module and network processor.” The passive optical network implementing an optical network unit which comprises an egress module and network processor correlates to a method creating an environment for simulating hardware components in an optical network unit router). Ghazisaidi does not explicitly teach that a virtual machine is creating a virtual environment for simulating hardware components. 
However, virtual machines are a popular host for virtual environments to simulate hardware components as evidenced by Wang above (Paragraph 113, “In this embodiment of this application, the plurality of compute instances are deployed on at least one site managed by the cloud resource scheduling system, and any compute instance may be used to process some processing tasks of the APP. In addition, any compute instance may be any one or a combination of computing resources of different granularities such as a virtual machine, a container, a thread, and a process, or may be processors such as a CPU, a GPU, and an NPU.” The compute instance which includes a virtual machine with an NPU correlates to a virtual machine simulating hardware components). Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Chiao with wherein the method creates an environment for simulating hardware components in an Optical Network Unit (ONU) router as taught by Ghazisaidi because the optical network unit improves cost and energy efficiency for the passive optical network by entering a sleep mode that disables communication with the optical line terminal over the optical line to reduce energy consumption when the ONU is idle (Ghazisaidi: paragraph 9). With regards to Claim 3, Chiao in view of Wang, Morris and Ghazisaidi teaches the method of Claim 2 above. Ghazisaidi further teaches: wherein the ONU router comprises the NPU (Paragraph 9, “The ONU comprises an ingress module, egress module, alternate connection module and network processor.” The optical network unit which comprises a network processor correlates to the ONU router comprising the NPU), Wang further teaches: and the processing unit is installed in an analysis equipment other than the ONU router (Paragraph 44, “According to an eighth aspect, an embodiment of this application provides a chip system, including a processor. 
The processor is coupled to a memory, and the memory is configured to store a program or instructions. The chip system may further include an interface circuit, and the interface circuit is configured to receive code instructions and transmit the code instructions to the processor. When the program or the instructions are executed by the processor, the chip system implements the method in any one of the possible designs of the first aspect or the second aspect.” The chip system including a processor which implements various methods when the program or instructions are executed by the processor correlates to the processing unit being installed in an analysis equipment other than the ONU router). Ghazisaidi does not explicitly teach that a virtual machine is creating the virtual environment for simulating hardware components. However, virtual machines are a popular host for virtual environments to simulate hardware components as evidenced by Wang above (Paragraph 113, “In this embodiment of this application, the plurality of compute instances are deployed on at least one site managed by the cloud resource scheduling system, and any compute instance may be used to process some processing tasks of the APP. 
In addition, any compute instance may be any one or a combination of computing resources of different granularities such as a virtual machine, a container, a thread, and a process, or may be processors such as a CPU, a GPU, and an NPU.” The compute instance which includes a virtual machine with an NPU correlates to a virtual machine simulating hardware components). Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Chiao with wherein the method creates a virtual environment for simulating hardware components in an Optical Network Unit (ONU) router as taught by Ghazisaidi because the optical network unit improves cost and energy efficiency for the passive optical network by entering a sleep mode that disables communication with the optical line terminal over the optical line to reduce energy consumption when the ONU is idle (Ghazisaidi: paragraph 9). Additionally, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Chiao with and the processing unit is installed in an analysis equipment other than the ONU router as taught by Wang because the chip system can support multiple processors which can be implemented by hardware or software, providing greater flexibility. Memory can also be integrated with the processor or disposed separately from the processor (Wang: paragraphs 45-46). With regards to Claims 10 and 16, the method of Claim 3 performs the same steps as the manufacture and machine of Claims 10 and 16 respectively, and Claims 10 and 16 are therefore rejected using the same rationale set forth above in the rejection of Claim 3. With regards to Claim 4, Chiao in view of Wang, Morris and Ghazisaidi teaches the method of Claim 3 above. 
Ghazisaidi further teaches: wherein the algorithm runs on the NPU to repeatedly receive messages through an input port of the ONU router (Paragraph 10, “The network processor is configured to execute a quality of service module, an AG-ONU monitor module, a traffic forwarding module and an ONU management module. The quality of service module is configured to check whether the received data traffic for the ONU has a high priority and low bandwidth requirement.” The network processor executing a quality of service module to analyze received data traffic for the ONU correlates to the algorithm running on the NPU to repeatedly receive messages through an input port of the ONU router) and transmit the messages out to a target equipment through an output port of the ONU router (Paragraph 10, “The network processor is configured to execute a quality of service module, an AG-ONU monitor module, a traffic forwarding module and an ONU management module… The traffic forwarding module is configured to process the received data traffic having the high priority and low data bandwidth requirement that can be serviced by the alternate connection by transmitting the received data traffic for the ONU to the AG-ONU over the PON to be forwarded to the ONU over the alternate connection. The traffic forwarding module is configured to process the received data traffic having a low priority or high data bandwidth requirement by transmitting a control packet to the AG-ONU via the PON to be forwarded by the AG-ONU to the ONU over the alternate connection and transmitting the data traffic for the ONU to the ONU over the optical line based on a grant.” The traffic forwarding module of the network processor forwarding received data traffic from the ONU to the AG-ONU correlates to transmitting messages out to a target equipment through an output port of the ONU router). 
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Chiao with wherein the algorithm runs on the NPU to repeatedly receive messages through an input port of the ONU router and transmit the messages out to a target equipment through an output port of the ONU router as taught by Ghazisaidi because the optical network unit improves cost and energy efficiency for the passive optical network by entering a sleep mode that disables communication with the optical line terminal over the optical line to reduce energy consumption when the ONU is idle. Receiving and forwarding data traffic sent to the ONU allows communication to aggregating ONUs via ingress and egress modules, which further improve cost and energy efficiency (Ghazisaidi: paragraphs 9-10). With regards to Claims 11 and 17, the method of Claim 4 performs the same steps as the manufacture and machine of Claims 11 and 17 respectively, and Claims 11 and 17 are therefore rejected using the same rationale set forth above in the rejection of Claim 4. Claim(s) 7 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chiao in view of Wang, Morris and Cischke et al. (U.S. Patent No. US 7386434 B1), hereinafter “Cischke.” With regards to Claim 7, Chiao in view of Wang and Morris teaches the method of Claim 1 above. Chiao in view of Wang and Morris does not explicitly teach: wherein the instruction categories comprise: cache-read instruction; cache-write instruction; SRAM-read instruction; SRAM-write instruction; DRAM-read instruction; DRAM-write instruction; Input/Output (I/O)-read instruction; I/O-write instruction; regular calculation instruction; and special function instruction. However, Cischke teaches: wherein the instruction categories comprise: cache-read instruction (Col. 
10, lines 46-47 and 50-52, “The twelve different categories of functions that the FGP is programmed to generate include… (3) instruction cache reads, which read instructions from the main memory.” The instruction cache read category correlates to the instruction categories comprising cache-read instructions); cache-write instruction (Col. 4, lines 26-36, “A "normal function" is defined as a function that the simulator treats as an input to stimulate the simulated portion of the integrated circuit. For example, an actual cache memory device has a command set to which it responds. Such commands sets typically include (as just one example) a command to write a specific value to a specific cache memory location. In this instance, the description of the function is a "write" command, and the parameters would be (1) the address of the memory location to be written to, and (2) the data to be written to the location.” The normal function which includes a write command to a cache correlates to a cache-write instruction); SRAM-read instruction (Col. 10, lines 46-47 and 53-55, “The twelve different categories of functions that the FGP is programmed to generate include… (5) Commodity I/O operations, which read or write to memory locations in devices not a part of the main memory.” The commodity I/O operations which include read operations to memory locations not part of main memory such as SRAM correlates to SRAM-read instructions); SRAM-write instruction (Col. 10, lines 46-47 and 53-55, “The twelve different categories of functions that the FGP is programmed to generate include… (5) Commodity I/O operations, which read or write to memory locations in devices not a part of the main memory.” The commodity I/O operations which include write operations to memory locations not part of main memory such as SRAM correlates to SRAM-write instructions); DRAM-read instruction (Col. 
6, lines 4-13, “A "read with lock" function simultaneously reads a main memory location and establishes ownership of that location (and the adjacent locations comprising the 8-16 word cache line fetched by the read-with-lock function) using the MESI protocol. In the example provided herein, this memory lock is released by (1) accessing (either by reading from or writing to) any of the 8-16 words of the cache line fetched by the read-with-lock function, and (2) subsequently issuing a lock release function to the specific memory location previously locked.” The read with lock function reading a main memory location which includes DRAM correlates to a DRAM-read instruction); DRAM-write instruction (Col. 5, lines 29-32 and 37-40, “"Leaky writes" refer to processor write functions that force the cache to write data to main memory sooner than it would under other circumstances… As a result, by issuing a leaky write command, the processor is configured to avoid the cache's normal data aging processes and write data to main memory as soon as reasonably possible.” The leaky write command causing the processor to write data to main memory which includes DRAM correlates to a DRAM-write instruction); Input/Output (1O)-read instruction (Col. 10, lines 46-47 and 53-55, “The twelve different categories of functions that the FGP is programmed to generate include… (5) Commodity I/O operations, which read or write to memory locations in devices not a part of the main memory.” The commodity I/O operations which include read operations correlates to I/O read instructions); I/O-write instruction (Col. 10, lines 46-47 and 53-55, “The twelve different categories of functions that the FGP is programmed to generate include… (5) Commodity I/O operations, which read or write to memory locations in devices not a part of the main memory.” The commodity I/O operations which include write operations correlates to I/O write instructions); regular calculation instruction (Col. 
10, lines 46-47 and 57, “The twelve different categories of functions that the FGP is programmed to generate include… (7) per-J (or "bitwise" writes).” The per-J or bitwise writes correlates to regular calculation instructions); and special function instruction (Col. 10, lines 46-47 and 49-50, “The twelve different categories of functions that the FGP is programmed to generate include… (2) lock operations, e.g. those that lock memory locations to prevent access by other processors.” The lock operations which lock memory locations to prevent access by other processors correlates to special function instructions). Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Chiao with wherein the instruction categories comprise: cache-read instruction; cache-write instruction; SRAM-read instruction; SRAM-write instruction; DRAM-read instruction; DRAM-write instruction; Input/Output (I/O)-read instruction; I/O-write instruction; regular calculation instruction; and special function instruction as taught by Cischke because different categories of instructions can be assigned a relative weight or tag address. These weights can be utilized by generated test settings files to indicate the relative frequencies of occurrence of each category to modify function generating programs (Cischke: Col. 10, lines 38-46 and 61-67 and Col. 11, lines 1-7). With regards to Claim 20, the method of Claim 7 performs the same steps as the machine of Claim 20, and Claim 20 is therefore rejected using the same rationale set forth above in the rejection of Claim 7. Prior Art Made of Record The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Kuzmin et al. (U.S. Patent Application Publication No. US 20030196193 A1); teaching a method of breaking down a set of computer code into basic instructions. 
As each code segment is executed, a log tracks how many times the code segment was executed. The log is then analyzed with a set of calibration statistics which specify how much processing time is consumed by each basic instruction, and an overall execution cost is derived for each executed code segment. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SELINA HU whose telephone number is (571)272-5428. The examiner can normally be reached Monday-Friday 8:30-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do can be reached at (571) 272-3721. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SELINA ELISA HU/Examiner, Art Unit 2193 /Chat C Do/Supervisory Patent Examiner, Art Unit 2193
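For readers tracking the §103 mapping to Chiao, the per-category cost aggregation the examiner equates with Chiao's cost function (totalCost#i = Cnt#i * Cost#i, summed over all N instruction categories) can be sketched as follows. The table layouts, category names, and cycle counts below are illustrative assumptions for the claimed instruction classification table and instruction cost table, not values taken from the application:

```python
# Sketch of the claimed execution-cost statistics table, assuming the
# classification table holds executed-instruction counts per category (Cnt#i)
# and the cost table holds a theoretical cost in clock cycles per category
# (Cost#i). All values below are hypothetical.

classification_table = {   # Cnt#i: executed instructions per category
    "cache-read": 120,
    "cache-write": 45,
    "SRAM-read": 300,
    "DRAM-write": 12,
    "regular-calculation": 900,
}

cost_table = {             # Cost#i: theoretical clock cycles per category
    "cache-read": 2,
    "cache-write": 3,
    "SRAM-read": 4,
    "DRAM-write": 30,
    "regular-calculation": 1,
}

def execution_cost_statistics(counts, costs):
    """Compute totalCost#i = Cnt#i * Cost#i per category, plus an overall total."""
    stats = {cat: counts[cat] * costs[cat] for cat in counts}
    stats["total"] = sum(stats.values())  # summed over all N categories
    return stats

stats = execution_cost_statistics(classification_table, cost_table)
print(stats["DRAM-write"])  # 12 * 30 = 360 cycles
print(stats["total"])       # 2835 cycles
```

The per-category totals are what would let an algorithm designer spot the dominant cost contributors (here the hypothetical DRAM writes at 30 cycles each) and optimize the algorithm accordingly, which is the role the claims assign to the execution-cost statistics table.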

Prosecution Timeline

Mar 03, 2023
Application Filed
Sep 03, 2025
Non-Final Rejection — §101, §103
Dec 10, 2025
Response Filed
Dec 23, 2025
Final Rejection — §101, §103
Apr 07, 2026
Applicant Interview (Telephonic)
Apr 07, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585485
Warm migrations for virtual machines in a cloud computing environment
2y 5m to grant Granted Mar 24, 2026
Patent 12563114
CONTENT INITIALIZATION METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Feb 24, 2026


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+100.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
