Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(B) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of pre-AIA 35 U.S.C. 112, second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 5-8 and 15-18 are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 5, 6, 15, and 16 recite “command and/or data.” The phrase “command and/or data” fails to inform, with reasonable certainty, a person of ordinary skill in the art of the scope of the limitation because the specification does not define or provide objective examples distinguishing a “command” from “data,” nor does it explain which message fields qualify as one or the other or how behavior differs based on that distinction. Applicant is required to amend the claims to provide objective boundaries (for example, by defining “command” and “data” in the claim language or by reciting specific examples) or to point out where the specification provides such boundaries.
Claims 6 and 16 recite “a pointer to the command and/or data, and/or a flow identifier.” The phrase “a pointer to the command and/or data, and/or a flow identifier identifying a flow of tasks with which the task is associated” fails to inform, with reasonable certainty, those skilled in the art of the scope of the limitation. Specifically, the claim does not define (1) what is meant by “pointer” (e.g., memory address, handle, index, descriptor), (2) what distinguishes “command” from “data” for purposes of the pointer, (3) the meaning and boundaries of “flow of tasks” or how the “flow identifier” maps to such a flow (unique instance, class, session, etc.), and (4) which combinations of the nested “and/or” alternatives are intended.
Claims 7, 8, 17, and 18 depend from claims 6 and 16 and are rejected for the same reasons.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless -
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yang (US 2020/0059435).
Regarding claim 1, Yang discloses
A method comprising (fig. 1-11):
managing performance of a task on a message by a plurality of circuits of a processing device (paragraph [0038]: A plurality of sets of circuits may be cascaded together as a series of function circuits as the illustrated function circuits 0-n at references 154-158; paragraph [0041]: a general-purpose processor is cost efficient to run common high-level tasks such as task scheduling and data fusion. That is because a large number of applications require processors to run such high-level tasks, a general-purpose processor may be optimized to run the common high-level tasks efficiently), the task comprising a sequence of processings to be performed on the message and each circuit of the plurality of circuits performing a processing of the sequence of processings (paragraph [0038]: One data flow may follow one order of sets of function circuits while another data flow may follow a different order of sets of function circuits), the managing performance of the task comprising:
routing, based on the sequence of processings for the task (paragraph [0038]: One data flow may follow one order of sets of function circuits while another data flow may follow a different order of sets of function circuits), first information regarding the task to a first circuit of the plurality of circuits to perform a first processing (paragraph [0067]: At task box 2, the data blocks from the flow classifier 152 are provided to a set of circuits (function circuits 0 in this example), and the set of circuits processes data blocks of each data flow; paragraph [0065]: a source M at reference 118 is from the processor 104, and the processor 104 may have already assigned one or more flow IDs such as FIDM to the data blocks of the one or more data flow; paragraph [0074]: At task box 5, the processor 104 provides the information on the processor 104's processing of what is provided by the function circuits 0 to the next set of circuits (function circuits 1 at reference 156 in this example; paragraph [0078]: The operations in task boxes 3-5 continue to the next set of circuits until the data blocks of the data flows are processed by the series of sets of circuits for the data flows) of the sequence of processings on the message (paragraph [0038]: One data flow may follow one order of sets of function circuits while another data flow may follow a different order of sets of function circuits);
receiving, from the first circuit, an output of the first processing (paragraph [0070]: At task box 3A, the function circuits 0 provide information on the processing of the data blocks of a data flow (e.g., data flow processing information) to the processor 104 based on the data flow's flow ID; paragraph [0078]: The operations in task boxes 3-5 continue to the next set of circuits until the data blocks of the data flows are processed by the series of sets of circuits for the data flows); and
routing, based on the sequence of processings identified for the task (paragraph [0038]: One data flow may follow one order of sets of function circuits while another data flow may follow a different order of sets of function circuits), second information regarding the task to a second circuit of the plurality of circuits to perform a second processing that follows the first processing (paragraph [0074]: At task box 5, the processor 104 provides the information on the processor 104's processing of what is provided by the function circuits 0 to the next set of circuits (function circuits 1 at reference 156 in this example); paragraph [0078]: The operations in task boxes 3-5 continue to the next set of circuits until the data blocks of the data flows are processed by the series of sets of circuits for the data flows) in the sequence of processings (paragraph [0038]: One data flow may follow one order of sets of function circuits while another data flow may follow a different order of sets of function circuits).
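For illustration only (this is not Yang's implementation and forms no part of the rejection), the flow-ordered routing described in paragraph [0038] of Yang, as mapped to claim 1 above, may be sketched as follows; the names `FLOW_ROUTES`, `CIRCUITS`, and the stage functions are hypothetical stand-ins for Yang's function circuits:

```python
# Hypothetical per-stage functions standing in for Yang's function circuits 0-2.
def circuit_0(data):
    return data + ["c0"]

def circuit_1(data):
    return data + ["c1"]

def circuit_2(data):
    return data + ["c2"]

CIRCUITS = {0: circuit_0, 1: circuit_1, 2: circuit_2}

# One data flow may follow one order of circuits while another follows a
# different order (cf. Yang paragraph [0038]: flow 1 uses 0-1-2, flow 2
# skips circuits 0, flow 3 reverses the order of circuits 0 and 2).
FLOW_ROUTES = {
    "flow1": [0, 1, 2],
    "flow2": [1, 2],
    "flow3": [2, 0],
}

def route_message(flow_id, message):
    """Route the message through the circuit sequence for its flow:
    each stage's output is received and routed to the next stage."""
    data = list(message)
    for circuit_id in FLOW_ROUTES[flow_id]:
        data = CIRCUITS[circuit_id](data)  # output of one stage feeds the next
    return data
```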
Regarding claim 11 referring to claim 1, Yang discloses A non-transitory computer-readable storage medium for storing instructions executable by a processor, the instructions comprising: … (FIG. 10).
Regarding claim 20 referring to claim 1, Yang discloses A device comprising: a circuit configured to perform a method comprising … (FIGS. 4, 10).
Regarding claims 2 and 12, Yang discloses
wherein:
the method is performed by a controller (Fig. 4 (Control) Processor 104) communicatively coupled to each circuit of the plurality of circuits, wherein each circuit of the plurality of circuits (paragraph [0038]: A plurality of sets of circuits may be cascaded together as a series of function circuits as the illustrated function circuits 0-n at references 154-158) is connected to the controller via one or more interfaces (paragraph [0072]: The bus or interconnection 410 is between the processor 102 and the processor 104);
wherein each circuit of the plurality of circuits comprises one or more queues (paragraph [0052]: each set of circuits includes a data storage) for output of tasks that are to be passed to one or more other circuits of the plurality of circuits (paragraph [0070]: At task box 3A, the function circuits 0 provide information on the processing of the data blocks of a data flow (e.g., data flow processing information) to the processor 104 based on the data flow's flow ID; paragraph [0071]: Alternatively, instead of task box 3A, at task box 3B, based on the data flow's flow ID, the function circuits 0 may provide information on the processing of the data blocks of the data flow to the next sets of circuits without providing the information to the processor 104); and
routing to a circuit of the plurality of circuits comprises routing to an interface of the circuit from a queue of another circuit of the plurality of circuits (paragraph [0070]: At task box 3A, the function circuits 0 provide information on the processing of the data blocks of a data flow (e.g., data flow processing information) to the processor 104 based on the data flow's flow ID; paragraph [0071]: Alternatively, instead of task box 3A, at task box 3B, based on the data flow's flow ID, the function circuits 0 may provide information on the processing of the data blocks of the data flow to the next sets of circuits without providing the information to the processor 104).
Regarding claims 3 and 13, Yang discloses
wherein: the task is a first type of task, the first type of tasks comprising the sequence of processings performed with the plurality of circuits; and a second type of task comprises a second sequence of processings performed with at least some of the plurality of circuits, the second sequence of processings being different from the sequence of processings (paragraph [0038]: For example, flow 1 may be processed through the flow classifier 152-function circuits 0-function circuits 1-function circuits 2, while flow 2 may be processed through the flow classifier 152-function circuits 1-function circuits 2 (thus skipping function circuits 0). Additionally, flow 3 may be processed through the flow classifier 152-function circuits 2-function circuits 0 (thus skipping function circuits 1 and having the processing order between function circuits 0 and 2 reversed)).
Regarding claims 4 and 14, Yang discloses
wherein:
the task is one of a plurality of tasks, the plurality of tasks organized into at least a first flow of tasks (paragraph [0038]: One data flow may follow one order of sets of function circuits while another data flow may follow a different order of sets of function circuits. For example, flow 1 may be processed through the flow classifier 152-function circuits 0-function circuits 1-function circuits 2, while flow 2 may be processed through the flow classifier 152-function circuits 1-function circuits 2 (thus skipping function circuits 0). Additionally, flow 3 may be processed through the flow classifier 152-function circuits 2-function circuits 0 (thus skipping function circuits 1 and having the processing order between function circuits 0 and 2 reversed); paragraph [0041]: a general-purpose processor is cost efficient to run common high-level tasks such as task scheduling and data fusion);
managing performance of the task comprises selecting, at a time, between one or more tasks for which information is to be routed to circuits of the plurality of circuits for processing (paragraph [0038]: One data flow may follow one order of sets of function circuits while another data flow may follow a different order of sets of function circuits. For example, flow 1 may be processed through the flow classifier 152-function circuits 0-function circuits 1-function circuits 2, while flow 2 may be processed through the flow classifier 152-function circuits 1-function circuits 2 (thus skipping function circuits 0). Additionally, flow 3 may be processed through the flow classifier 152-function circuits 2-function circuits 0 (thus skipping function circuits 1 and having the processing order between function circuits 0 and 2 reversed); paragraph [0041]: a general-purpose processor is cost efficient to run common high-level tasks such as task scheduling and data fusion); and
managing performance of the task comprises ensuring that tasks of the first flow of tasks are processed by circuits of the plurality of circuits according to an order of the tasks in the first flow (paragraph [0038]: One data flow may follow one order of sets of function circuits while another data flow may follow a different order of sets of function circuits. For example, flow 1 may be processed through the flow classifier 152-function circuits 0-function circuits 1-function circuits 2, while flow 2 may be processed through the flow classifier 152-function circuits 1-function circuits 2 (thus skipping function circuits 0). Additionally, flow 3 may be processed through the flow classifier 152-function circuits 2-function circuits 0 (thus skipping function circuits 1 and having the processing order between function circuits 0 and 2 reversed)).
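For illustration only (not Yang's implementation), the scheduling behavior mapped above for claims 4 and 14 — selecting among pending tasks while preserving the order of tasks within a flow — may be sketched as follows; the `Scheduler` class and its method names are hypothetical:

```python
from collections import deque

class Scheduler:
    """Tasks are grouped into flows; the scheduler may select among
    flows at any time, but tasks within a flow are dispatched strictly
    in flow order via a per-flow FIFO queue."""

    def __init__(self):
        self.flows = {}  # flow_id -> FIFO queue of tasks

    def submit(self, flow_id, task):
        self.flows.setdefault(flow_id, deque()).append(task)

    def dispatch(self, flow_id):
        """Select the next task of the given flow; FIFO order within a
        flow ensures tasks are processed according to their flow order."""
        queue = self.flows.get(flow_id)
        return queue.popleft() if queue else None
```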
Regarding claims 5 and 15, Yang discloses
wherein:
the message comprises a command and/or data (paragraph [0047] The processor 104 may provide two types of flow information to the processor 102. One type is flow mapping, which defines which source maps to which flow ID. The other type is flow configuration, which defines whether flow processing information of a set of circuits is to be provided to the processor 104 and how);
the task comprises a task description comprising information regarding performance of the task (paragraph [0049] The flow mapping information indicates how flow identifiers are mapped to flows; paragraph [0053] The flow configuration is used by a set of circuits in the processor 102 to determine whether to provide flow processing information of a data flow to the processor 104 and how); and
routing the first information regarding the task to the first circuit (paragraph [0038]: One data flow may follow one order of sets of function circuits while another data flow may follow a different order of sets of function circuits; paragraph [0067]: At task box 2, the data blocks from the flow classifier 152 are provided to a set of circuits (function circuits 0 in this example), and the set of circuits processes data blocks of each data flow; paragraph [0065]: a source M at reference 118 is from the processor 104, and the processor 104 may have already assigned one or more flow IDs such as FIDM to the data blocks of the one or more data flow) and the second information regarding the task to the second circuit comprises routing, at a time, at least some of the task description at the time (paragraph [0074]: At task box 5, the processor 104 provides the information on the processor 104's processing of what is provided by the function circuits 0 to the next set of circuits (function circuits 1 at reference 156 in this example) … The information provided from the processor 104 to the function circuits 1 may be the updated data blocks of the data flow FIDL and/or one or more values. In one embodiment, the processor 104 provides one or more points/addresses of the information to the function circuits 1 (e.g., providing the one or more points/addresses to the bus or interconnect 410, from which the function circuits 1 retrieve)).
Regarding claims 6 and 16, Yang discloses
wherein:
each of the plurality of circuits is communicatively coupled to a shared memory (paragraph [0075]: when a data storage such as the data storage 116 is the source of a data flow, the processor 104 may provide the information on its processing such as updated data blocks back to the data storage. Since the data storage may provide its data blocks to a bus or interconnection such as the bus or interconnection 410, the updated data blocks are provided to the next set of circuits (the function circuits 1 in this example));
the command and/or the data for the message is stored in a message buffer in the shared memory (paragraph [0060]: Each data flow includes a stream of data; paragraph [0072]: The data blocks may be provided through a bus or an interconnect 410 … the data storage 116 may provide data blocks of data flows for the sets of circuits to process);
the information regarding performance of the task (paragraph [0047]: The processor 104 may provide two types of flow information to the processor 102. One type is flow mapping, which defines which source maps to which flow ID. The other type is flow configuration, which defines whether flow processing information of a set of circuits is to be provided to the processor 104 and how) is stored in the shared memory separate from the command and/or the data (paragraph [0060]: Each data flow includes a stream of data; paragraph [0072]: The data blocks may be provided through a bus or an interconnect 410 … the data storage 116 may provide data blocks of data flows for the sets of circuits to process); and
the task description comprises a pointer to a location storing the information regarding performance of the task, a pointer to the command and/or data, and/or a flow identifier identifying a flow of tasks with which the task is associated (paragraph [0070]: the function circuits 0 provide information on the processing of the data blocks of a data flow (e.g., data flow processing information) to the processor 104 based on the data flow's flow ID … The updated data blocks or the processing results may be provided to the processor 104 through the function circuits 0 forwarding the updated data blocks or the processing results to the processor 104, or through the function circuits providing an address/pointer for the updated data blocks or the processing results for the processor 104 to retrieve).
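For illustration only (not Yang's implementation), the pointer-based task description mapped above for claims 6 and 16 — a descriptor holding no payload itself, only pointers into shared memory plus a flow identifier — may be sketched as follows; all field and function names are hypothetical:

```python
from dataclasses import dataclass

# Stand-in for shared memory accessible to each of the circuits:
# address -> stored object.
SHARED_MEMORY = {}

@dataclass
class TaskDescriptor:
    perf_info_ptr: int  # location of information regarding performance of the task
    payload_ptr: int    # location of the message's command and/or data (message buffer)
    flow_id: str        # flow of tasks with which the task is associated

def store(addr, obj):
    SHARED_MEMORY[addr] = obj

def resolve(desc):
    """A circuit dereferences the descriptor's pointers to retrieve the
    task information and the command/data, which are stored separately."""
    return SHARED_MEMORY[desc.perf_info_ptr], SHARED_MEMORY[desc.payload_ptr]
```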
Regarding claims 7 and 17, Yang discloses
wherein:
the first circuit edits the flow identifier for the task (paragraph [0080]: the processor 104 may update the flow mapping and flow configuration thus adjust the data flow processing at the processor 102; paragraph [0083]: the processor 104 instructs the processor 102 to update flow configuration of one or more sets of circuits at task box 2B; paragraph [0085]: Unlike the second approach, where the processor 104 may interact with a plurality of sets of circuits as a single module, in embodiments of the invention the processor 104 may interact with a set of circuits as necessary by updating the flow mapping of the flow identifier and/or flow configuration of the set of circuits); and
the second information regarding the task (paragraph [0074]: At task box 5, the processor 104 provides the information on the processor 104's processing of what is provided by the function circuits 0 to the next set of circuits (function circuits 1 at reference 156 in this example); paragraph [0078]: The operations in task boxes 3-5 continue to the next set of circuits until the data blocks of the data flows are processed by the series of sets of circuits for the data flows) has a different flow identifier for the task than the first information regarding the task (paragraph [0059]: At task box 1, the multiplexor 172 maps flow IDs to a plurality of data flows, each flow ID being mapped to one data flow. As discussed herein above, a source such as a camera may generate multiple data flows, and since each data flow is assigned to a flow ID, a source may be mapped to multiple flow IDs).
Regarding claims 8 and 18, Yang discloses
wherein routing the first and second information regarding the task to the first circuit and the second circuit, respectively, comprises looking up the flow identifier in a table of information regarding routing of tasks (paragraph [0096]: The flow configuration is included in a configuration table, where each entry of the configuration table corresponds to a flow ID in one embodiment; paragraph [0073]: At task box 4, the processor 104 processes the provided information on the processing of the data blocks of the data flow FIDL. The processing may be performed by an execution unit (not shown) of the processor 104; paragraph [0074]: At task box 5, the processor 104 provides the information on the processor 104's processing of what is provided by the function circuits 0 to the next set of circuits (function circuits 1 at reference 156 in this example); paragraph [0079]: through setting the flow mapping and/or flow configuration, the processor 104 may determine whether or not to involve in the processing of a data flow at one or more set of circuits and how).
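For illustration only (not Yang's implementation), the table-lookup routing mapped above for claims 8 and 18 — each flow identifier indexing an entry of a routing/configuration table that determines which circuit receives the task information next — may be sketched as follows; the table contents and names are hypothetical:

```python
# Hypothetical routing table: flow_id -> ordered list of circuit identifiers,
# analogous to a configuration table with one entry per flow ID.
ROUTING_TABLE = {
    "FID0": ["circuit_a", "circuit_b"],
    "FID1": ["circuit_b", "circuit_a"],
}

def next_circuit(flow_id, current=None):
    """Look up the flow's route in the table; return the first circuit,
    or the circuit following `current` in the flow's sequence
    (None once the sequence is exhausted)."""
    route = ROUTING_TABLE[flow_id]
    if current is None:
        return route[0]
    idx = route.index(current)
    return route[idx + 1] if idx + 1 < len(route) else None
```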
Regarding claims 9 and 19, Yang discloses
wherein the first circuit is a programmable processing circuit (paragraph [0037]: A set of circuits may include an application-specific integrated circuit (ASIC) and/or a field programmable gate array (FPGA)).
Regarding claim 10, Yang discloses
further comprising: receiving the message from a network (paragraph [0033]: a source may be a data storage such as an illustrated data storage 116 … While the data storage 116 is illustrated within the processor 104, it may be outside of the processor 104 but coupled to the processor 104; paragraph [0109]: a source may be a camera, and the data flow includes images/video data captured by the camera (the data flow may be referred to as a visual data flow)).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Miniskar et al. (US 2025/0103235) discloses “the first processor may execute a first task to generate an output data object in its local memory, and that data object may be required input by a second task scheduled to run on a second processor. For efficient execution of both tasks, it can be desirable to transfer the data object from the first processor's local memory to the second processor's local memory” (paragraph [0003]).
Shin et al. (US 2024/0152391) discloses “The task queue 622 is configured to store task descriptors in sequence, perform dependency checks on the stored task descriptors, and sequentially store task descriptors for which the dependency checks have been completed” (paragraph [0220]) and “the task buffer 621, the first queue group Q1, the dependency checker DPc, the second queue group Q2, and the runtime handle RH may be respectively referred to as a first queue circuit, a dependency checker circuit, a second queue circuit” (paragraph [0221]).
Pope et al. (US 2023/0224261) discloses “The route information may be source based routing which expresses the path that a capsule will take, or may be an indirection to a route table or to a program state (processor) which is controlling the flow of capsules” (paragraph [0119]).
Kuo et al. (US 2022/0036158) discloses “The task list 704 includes a sequence of tasks including neural engine tasks TC1 through TC4 (corresponding to convolution layers C1 through C4) and planar engine tasks TP1 through TP5 (corresponding to pooling layers P1 through P5)” (paragraph [0090]) and “Neural processor circuit 218 stores 1510 a plurality of queues of tasks in a plurality of buffer circuits that include a first buffer circuit and a second buffer circuit. The first buffer circuit and the second buffer circuit may be two of the task buffers 1380” (paragraph [0124]).
Vierimaa (US 10,574,795) discloses “When the first processor sub-module 70 has completed its task, it may output the result of the processing task to the second processor sub-module 71. The second processor sub-module 72 may be configured by the software in the memory area 82 to perform a second, different processing task for the input data stream and/or for the result of the processing task of the first processor sub-module 70” (col. 3, lines 19-26).
Ichiba (US 2019/0026247) discloses “the output control circuit 250 transfers the result data RDT, which is the result of the process executed in the current logic circuit 242, to the next stage logic circuit 272 in the order indicated by the order information NINF corresponding to the executed process B” (paragraph [0152]).
Browne et al. (US 2018/0285154) discloses “if the result of decision block 708 means that the co-processor 180 processes the packet associated with the slot 360, then the co-processor reads (block 712) the ring slot descriptor and the application data and performs (block 716) the job indicated by the ring slot descriptor (encryption, compression, as examples). The co-processor 180 then writes (block 720) the result back to the ring slot and sets the descriptor done field (to indicate completion of the job) before incrementing (block 724) the co-processor slot sequence number or incrementing descriptor offset (whichever is applicable)” (paragraph [0060]).
Hasting et al. (US 2017/0286157) discloses “the scheduling circuit further detects a critical sequences of tasks, schedules those tasks to be processed by a single destination processing core, and, upon completion of the critical sequence, conducts another load balancing to potentially select a different processing core to process more tasks” (abstract).
Zhang et al. (US 2017/0185449) discloses “routing, by the on-chip router, the data packet to the second processor core according to the identifier of the second processor core in the data packet after the on-chip router obtains the data packet. The data processing task may further include execution sequence information, instructing the second processor core to complete the data processing task in an execution sequence indicated by the execution sequence information” (paragraph [0006]).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SISLEY N. KIM whose telephone number is (571) 270-7832. The examiner can normally be reached M-F 11:30 AM - 7:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Y. Blair, can be reached at (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SISLEY N KIM/Primary Examiner, Art Unit 2196 02/18/2026