Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless -
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim 15 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by Zhou (Pub. No. US 2023/0409198).
Claim 15, Zhou teaches “a method for memory access, applied in a multi-core network processing unit (NPU), wherein the multi-core NPU comprises a first core and a second core ([Fig. 11] multiple processing units), the method comprising: in response to the first core requesting memory space allocation ([0294] Step 1200: The memory sharing control device receives a first memory access request sent by a first processing unit in the at least two processing units, where the processing unit is a processor, a core in a processor, or a combination of cores in a processor.), providing a first item from a first allocated queue dedicated to the first core to the first core, wherein the first allocated queue cannot be used by the second core ([0295] Step 1202: The memory sharing control device allocates a first memory from the memory pool to the first processing unit, where the first memory is accessible by a second processing unit in the at least two processing units in another time period. [0011] Optionally, that at least one memory in the memory pool is accessible by different processing units in different time periods means that any two of the at least two processing units can separately access the at least one memory in the memory pool in different time periods. For example, the at least two processing units include a first processing unit and a second processing unit. In a first time period, a first memory in the memory pool is accessed by the first processing unit, and the second processing unit cannot access the first memory. In a second time period, the first memory in the memory pool is accessed by the second processing unit, and the first processing unit cannot access the first memory. 
Optionally, the processor may be a central processing unit (CPU), and one CPU may include two or more cores.); and accessing, by the first core, to memory space indicated by a first memory address range recorded in the first item ([0296] Step 1204: The first processing unit accesses the first memory via the memory sharing control device. [0029] Optionally, memory address information of the first memory includes a start address of the first memory and a size of the first memory [0233] For example, a memory access request includes information such as RESOURCE_ID, address information, and an access attribute that are of a processing unit. RESOURCE_ID is an ID of a combination of cores, the address information is address information of a memory to be accessed, and the access attribute indicates whether the memory access request is a read request or a write request.)”.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 8, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Beckmann (Pub. No. US 2022/0197726) in view of Zhou (Pub. No. US 2023/0409198).
Claim 1, Beckmann teaches “A method for memory access control, performed by a central processing unit (CPU), wherein the CPU is coupled to a network processing unit (NPU) and the NPU comprises a plurality of cores ([Fig. 2] plurality of cores), the method comprising: … dequeuing one or more first items in a shared resource pool starting from a slot that is pointed to by a take index ([0048] However, if the shared message queue is not full, INMC 34 may use an inter-node-put instruction to copy an item (e.g., item 80) from local message queue 24 to shared message queue data buffer 70, as shown at block 332. [Fig. 2] P2 as take index), and enqueuing the one or more first items into the first allocated queue starting from an empty slot that is pointed to by a write index ([0048] In particular, INMC may write the item to the slot indicated by copy of tail index 77. Referring again to FIG. 2, arrows P5.1 and P5.2 show INMC 34 writing item 80 to shared message queue data buffer 70 in L1DC 22B via CCPI 90. As shown at block 334 of FIG. 3, INMC 34 may then remove the current item from local message queue 24. And the process may then return to block 310, with INMC 34 receiving additional items from sender thread 64A and sending those items to the shared message queue, as indicated above. [0050] // write data to shared-message-queue, put into receiver's cache inter-node-put<T>(&q->buffer[tail-index-copy % q->capacity], item), …, so that the memory address range of the RAM has been reserved for the first core ([0053] In one embodiment, the head index and the tail index are larger integers than required for the capacity of the queue. For example, a queue with capacity 64 would require 6-bit integers to address one of 64 locations (i.e. the allocated space for first core to place data item 80), but the data processing system may use at least two additional bits (i.e.
8 bits in this case) for the head index and for the tail index, to allow full and empty conditions to be easily computed by comparing the larger index values. [0099] The machine-readable media for some embodiments may include, without limitation, tangible non-transitory storage components such as magnetic disks, optical disks, magneto-optical disks, dynamic random-access memory (RAM), static RAM, non-volatile RAM (NVRAM))”.
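Examiner's note: for illustration only, the index scheme described in Beckmann [0053], in which the head and tail indices are wider than the queue capacity requires so that full and empty conditions are computed by comparing the larger index values, may be sketched as follows. The code is a hypothetical sketch by the examiner and is not reproduced from the reference.

```c
#include <assert.h>
#include <stdint.h>

#define CAPACITY 64u /* 6 bits would suffice to address one of 64 slots */

/* 8-bit head/tail indices are wider than the 6 bits the capacity requires,
 * so the full/empty tests reduce to simple comparisons of index values. */
typedef struct {
    int buffer[CAPACITY];
    uint8_t head; /* index of next item to dequeue */
    uint8_t tail; /* index of next slot to enqueue */
} shared_queue;

static int queue_empty(const shared_queue *q) { return q->head == q->tail; }

static int queue_full(const shared_queue *q) {
    return (uint8_t)(q->tail - q->head) == CAPACITY;
}

static int enqueue(shared_queue *q, int item) {
    if (queue_full(q)) return 0;
    q->buffer[q->tail % CAPACITY] = item; /* slot = index mod capacity */
    q->tail++;                            /* wraps naturally at 256 */
    return 1;
}

static int dequeue(shared_queue *q, int *item) {
    if (queue_empty(q)) return 0;
    *item = q->buffer[q->head % CAPACITY];
    q->head++;
    return 1;
}
```

Because 256 is a multiple of the capacity 64, the modulo slot mapping remains consistent when the 8-bit indices wrap around.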
However, Beckmann may not explicitly teach details of utilizing the identification of the first core for allocation of memory.
Zhou teaches “obtaining an identification of a first core, wherein the first core requests to allocate memory space ([Fig. 12, 1200-1202] access request of first processing unit results in allocation of memory); determining a first allocated queue from a plurality of allocated queues according to the identification of the first core ([Fig. 11] multiple memory in memory pool (i.e. identified queue of Beckmann). [0029] Optionally, memory address information of the first memory includes a start address of the first memory and a size of the first memory. The first processing unit has an identifier, and the establishing a correspondence between a memory address of the first memory and the first processing unit may be establishing a correspondence between the unique identifier of the first processing unit and the memory address information of the first memory. [0031] virtualize a plurality of virtual memory devices from the memory pool, where a physical memory corresponding to a first virtual memory device in the plurality of virtual memory devices is the first memory; and [0032] allocate the first virtual memory device to the first processing unit. Optionally, the virtual memory device corresponds to a segment of consecutive physical memory addresses in the memory pool. The virtual memory device corresponds to a segment of consecutive physical memory addresses in the memory pool, so that management of the virtual memory device can be simplified. Certainly, the virtual memory device may alternatively correspond to several segments of inconsecutive physical memory addresses in the memory pool. [0033] Optionally, the first virtual memory device may be allocated to the first processing unit by establishing an access control table. 
For example, the access control table may include information such as the identifier of the first processing unit, an identifier of the first virtual memory device, and the start address and the size of the memory corresponding to the first virtual memory device. The access control table may further include permission information of accessing the first virtual memory device by the first processing unit, attribute information of a memory to be accessed (including but not limited to information about whether the memory is a persistent memory), and the like.) wherein each first item comprises a memory address range of a random access memory (RAM), so that the memory address range of the RAM has been reserved for the first core ([0233] For example, a memory access request includes information such as RESOURCE_ID, address information, and an access attribute that are of a processing unit. RESOURCE_ID is an ID of a combination of cores, the address information is address information of a memory to be accessed, and the access attribute indicates whether the memory access request is a read request or a write request. [0296] Step 1204: The first processing unit accesses the first memory via the memory sharing control device. [0029] Optionally, memory address information of the first memory includes a start address of the first memory and a size of the first memory. [0270] Therefore, a range of memory resources that can be shared by the processor is further expanded, so that the memory resources are shared in a larger range, and utilization of the memory resources is further improved.)”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Zhou with the teachings of Beckmann in order to provide a system that utilizes core information for memory allocation. The motivation for applying Zhou's teaching with Beckmann's teaching is to provide a system that allows for design choice. Beckmann and Zhou are analogous art directed towards core processing. Together, Beckmann and Zhou teach every limitation of the claimed invention. Since the teachings were analogous art known before the effective filing date of the claimed invention, one of ordinary skill could have applied the teachings of Zhou with the teachings of Beckmann by known methods and gained expected results.
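Examiner's note: the access control table described in Zhou [0033], which maps the identifier of a processing unit to the virtual memory device and the start address and size of the memory allocated to it, may be illustrated by the following hypothetical sketch. All structure and function names are the examiner's own, not Zhou's.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical access control table entry (field names are illustrative). */
typedef struct {
    uint32_t unit_id;    /* identifier of the processing unit (core) */
    uint32_t device_id;  /* identifier of the virtual memory device  */
    uint64_t start_addr; /* start address of the allocated memory    */
    uint64_t size;       /* size of the allocated memory             */
    uint8_t  writable;   /* permission: nonzero if writes are allowed */
} acl_entry;

/* Return the entry for the requesting unit, or NULL if the unit has no
 * allocation; entries of other units are never returned, so their memory
 * ranges remain unreachable through this lookup. */
static const acl_entry *acl_lookup(const acl_entry *table, size_t n,
                                   uint32_t unit_id) {
    for (size_t i = 0; i < n; i++)
        if (table[i].unit_id == unit_id)
            return &table[i];
    return NULL;
}
```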
Claim 3, 10, the combination teaches the claim wherein Zhou teaches “the method of claim 1, wherein the first core after dequeuing any first item from the first allocated queue through a first application programming interface (API) stores data in or reads data from the memory address range of the RAM, which is recorded in the dequeued first item ([0214] In this embodiment of this application, a processor connected to the memory sharing control device 200 may be any processor that implements a processor function. FIG. 6 is a schematic diagram of a structure of a processor 210 according to an embodiment of this application. As shown in FIG. 6, the processor 210 includes a kernel 601, a memory 602, a peripheral interface 603, and the like. The kernel 601 may include at least one core, and is configured to implement a function of the processor 210. In FIG. 6, two cores (a core 1 and a core 2) are used as an example for description. However, a quantity of cores in the processor 600 is not limited. The processor 600 may further include four, eight, or 16 cores. The memory 602 includes a cache or an SRAM, and is configured to cache read/write data of the core 1 or the core 2. The peripheral interface 603 includes a Serdes interface 6031, a memory controller 6032, an input/output interface, a power supply, a clock, and the like. The Serdes interface 6031 is an interface for connecting the processor 210 and a serial bus. After a memory access request in a parallel signal form initiated by the processor 210 is converted into a serial signal through the Serdes interface 6031, the serial signal is sent to the memory sharing control device 200 via the serial bus. The memory controller 6032 may be a memory controller with a function similar to that of the memory controller shown in FIG. 5. When the processor 210 has a local memory controlled by the processor 210, the processor 210 may implement access control on the local memory via the memory controller 6032.)”.
Rationale to claim 1 is applied here.
Claim 8, “an optical network unit (ONU) router, comprising: a network processing unit (NPU), comprising a plurality of cores ([Fig. 7] 1510); a random access memory (RAM), coupled to the NPU ([Fig. 7] 1510 comprising memory), comprising a shared resource pool and a plurality of allocated queues; and a central processing unit (CPU), coupled to the NPU and the RAM, arranged operably to: obtain an identification of a first core from the first core of the NPU, wherein the first core requests to allocate memory space; determine a first allocated queue from a plurality of allocated queues according to the identification of the first core; and dequeue one or more first items in the shared resource pool starting from a slot that is pointed to by a take index, and enqueue the one or more first items into the first allocated queue starting from an empty slot that is pointed to by a write index, wherein each first item comprises a memory address range of the RAM, so that the memory address range of the RAM has been reserved for the first core” is similar to claim 1 and therefore rejected with the same references and citations.
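Examiner's note: the claimed flow of dequeuing items from a shared resource pool at a take index and enqueuing them into a per-core allocated queue at a write index may be illustrated by the following hypothetical sketch. The structures and names are illustrative only, drawn from the claim language rather than from any cited reference.

```c
#include <assert.h>
#include <stdint.h>

#define POOL_SLOTS 8u /* illustrative size only */

typedef struct { uint64_t start, end; } mem_range; /* one item: a RAM address range */

/* Shared resource pool: a cyclical queue handed out at its take index. */
typedef struct {
    mem_range slots[POOL_SLOTS];
    unsigned take;  /* slot pointed to by the take index */
    unsigned count; /* occupied slots remaining in the pool */
} shared_pool;

/* Per-core allocated queue: filled at its write index. */
typedef struct {
    mem_range slots[POOL_SLOTS];
    unsigned write; /* first empty slot, pointed to by the write index */
} alloc_queue;

/* Dequeue up to n items from the pool and enqueue them into one core's
 * allocated queue, reserving those address ranges for that core. */
static unsigned reserve_for_core(shared_pool *p, alloc_queue *q, unsigned n) {
    unsigned moved = 0;
    while (moved < n && p->count > 0 && q->write < POOL_SLOTS) {
        q->slots[q->write++] = p->slots[p->take];
        p->take = (p->take + 1) % POOL_SLOTS; /* pool is cyclical */
        p->count--;
        moved++;
    }
    return moved;
}
```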
Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Beckmann and Zhou in further view of Smith (Pub. No. US 2017/0371660).
Claims 2 and 9, the combination may not explicitly teach the claim.
Smith teaches “the method of claim 1, wherein the shared resource pool is a cyclical queue, and memory address ranges of any two of all available items in the shared resource pool are not overlapped ([0198] At process block 1420, the issued load and store instructions can be stored in a local memory based on the relative program order of the instructions. For example, the issued load and store instructions can be stored in a local memory of a load-store queue that is organized as a circular buffer with different non-overlapping regions for different instruction blocks. For example, each of the non-overlapping regions can correspond to a different instruction block)”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Smith with the teachings of Beckmann and Zhou in order to provide a system that details address information. The motivation for applying Smith's teaching with the teachings of Beckmann and Zhou is to provide a system that allows for design choice. Beckmann, Zhou, and Smith are analogous art directed towards core processing. Together, Beckmann, Zhou, and Smith teach every limitation of the claimed invention. Since the teachings were analogous art known before the effective filing date of the claimed invention, one of ordinary skill could have applied the teachings of Smith with the teachings of Beckmann and Zhou by known methods and gained expected results.
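Examiner's note: the recited property that memory address ranges of any two available items in the shared resource pool are not overlapped reduces to the standard interval test below. The code is a hypothetical sketch using half-open [start, end) ranges, not code from any cited reference.

```c
#include <assert.h>
#include <stdint.h>

/* Two half-open address ranges [a_start, a_end) and [b_start, b_end)
 * overlap exactly when each one begins before the other one ends. */
static int ranges_overlap(uint64_t a_start, uint64_t a_end,
                          uint64_t b_start, uint64_t b_end) {
    return a_start < b_end && b_start < a_end;
}
```

Under this convention, adjacent ranges that merely share a boundary address do not overlap.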
Claims 4 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Beckmann and Zhou in further view of Gunner (Pub. No. US 2016/0044695).
Claims 4 and 11, the combination teaches the claim, wherein Beckmann teaches “the method of claim 1, comprising: enqueuing second items …into empty slots of the shared resource pool, wherein each second item comprises a memory address range that has been allocated for one corresponding core in the NPU ([0075] Example A1 is a processor package comprising a first core, a local cache in the first core, and an INMC in the first core. The INMC is configured to (a) receive an inter-node message from a sender thread executing on the first core, wherein the message is directed to a receiver thread executing on a second core; (b) in response to receiving the inter-node message, store a payload from the inter-node message in a local message queue in the local cache of the first core; [0053] In one embodiment, the head index and the tail index are larger integers than required for the capacity of the queue. For example, a queue with capacity 64 would require 6-bit integers to address one of 64 locations (i.e. the allocated space for first core to place data item 80))”.
However, the combination may not explicitly teach the origination of the packets.
Gunner teaches “a plurality of recycled queues ([0031] A set of per-CoS queues 110a to 110d queue packets for a plurality of subscriber devices. Each CoS queue 110 includes multiple QBlocks 112. Each QBlock within a CoS queue 110 is a first-in-first-out (FIFO) queue scheduled for availability to a Weighted Fair Queue (WFQ) scheduler 120 at a different time interval. When a QBlock 112 reaches the head of its respective CoS queue 110, the WFQ scheduler 120 transfers one-or-more packets from the QBlock 112 for transmission via a network port 160. [0040] The circular arrangement of QBlock recycling in FIG. 4 is advantageous because queue reuse minimizes the delays and memory management transactions associated with creating, ordering, and discarding queues. For example, the block of per-CoS queues 110a to 110d in FIG. 1 may be implemented as a three-dimensional array structure, where a first dimension distinguishes the class of service associated with the queue, the second dimension distinguishes QBlocks 112 within a CoS queue, and the third dimension delineates the available packet slots within the QBlock)”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Gunner with the teachings of Beckmann and Zhou in order to provide a system that details queues. The motivation for applying Gunner's teaching with the teachings of Beckmann and Zhou is to provide a system that allows for design choice. Beckmann, Zhou, and Gunner are analogous art directed towards core processing. Together, Beckmann, Zhou, and Gunner teach every limitation of the claimed invention. Since the teachings were analogous art known before the effective filing date of the claimed invention, one of ordinary skill could have applied the teachings of Gunner with the teachings of Beckmann and Zhou by known methods and gained expected results.
Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Beckmann, Zhou, and Gunner in further view of Anderson (Pub. No. US 2007/0094643).
Claims 6 and 13, the combination may not explicitly teach the limitation.
Anderson teaches “the method of claim 4, wherein a recycling event handler comprises an operation for enqueuing the second items of the plurality of recycled queues into the shared resource pool, the method comprising: executing the recycling event handler when a recycling event is triggered ([0036] FIG. 8 also shows network stack 80 that includes a post-interrupt handler 81 and packet directing module 82, which may direct packets up the stack to IP stack 83 and/or network tapping 84, as examples. Network tapping 84 refers to what tcpdump and lindump do, as examples. That is, network tapping 84 gets a copy of the packets (usually implemented as a pointer not as an actual copy) that is not filtered and processed by the IP stack; for example, the network tap does not check that packet checksums are correct, whereas the IP stack will drop packets with bad checksums. This exemplary embodiment can thus avoid sending the captured packets 101A up the network stack 80 by having the interrupt handler 801 trigger writing of the packet to trace file 102A. Interrupt handler 801 may be configured to not send the received packets 101A on up the stack 80, but instead causes network tracing tool 11A to write the packets to trace file 102A. In certain embodiments, captured packets are directly written (e.g., copied) from interrupt handler 801 to a trace file 102A, and standard packet processing up the kernel's network stack 80 is bypassed for such captured packets. This avoidance of processing the packets up the network stack 80 may allow the buffers for data 101A to be reused immediately, rather than allocating new buffers. Depending on the OS driver structure, network tracing tool 11A may, in certain embodiments, run as part of post-interrupt handler 81 rather than as part of interrupt handler 801 depending, for example, on where data 101A is received.)”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Anderson with the teachings of Beckmann, Zhou, and Gunner in order to provide a system that details recycled queues. The motivation for applying Anderson's teaching with the teachings of Beckmann, Zhou, and Gunner is to provide a system for improved resource usage. Beckmann, Zhou, Gunner, and Anderson are analogous art directed towards core processing. Together, Beckmann, Zhou, Gunner, and Anderson teach every limitation of the claimed invention. Since the teachings were analogous art known before the effective filing date of the claimed invention, one of ordinary skill could have applied the teachings of Anderson with the teachings of Beckmann, Zhou, and Gunner by known methods and gained expected results.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Beckmann and Zhou in further view of Oren (Pub. No. US 2018/0167340).
Claim 7, 14, the combination teaches the claim, wherein Beckmann teaches “the method of claim 1, wherein, a second operation for determining the first allocated queue, a third operation for dequeuing the one or more first items from the shared resource pool, and a fourth operation for enqueuing the one or more first items into the first allocated queue ([0044] However, if local message queue 24 is not empty, INMC 34 may then use an RAO (a) to get a copy of the current tail index for the shared message queue from shared cache (e.g., L3 cache 50) and (b) to increment the current tail index, thereby reserving the associated slot in shared message queue data buffer 70 for an item from local message queue 24. For instance, referring again to FIG. 2, arrows P3.1 and P3.2 show that, when INMC 34 uses an RAO to get a copy of tail index 76 from L3 cache 50 and to increment tail index 76, the RAO may involve CCPI 90. In other words, INMC may interact with tail index 76 via CCPI 90. And arrow P3.3 shows that INMC 34 may save the copy of tail index 77 to a register in core 20A.)”.
However, the combination may not explicitly teach the limitation.
Oren teaches “the method comprising: … an allocation event handler comprises a first operation for obtaining the identification of the first core …executing the allocation event handler when an allocation event is triggered ([0037] The wireless NIC 130 moves the packets from the hardware transmission queue 506 to the transmission FIFO 508 for transmission. After transmission, the wireless NIC 130 receives an indication of whether transmission was successful from the physical layer of the network 106. As shown, the indications are stored in the receive FIFO 510 and include the packet number and the processor core number. The wireless NIC 130 examines the processor core number and generates an interrupt to the associated processor core 122 to process the indication. For example, as shown, packet 6 triggers an interrupt to core number 1, i.e., processor core 122b, and packet 2 triggers an interrupt to core number 0, i.e., processor core 122a. In response to the interrupt, the processor core 122a, 122b may store the indication in the receive queue 504a, 504, respectively, or otherwise process the indication.)”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Oren with the teachings of Beckmann and Zhou in order to provide a system that details event handling. The motivation for applying Oren's teaching with the teachings of Beckmann and Zhou is to provide a system that allows for design choice. Beckmann, Zhou, and Oren are analogous art directed towards core processing. Together, Beckmann, Zhou, and Oren teach every limitation of the claimed invention. Since the teachings were analogous art known before the effective filing date of the claimed invention, one of ordinary skill could have applied the teachings of Oren with the teachings of Beckmann and Zhou by known methods and gained expected results.
Claims 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Beckmann.
Claim 16, Zhou may not explicitly teach the limitation.
Beckmann teaches “the method of claim 15, comprising: in response to an allocation event, fetching the first item from a shared resource pool ([0044] However, if local message queue 24 is not empty, INMC 34 may then use an RAO (a) to get a copy of the current tail index for the shared message queue from shared cache (e.g., L3 cache 50) and (b) to increment the current tail index, thereby reserving the associated slot in shared message queue data buffer 70 for an item from local message queue 24. For instance, referring again to FIG. 2, arrows P3.1 and P3.2 show that, when INMC 34 uses an RAO to get a copy of tail index 76 from L3 cache 50 and to increment tail index 76, the RAO may involve CCPI 90. In other words, INMC may interact with tail index 76 via CCPI 90. And arrow P3.3 shows that INMC 34 may save the copy of tail index 77 to a register in core 20A.); and pushing the first item into the first allocated queue ([0048] In particular, INMC may write the item to the slot indicated by copy of tail index 77. Referring again to FIG. 2, arrows P5.1 and P5.2 show INMC 34 writing item 80 to shared message queue data buffer 70 in L1DC 22B via CCPI 90. As shown at block 334 of FIG. 3, INMC 34 may then remove the current item from local message queue 24. And the process may then return to block 310, with INMC 34 receiving additional items from sender thread 64A and sending those items to the shared message queue, as indicated above. [0050] // write data to shared-message-queue, put into receiver's cache inter-node-put<T>(&q->buffer[tail-index-copy % q->capacity], item)”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Beckmann with the teachings of Zhou in order to provide a system that teaches managing data in core-based memory systems. The motivation for applying Beckmann's teaching with Zhou's teaching is to provide a system that allows for design choice. Zhou and Beckmann are analogous art directed towards core processing. Together, Zhou and Beckmann teach every limitation of the claimed invention. Since the teachings were analogous art known before the effective filing date of the claimed invention, one of ordinary skill could have applied the teachings of Beckmann with the teachings of Zhou by known methods and gained expected results.
Claim 17, the combination teaches the claim, wherein Zhou teaches “the method of claim 16, wherein the shared resource pool comprises a plurality of third items and each third item stores a third memory address range that can be reserved for any of the first core and the second core ([0095] The QoS engine is configured to implement optimized storage of the data that needs to be cached by any one of the at least two processing units in the cache unit. [0029] Optionally, memory address information of the first memory includes a start address of the first memory and a size of the first memory [0158] It should be understood that the terms used in the descriptions of the various examples in the specification and claims of this application are merely intended to describe specific examples, but are not intended to limit the examples. The terms “one” (“a” and “an”) and “the” of singular forms used in the descriptions of various examples and the appended claims are also intended to include plural forms, unless otherwise specified in the context clearly.)”.
Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Nyak (Pat. No. US 8,732,403).
Claim 18, Zhou may not explicitly teach the limitation.
Nyak teaches “the method of claim 15, comprising: in response to the first core requesting to release memory space, pushing a second item that has been provided to the first core to use into a first recycled queue dedicated to the first core, wherein a second memory address range recorded in the second item that is pushed into the first recycled queue is no longer used by the first core ([Col. 26, Line 16-28] (147) As known in the art, the caching layer 280 transfers a data block 1410 and its associated metadata header 1405 to the recycle queue 1415 upon the occurrence of a predetermined event. When a data block 1410 and its associated metadata header 1405 are transferred to the recycle queue 1415, the data block 1410 and its associated metadata header 1405 are deleted from its original storage location in the cache memory 225 and stored to the reserved storage space allocated to the recycle queue 1415 in the cache memory 225. This is conceptually shown in FIG. 14 by the dashed arrow lines from the original storage location in the cache memory 225 to the recycle queue 1415 for data block B2 and its associated metadata header H2.), and wherein the first recycled queue cannot be used by the second core (i.e. as taught by Zhou [0011])”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Nyak with the teachings of Zhou in order to provide a system that teaches an item may comprise address information. The motivation for applying Nyak's teaching with Zhou's teaching is to provide a system that allows for design choice. Zhou and Nyak are analogous art directed towards core processing. Together, Zhou and Nyak teach every limitation of the claimed invention. Since the teachings were analogous art known before the effective filing date of the claimed invention, one of ordinary skill could have applied the teachings of Nyak with the teachings of Zhou by known methods and gained expected results.
Claim 19, the combination teaches the claim, wherein Nyak teaches “the method of claim 18, comprising: in response to a recycling event, migrating the second item in the first recycled queue to an empty slot of a shared resource pool ([Col. 26, Line 16-28] (147) As known in the art, the caching layer 280 transfers a data block 1410 and its associated metadata header 1405 to the recycle queue 1415 upon the occurrence of a predetermined event. When a data block 1410 and its associated metadata header 1405 are transferred to the recycle queue 1415, the data block 1410 and its associated metadata header 1405 are deleted from its original storage location in the cache memory 225 and stored to the reserved storage space allocated to the recycle queue 1415 in the cache memory 225. This is conceptually shown in FIG. 14 by the dashed arrow lines from the original storage location in the cache memory 225 to the recycle queue 1415 for data block B2 and its associated metadata header H2.), and wherein the first recycled queue cannot be used by the second core (i.e. as taught by Zhou [0011])”.
Rationale to claim 18 is applied here.
Claim 20, the combination teaches the claim, wherein Zhou teaches “the method of claim 19, wherein the second memory address range stored in the shared resource pool can be reserved for any of the first core and the second core ([0295] Step 1202: The memory sharing control device allocates a first memory from the memory pool to the first processing unit, where the first memory is accessible by a second processing unit in the at least two processing units in another time period)”.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WYNUEL S AQUINO whose telephone number is (571)272-7478. The examiner can normally be reached 9AM-5PM EST M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock, can be reached at 571-272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WYNUEL S AQUINO/Primary Examiner, Art Unit 2199