DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is responsive to communication(s) filed on 12/17/2024. Claims 1-20 have been examined and are pending in this application.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 11/04/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
As enumerated in the table below, instant claims 1-20 are rejected on the ground of nonstatutory double patenting as being anticipated by claims 1-3, 5-6, 8, and 12-20 of US Patent 12,169,459.
Status
Instant Application
US Patent 12,169,459
Anticipation
1. A method for accessing data in a heterogeneous processing system using a dataflow graph having a plurality of nodes connected by edges, wherein the heterogeneous processing system includes a host processor, a first processor coupled to a first memory, a second processor coupled to a second memory, and switch and bus circuitry that communicatively couples the host processor, the first processor, and the second processor, the method comprising:
executing at least a portion of a first node of the plurality of nodes of the dataflow graph using the first processor;
executing at least a portion of a second node of the plurality of nodes of the dataflow graph using the second processor;
mapping virtual addresses of the second memory to physical addresses of the switch and bus circuitry; and
configuring the first processor to directly access the second memory using the mapped physical addresses; and
directly accessing, by the first processor, the second memory through the switch and bus circuitry.
12. A method of accessing data in a heterogeneous system to implement a machine learning system using a dataflow graph having a plurality of nodes connected by edges, wherein the heterogeneous system includes a host processor, a first processor coupled to a first memory, a second processor coupled to a second memory, and switch and bus circuitry that communicatively couples the host processor, the first processor, and the second processor, the method comprising:
executing at least a portion of a first node of the plurality of nodes of the dataflow graph using the first processor;
executing at least a portion of a second node of the plurality of nodes of the dataflow graph using the second processor;
mapping, by the host processor, virtual addresses of the second memory to physical addresses of the switch and bus circuitry;
configuring, by the host processor, the first processor to directly access the second memory using the mapped physical addresses according to memory extension operation; and
directly accessing, by the first processor, the second memory through the switch and bus circuitry.
Anticipation
2. The method of claim 1, wherein the method is for implementing a machine learning system using dataflow graphs.
Claim 12.
Anticipation
3. The method of claim 1, wherein configuring the first processor comprises configuring a reconfigurable dataflow unit.
13. The method of claim 12, wherein the configuring the first processor comprises configuring a reconfigurable dataflow unit.
Anticipation
4. The method of claim 1, wherein configuring the first processor comprises configuring a compute engine.
14. The method of claim 12, wherein the configuring the first processor comprises configuring a compute engine.
Anticipation
5. The method of claim 1, wherein the first processor comprises a reconfigurable processor, comprising: an array of coarse-grained reconfigurable units, each including an address generation unit, a plurality of memory units, and a plurality of compute units interconnected by an array-level network; a top-level network coupled to the address generation unit of the array of coarse-grained reconfigurable units; and an interface coupled between the top-level network and the switch and bus circuitry; wherein configuring the first processor comprises configuring the address generation unit of the array of coarse-grained reconfigurable unit in the reconfigurable processor to map virtual addresses of the second memory to physical addresses of the switch and bus circuitry.
15. The method of claim 12, wherein the first processor comprises a reconfigurable processor that includes: an array of coarse-grained reconfigurable units comprising, an address generation unit, a plurality of memory units, and a plurality of compute units interconnected by an array-level network; a top-level network coupled to the address generation unit of the array of coarse-grained reconfigurable units; and an interface coupled between the top-level network and the switch and bus circuitry; and the configuring the first processor comprises configuring the address generation unit of the array of coarse-grained reconfigurable unit in the reconfigurable processor to map virtual addresses of the second memory to physical addresses of the switch and bus circuitry.
Anticipation
6. The method of claim 1, further comprising: generating and storing, by programming the second processor, while executing the portion of the second node, to execute a first part of an application to generate and store first data into the second memory; and configuring the first processor to directly access, by the first processor, the first data from the second memory using mapped physical addresses while executing a second part of the application the portion of the first node.
16. The method of claim 12, further comprising: generating and storing, by the second processor, while executing the portion of the second node, first data into the second memory; and directly accessing, by the first processor, the first data from the second memory using mapped physical addresses while executing the portion of the first node.
Anticipation
7. The method of claim 1, further comprising configuring the first processor to write second data generated by the second part of the application into the first memory portion of the first node.
17. The method of claim 16, further comprising configuring the first processor to write second data generated by the portion of the first node.
Anticipation
8. The method of claim 1, wherein the heterogeneous system includes a host memory coupled to the host processor; and wherein the heterogeneous system is configured to provide the first processor with direct access to the host memory and to write second data output from executing the portion of the second node part of the application directly into the host memory.
18. The method of claim 16, the heterogeneous system including a host memory coupled to the host processor, further comprising configuring the first processor to directly access the host memory and to write second data output from executing the portion of the second node directly into the host memory.
Anticipation
9. The method of claim 1, wherein the heterogeneous system is configured to provide the first processor to execute a first part of an application to generating, by the first processor while executing the portion of the first node, the first data and directly writing the first data into the second memory using mapped physical addresses.
19. The method of claim 12, further comprising generating, by the first processor while executing the portion of the first node, first data and directly writing the first data into the second memory using mapped physical addresses.
Anticipation
10. The method of claim 1, the heterogeneous system including a host memory coupled to the host processor, executing the portion of the first node using the first processor to directly read first data from the host memory generating second data using the first data; and while executing the application, directly writing the second data into the second memory using mapped physical addresses.
20. The method of claim 12, the heterogeneous system including a host memory coupled to the host processor, further comprising executing the portion of the first node using the first processor to; directly read first data from the host memory, use the first data to generate second data, and directly write the second data into the second memory using mapped physical addresses.
Anticipation
11. A method of claim 1, wherein the heterogeneous system includes mapping the virtual addresses of the second memory to the physical addresses of the switch and bus circuitry; and wherein the first processor is configured to directly access the second memory using the mapped physical addresses.
Claim 12.
Anticipation
12. The method of claim 1, wherein the second processor is programmed to execute a first part of an application to generate and write first data into the second memory; and wherein the first processor is configured to directly access the first data from the second memory using mapped physical addresses while executing a second part of the application.
2. The heterogeneous processing system of claim 1, wherein the second processor is programmed to execute a first part of an application to generate and store first data into the second memory, and wherein the first processor is configured to directly access the first data from the second memory using the mapped physical addresses while executing a second part of the application using the first data.
Anticipation
13. The method of claim 12, wherein the first processor is configured to write second data generated by the second part of the application into the first memory.
3. The heterogeneous processing system of claim 2, wherein the first processor is further configured to store second data output from executing the second part of the application into the first memory.
Anticipation
14. The method of claim 13, wherein the first processor is configured to directly access the host memory and to write second data output from executing the second part of the application directly into the host memory.
5. The heterogeneous processing system of claim 2, further comprising: a host memory coupled to the host processor; and wherein the first processor is further configured to directly access the host memory and to store second data output from executing the second part of the application directly into the host memory.
Anticipation
15. The method of claim 1, wherein the first processor is configured to execute a first part of an application to generate first data and to directly write the first data into the second memory using mapped physical addresses.
6. The heterogeneous processing system of claim 1, wherein the first processor is configured to execute a first part of an application to generate first data and to directly write the first data into the second memory using the mapped physical addresses.
Anticipation
16. The method of claim 1, wherein the first processor is configured to directly read first data from the host memory while executing an application using the first data to generate second data; and directly writing the second data into the second memory using mapped physical addresses while executing the application.
8. The heterogeneous processing system of claim 1, further comprising: a host memory coupled to the host processor; and wherein the first processor is further configured to directly read first data from the host memory while executing an application using the first data to generate second data and to directly write the second data into the second memory while executing the application.
Anticipation
17. A heterogeneous processing system, comprising: a host processor; a first processor coupled to a first memory, wherein the first processor comprises a reconfigurable processor comprising: an array of coarse-grained reconfigurable units comprising, an address generation unit, a plurality of memory units, and a plurality of compute units interconnected by an array-level network; a top-level network coupled to the address generation unit of the array of coarse-grained reconfigurable units; and an interface coupled between the top-level network and an external port of the first processor; a second processor coupled to a second memory; and switch and bus circuitry that communicatively couples the host processor, the external port of the first processor, and the second processor; wherein the host processor is configured to map virtual addresses of the second memory to physical addresses of the switch and bus circuitry; and wherein the first processor can directly access the second memory using the mapped physical addresses.
1. A heterogeneous processing system, comprising: a host processor; a first processor coupled to a first memory, wherein the first processor comprises a reconfigurable processor that includes: an array of coarse-grained reconfigurable units comprising, an address generation unit, a plurality of memory units, and a plurality of compute units interconnected by an array-level network; a top-level network coupled to the address generation unit of the array of coarse-grained reconfigurable units; and an interface coupled between the top-level network and an external port of the first processor; a second processor coupled to a second memory; and switch and bus circuitry that communicatively couples the host processor, the external port of the first processor, and the second processor; wherein the host processor is programmed to configure the address generation unit of the array of coarse-grained reconfigurable unit in the first processor to map virtual addresses of the second memory to physical addresses of the switch and bus circuitry so that the first processor can directly access the second memory using the mapped physical addresses according to memory extension operation.
Anticipation
18. The heterogeneous processing system of claim 17, wherein the second processor is programmed to execute a first part of an application to generate and store first data into the second memory, and wherein the first processor is configured to directly access the first data from the second memory using the mapped physical addresses while executing a second part of the application using the first data.
2. The heterogeneous processing system of claim 1, wherein the second processor is programmed to execute a first part of an application to generate and store first data into the second memory, and wherein the first processor is configured to directly access the first data from the second memory using the mapped physical addresses while executing a second part of the application using the first data.
Anticipation
19. The heterogeneous processing system of claim 17, wherein the first processor is further configured to store second data output from executing the second part of the application into the first memory.
3. The heterogeneous processing system of claim 2, wherein the first processor is further configured to store second data output from executing the second part of the application into the first memory.
Anticipation
20. The heterogeneous processing system of claim 17, wherein the first processor may be a reconfigurable processor, a reconfigurable dataflow unit, or a compute engine.
14. The method of claim 12, wherein the configuring the first processor comprises configuring a compute engine.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over ChoFleming et al. US 2020/0310994 (“ChoFleming”) in view of Raikin et al. US 2016/0077976 (“Raikin”).
As per independent claim 1, ChoFleming teaches A method (A method comprising, see independent claim 9) for accessing data in a heterogeneous processing system using a dataflow graph having a plurality of nodes connected by edges (FIG. 3B illustrates a dataflow graph 300 for the program source of FIG. 3A … Dataflow graph 300 includes a pick node 304, switch node 306, and multiplication node 308. Para 0132), the method comprising:
executing at least a portion of a first node of the plurality of nodes of the dataflow graph using the first processor (array of processing elements 301 is configured to execute the dataflow graph 300 of FIG. 3B, para 0133);
executing at least a portion of a second node of the plurality of nodes of the dataflow graph using the second processor (array of processing elements 301 is configured to execute the dataflow graph 300 of FIG. 3B, para 0133).
ChoFleming discloses all of the claim limitations from above and additionally teaches an array of processing elements connected to a memory, but does not explicitly teach “wherein the heterogeneous processing system includes a host processor, a first processor coupled to a first memory, a second processor coupled to a second memory, and switch and bus circuitry that communicatively couples the host processor, the first processor, and the second processor” and “mapping virtual addresses of the second memory to physical addresses of the switch and bus circuitry; and configuring the first processor to directly access the second memory using the mapped physical addresses; and directly accessing, by the first processor, the second memory through the switch and bus circuitry”.
However, in an analogous art in the same field of endeavor, Raikin teaches wherein the heterogeneous processing system (FIG. 1 is a block diagram that schematically illustrates a computer system 20 having multiple diverse devices including CPU, GPU, SSD, and HCA communicating via PCIe, paras 0040-0042 and FIG. 1) includes a host processor (Computer system 20 comprises a CPU 32, para 0040 and FIG. 1), a first processor coupled to a first memory (Computer system 20 includes multiple Graphics Processing Units (GPUs), para 0042 and FIG. 1. Each GPU 44A-C in FIG. 1 comprises a local GPU memory 60, para 0047 and FIG. 1), a second processor coupled to a second memory (Computer system 20 includes multiple Graphics Processing Units (GPUs), para 0042 and FIG. 1. Each GPU 44A-C in FIG. 1 comprises a local GPU memory 60, para 0047 and FIG. 1), and switch and bus circuitry that communicatively couples the host processor, the first processor, and the second processor (CPU 32 and GPUs 44A-C are communicatively coupled via a switch fabric 40, para 0042 and FIG. 1. Communication over fabric 40 is carried out in accordance with a fabric address space referred to as physical address space or PCIe address space, para 0043 and FIG. 1),
mapping virtual addresses of the second memory to physical addresses of the switch and bus circuitry (In system 400, TA 424 provides address translation services to DEV_A including converting DEV_A address space to PCIe address space, para 0110. A PCI BAR (Base Address Register) assigns a range of the PCIe address space to a respective address range of local memory 408 so that this address range can be accessed directly by one or more other devices such as DEV_A, para 0107. A PCIe device may use a virtual address space that is larger than a physical address space of fabric 40, para 0044);
configuring the first processor to directly access the second memory using the mapped physical addresses (DEV_A is configured to directly access the local memories of multiple respective devices such as DEV_B, para 0114 and FIGS. 1 and 5-6, using the address translation services provided by TA 424, para 0110);
directly accessing, by the first processor, the second memory through the switch and bus circuitry (DEV_A is configured to directly access the local memories of multiple respective devices such as DEV_B, para 0114 and FIGS. 1 and 5-6, using the address translation services provided by TA 424, para 0110).
Given the teaching of Raikin, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of ChoFleming to include “wherein the heterogeneous processing system includes a host processor, a first processor coupled to a first memory, a second processor coupled to a second memory, and switch and bus circuitry that communicatively couples the host processor, the first processor, and the second processor” and “mapping virtual addresses of the second memory to physical addresses of the switch and bus circuitry; and configuring the first processor to directly access the second memory using the mapped physical addresses; and directly accessing, by the first processor, the second memory through the switch and bus circuitry”. The motivation would have been that the combination provides improved methods and systems for accessing the local memory of a device over PCIe and other suitable bus or network fabric types (Raikin, para 0025).
As per dependent claim 2, ChoFleming in combination with Raikin discloses the method of claim 1. ChoFleming teaches wherein the method is for implementing a machine learning system using dataflow graphs (Certain embodiments herein permit the introduction of new application-specific PEs, for example, for machine learning or security, and not merely a homogeneous combination. Para 0153).
As per dependent claim 3, ChoFleming in combination with Raikin discloses the method of claim 1. ChoFleming may not explicitly disclose, but Raikin teaches wherein configuring the first processor comprises configuring a reconfigurable dataflow unit (The CPU 32 configures mapping table 64 in the GPU 44A-C to translate between BAR1 addresses and respective E_REGION1 addresses, para 0066 and FIGS. 1 and 2A-B, so that other devices can directly access the local memory of another device, para 0114 and FIGS. 1 and 5-6).
The same motivation that was utilized for combining ChoFleming and Raikin as set forth in claim 1 is equally applicable to claim 3.
As per dependent claim 4, ChoFleming in combination with Raikin discloses the method of claim 1. ChoFleming may not explicitly disclose, but Raikin teaches wherein configuring the first processor comprises configuring a compute engine (The CPU 32 configures mapping table 64 in the GPU 44A-C to translate between BAR1 addresses and respective E_REGION1 addresses, para 0066 and FIGS. 1 and 2A-B, so that other devices can directly access the local memory of another device, para 0114 and FIGS. 1 and 5-6).
The same motivation that was utilized for combining ChoFleming and Raikin as set forth in claim 1 is equally applicable to claim 4.
As per dependent claim 11, ChoFleming in combination with Raikin discloses the method of claim 1. ChoFleming may not explicitly disclose, but Raikin teaches wherein the heterogeneous system includes mapping the virtual addresses of the second memory to the physical addresses of the switch and bus circuitry; and wherein the first processor is configured to directly access the second memory using the mapped physical addresses (The CPU 32 configures mapping table 64 in the GPU 44A-C to translate between BAR1 addresses and respective E_REGION1 addresses, para 0066 and FIGS. 1 and 2A-B, so that other devices can directly access the local memory of another device, para 0114 and FIGS. 1 and 5-6).
The same motivation that was utilized for combining ChoFleming and Raikin as set forth in claim 1 is equally applicable to claim 11.
Allowable Subject Matter
Claims 17-20 are allowed.
Claims 5-10 and 12-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Reasons for Allowance
The following is an examiner’s statement of reasons for allowance.
After careful consideration, examination and search of the claimed invention, the closest prior art of record does not teach or anticipate the claimed feature of claim 17 “wherein the first processor comprises a reconfigurable processor comprising: an array of coarse-grained reconfigurable units comprising, an address generation unit, a plurality of memory units, and a plurality of compute units interconnected by an array-level network; a top-level network coupled to the address generation unit of the array of coarse-grained reconfigurable units; and an interface coupled between the top-level network and an external port of the first processor; a second processor coupled to a second memory; and switch and bus circuitry that communicatively couples the host processor, the external port of the first processor, and the second processor; wherein the host processor is configured to map virtual addresses of the second memory to physical addresses of the switch and bus circuitry; and wherein the first processor can directly access the second memory using the mapped physical addresses” in combination with the overall claimed limitations when interpreted in light of the specification.
ChoFleming and Raikin are believed to be the closest prior art. However, ChoFleming alone or in any combination with Raikin does not disclose the subject matter of independent claim 17 with such specificity. Therefore, claim 17 is patentable.
Claims 18-20 directly or indirectly depend from claim 17 and these claims are also patentable by virtue of their dependency from base claim 17.
After careful consideration, examination and search of the claimed invention, the closest prior art of record does not teach or anticipate the claimed feature of claim 5 “wherein the first processor comprises a reconfigurable processor, comprising: an array of coarse-grained reconfigurable units, each including an address generation unit, a plurality of memory units, and a plurality of compute units interconnected by an array-level network; a top-level network coupled to the address generation unit of the array of coarse-grained reconfigurable units; and an interface coupled between the top-level network and the switch and bus circuitry; wherein configuring the first processor comprises configuring the address generation unit of the array of coarse-grained reconfigurable unit in the reconfigurable processor to map virtual addresses of the second memory to physical addresses of the switch and bus circuitry” in combination with the overall claimed limitations when interpreted in light of the specification.
ChoFleming and Raikin are believed to be the closest prior art. However, ChoFleming alone or in any combination with Raikin does not disclose the subject matter of dependent claim 5 with such specificity. Therefore, claim 5 is patentable.
After careful consideration, examination and search of the claimed invention, the closest prior art of record does not teach or anticipate the claimed feature of claim 6 “further comprising: generating and storing, by programming the second processor, while executing the portion of the second node, to execute a first part of an application to generate and store first data into the second memory; and configuring the first processor to directly access, by the first processor, the first data from the second memory using mapped physical addresses while executing a second part of the application the portion of the first node” in combination with the overall claimed limitations when interpreted in light of the specification.
ChoFleming and Raikin are believed to be the closest prior art. However, ChoFleming alone or in any combination with Raikin does not disclose the subject matter of dependent claim 6 with such specificity. Therefore, claim 6 is patentable.
After careful consideration, examination and search of the claimed invention, the closest prior art of record does not teach or anticipate the claimed feature of claim 7 “further comprising configuring the first processor to write second data generated by the second part of the application into the first memory portion of the first node” in combination with the overall claimed limitations when interpreted in light of the specification.
ChoFleming and Raikin are believed to be the closest prior art. However, ChoFleming alone or in any combination with Raikin does not disclose the subject matter of dependent claim 7 with such specificity. Therefore, claim 7 is patentable.
After careful consideration, examination and search of the claimed invention, the closest prior art of record does not teach or anticipate the claimed feature of claim 8 “wherein the heterogeneous system includes a host memory coupled to the host processor; and wherein the heterogeneous system is configured to provide the first processor with direct access to the host memory and to write second data output from executing the portion of the second node part of the application directly into the host memory” in combination with the overall claimed limitations when interpreted in light of the specification.
ChoFleming and Raikin are believed to be the closest prior art. However, ChoFleming alone or in any combination with Raikin does not disclose the subject matter of dependent claim 8 with such specificity. Therefore, claim 8 is patentable.
After careful consideration, examination, and search of the claimed invention, the closest prior art of record does not teach or anticipate the claimed feature of claim 9 “wherein the heterogeneous system is configured to provide the first processor to execute a first part of an application to generating, by the first processor while executing the portion of the first node, the first data and directly writing the first data into the second memory using mapped physical addresses” in combination with the overall claimed limitations when interpreted in light of the specification.
ChoFleming and Raikin are believed to be the closest prior art. However, ChoFleming alone or in any combination with Raikin does not disclose the subject matter of dependent claim 9 with such specificity. Therefore, claim 9 is patentable.
After careful consideration, examination, and search of the claimed invention, the closest prior art of record does not teach or anticipate the claimed feature of claim 10 “the heterogeneous system including a host memory coupled to the host processor, executing the portion of the first node using the first processor to directly read first data from the host memory generating second data using the first data; and while executing the application, directly writing the second data into the second memory using mapped physical addresses” in combination with the overall claimed limitations when interpreted in light of the specification.
ChoFleming and Raikin are believed to be the closest prior art. However, ChoFleming alone or in any combination with Raikin does not disclose the subject matter of dependent claim 10 with such specificity. Therefore, claim 10 is patentable.
After careful consideration, examination, and search of the claimed invention, the closest prior art of record does not teach or anticipate the claimed feature of claim 12 “wherein the second processor is programmed to execute a first part of an application to generate and write first data into the second memory; and wherein the first processor is configured to directly access the first data from the second memory using mapped physical addresses while executing a second part of the application” in combination with the overall claimed limitations when interpreted in light of the specification.
ChoFleming and Raikin are believed to be the closest prior art. However, ChoFleming alone or in any combination with Raikin does not disclose the subject matter of dependent claim 12 with such specificity. Therefore, claim 12 is patentable.
Dependent claims 13-14 depend directly or indirectly from claim 12, and these claims are also patentable by virtue of their dependency from patentable claim 12.
After careful consideration, examination, and search of the claimed invention, the closest prior art of record does not teach or anticipate the claimed feature of claim 15 “wherein the first processor is configured to execute a first part of an application to generate first data and to directly write the first data into the second memory using mapped physical addresses” in combination with the overall claimed limitations when interpreted in light of the specification.
ChoFleming and Raikin are believed to be the closest prior art. However, ChoFleming alone or in any combination with Raikin does not disclose the subject matter of dependent claim 15 with such specificity. Therefore, claim 15 is patentable.
After careful consideration, examination, and search of the claimed invention, the closest prior art of record does not teach or anticipate the claimed feature of claim 16 “wherein the first processor is configured to directly read first data from the host memory while executing an application using the first data to generate second data; and directly writing the second data into the second memory using mapped physical addresses while executing the application” in combination with the overall claimed limitations when interpreted in light of the specification.
ChoFleming and Raikin are believed to be the closest prior art. However, ChoFleming alone or in any combination with Raikin does not disclose the subject matter of dependent claim 16 with such specificity. Therefore, claim 16 is patentable.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZUBAIR AHMED, whose telephone number is (571) 272-1655. The examiner can normally be reached from 7:30 AM to 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HOSAIN T. ALAM can be reached at (571) 272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZUBAIR AHMED/Examiner, Art Unit 2132
/HOSAIN T ALAM/Supervisory Patent Examiner, Art Unit 2132