DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Claims 1-20 have been examined.
Priority
Applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d) is acknowledged. The instant application claims priority to Chinese Application 202210744371.5, filed June 28, 2022.
Information Disclosure Statement
The Applicant's Information Disclosure Statement filed November 10, 2025 is acknowledged by the Examiner, and the references cited therein have been considered in the examination of the claims now pending. A copy of the PTOL-1449, initialed and dated by the Examiner, is attached to the instant Office action.
Specification
The disclosure is objected to because of the following informalities.
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Appropriate correction is required. The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which Applicant may become aware in the specification.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent No. 6,002,881 by York et al. (hereinafter referred to as “York”) in view of US Publication No. 2023/0297508 by Zirr et al. (hereinafter referred to as “Zirr”).
Regarding claims 1, 9, and 13, taking claim 1 as representative, York discloses:
…[an apparatus that] comprises a…[core], a memory management unit, and a coprocessor connected between the …[core] and the memory management unit, and wherein (York discloses, at Figure 1 and related description, a system comprising a core, a memory, and a coprocessor. As disclosed at col. 56, lines 56-59, the core includes a memory management unit. As disclosed at col. 5, lines 20-26, the system allows transfer of data directly between memory and the coprocessor and from the coprocessor to the core, which discloses the coprocessor being connected between the core and the memory management unit.):
the …[core] is configured to send a read instruction to the memory management unit, wherein the read instruction comprises an address of a first operand and a first identifier, and the first identifier indicates to perform an operation on the first operand in the coprocessor (York discloses, at col. 5, lines 20-26, the core sending an instruction to load data from the memory to the coprocessor. As disclosed at col. 11, the instruction includes a field identifying an address of the operand, e.g., Rn, and an identifier, e.g., Piccolo1, indicating to perform an operation on the operand in the coprocessor.);
the memory management unit is configured to: obtain the first operand from a first memory (York discloses, at col. 5, lines 20-26, loading data from the memory to the coprocessor.); and
send the first operand to the coprocessor (York discloses, at col. 5, lines 20-26, loading data from the memory to the coprocessor.); and
the coprocessor is configured to perform the operation on the first operand to obtain an operation result (York discloses, at Figure 1 and related description, the coprocessor operates on the loaded data.).
York does not explicitly disclose the aforementioned apparatus is a graphics processing apparatus and the aforementioned apparatus comprises a shader.
However, in the same field of endeavor (e.g., processing), Zirr discloses:
a graphics processor comprising a shader (Zirr discloses, at Figure 8 and related description, a graphics processor comprising a shader.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify York to include the GPU and shader disclosed by Zirr in order to improve performance in processing graphics data.
Regarding claims 2 and 14, taking claim 2 as representative, York, as modified, discloses the elements of claim 1, as discussed above. York also discloses:
a first end of the …[core] is coupled to a first end of the memory management unit, a first end of the coprocessor is coupled to a second end of the …[core], and a second end of the coprocessor is coupled to a second end of the memory management unit (York discloses, at Figure 1, the core is coupled to both the memory and coprocessor and the coprocessor is also coupled to the memory.).
York does not explicitly disclose the aforementioned apparatus comprises a shader.
However, in the same field of endeavor (e.g., processing), Zirr discloses:
a shader (Zirr discloses, at Figure 8 and related description, a graphics processor comprising a shader.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify York to include the GPU and shader disclosed by Zirr in order to improve performance in processing graphics data.
Regarding claims 3 and 15, taking claim 3 as representative, York, as modified, discloses the elements of claim 1, as discussed above. York also discloses:
the …[core] comprises a scheduler and a second memory (York discloses, at Figure 8 and related description, the core includes a decoder and a register bank, which discloses a scheduler and a second memory.); and
a first end of the scheduler is coupled to a first end of the second memory, a second end of the scheduler is coupled to the first end of the memory management unit, the second end of the memory management unit is coupled to the second end of the coprocessor, and the first end of the coprocessor is coupled to the first end of the second memory (York discloses, at Figure 8 and related description, the decoder is connected to the register file and the memory control unit, the memory control unit is coupled to the coprocessor, and the coprocessor is coupled to the register file. See also col. 13, lines 33-34, which discloses moving data from the coprocessor output to the core register file.).
York does not explicitly disclose the aforementioned apparatus comprises a shader.
However, in the same field of endeavor (e.g., processing), Zirr discloses:
a shader (Zirr discloses, at Figure 8 and related description, a graphics processor comprising a shader.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify York to include the GPU and shader disclosed by Zirr in order to improve performance in processing graphics data.
Regarding claims 4 and 16, taking claim 4 as representative, York, as modified, discloses the elements of claim 1, as discussed above. York also discloses:
the scheduler is configured to send the read instruction to the memory management unit (York discloses, at Figure 8 and related description, forwarding instructions from the decoder to the memory control unit, which includes the aforementioned coprocessor memory access instructions.);
the second memory is configured to receive the operation result obtained by the coprocessor (York discloses, at col. 13, lines 33-34, moving output data from the coprocessor output to the core register file.); and
the scheduler is further configured to: receive a first indication sent by the coprocessor, wherein the first indication indicates that the operation on the first operand is completed (York discloses, at col. 57, line 51-col. 58, line 4, the coprocessor signals to the core that the read operation is complete.); and
process the operation result based on the operation result and an indication of a program (York discloses, at col. 57, line 51-col. 58, line 4, the core updates the address information based on completion.).
Regarding claims 5 and 17, taking claim 5 as representative, York, as modified, discloses the elements of claim 1, as discussed above. York also discloses:
the coprocessor comprises a cache, a register, a selector, and a computing circuit (York discloses at Figure 5 and related description, a cache, a register bank, and a processor core. York also discloses, at Figure 4 and related description, multiplexers, which discloses selectors.);
…a first end of the selector, a first end of the register is coupled to a second end of the selector, and a third end of the selector is coupled to a first end of the computing circuit (York discloses, at Figure 4, selecting between two register locations and forwarding the selected data to the processor core.);
a second end of the computing circuit is coupled to the second end of the memory management unit (York discloses, at Figure 1 and related description, the coprocessor is coupled to the memory.); and
a third end of the computing circuit is coupled to the first end of the second memory (York discloses, at Figure 1 and related description, the coprocessor is coupled to the core, which includes memory therein.).
York does not explicitly disclose a first end of the cache is coupled to the aforementioned selector.
However, in the same field of endeavor (e.g., processing), Zirr discloses:
a data cache (Zirr discloses, at Figure 2A and related description, a data cache.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify York to include the data cache disclosed by Zirr in order to improve performance by providing an alternative source for data, e.g., a source that can be preloaded with previously used data.
Regarding claims 6, 10, and 18, taking claim 6 as representative, York, as modified, discloses the elements of claim 1, as discussed above. York also discloses:
…the computing circuit is configured to: receive the first operand from the memory management unit (York discloses, at Figure 1 and related description, the coprocessor operates on data received from the memory.);
receive, …a third operand in the register (York discloses, at Figure 5 and related description, the coprocessor operates on data retrieved from a register bank.),
…obtain the operation result based on the first operand and the third operand (York discloses, at Figure 1 and related description, the coprocessor operates on data, which discloses two operands and a result.).
York does not explicitly disclose the shader further comprises an arithmetic logical circuit, a first end of the arithmetic logical circuit is coupled to a third end of the scheduler, and a second end of the arithmetic logical circuit is coupled to a second end of the cache; the arithmetic logical circuit is configured to: receive an operation instruction sent by the scheduler; and obtain a second operand through computing according to the operation instruction; the cache is configured to store the second operand; and receive, from the selector, the second operand from the cache, or wherein the second operand is a preconfigured constant value; and obtain the operation result based on the first operand and the second operand.
However, in the same field of endeavor (e.g., processing), Zirr discloses:
a compute unit that comprises an arithmetic logical circuit coupled to a thread dispatcher and cache (Zirr discloses, at Figure 2D, a GPU comprising cores that receive instructions and data via a cache, which discloses the shader further comprises an arithmetic logical circuit, a first end of the arithmetic logical circuit is coupled to a third end of the scheduler, and a second end of the arithmetic logical circuit is coupled to a second end of the cache; the arithmetic logical circuit is configured to: receive an operation instruction sent by the scheduler; obtain a second operand through computing according to the operation instruction; the cache is configured to store the second operand; select the second operand from the cache; and obtain the operation result based on the first operand and the second operand.);
an operand is a preconfigured constant value (Zirr discloses, at Figure 2D and related description, a constant cache, which discloses operands that are preconfigured constants.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify York to include the data cache disclosed by Zirr in order to improve performance by providing an additional computing mechanism and an alternative source for data.
Regarding claims 7, 11, and 19, taking claim 7 as representative, York, as modified, discloses the elements of claim 1, as discussed above. York also discloses:
the computing circuit is configured to: obtain a first intermediate operation result after performing computing on the first operand and the second operand, or obtain a first intermediate operation result after performing computing on the first operand and the third operand (York discloses, at Figure 3 and related description, multiplying two values to obtain a product, which discloses an intermediate operation result.); and
add the first intermediate operation result as an addend to at least one other intermediate operation result to obtain the operation result (York discloses, at Figure 3 and related description, accumulating the product with another value.).
Regarding claims 8, 12, and 20, taking claim 8 as representative, York, as modified, discloses the elements of claim 1, as discussed above. York also discloses:
the computing circuit comprises: at least one multiplication circuit and at least one addition circuit, wherein: each multiplication circuit in the at least one multiplication circuit is configured to perform a multiply operation; each addition circuit in the at least one addition circuit is configured to perform an add operation (York discloses, at Figure 3 and related description, a multiplier and an adder.); and
a combination of a subset of the at least one multiplication circuit and a subset of the at least one addition circuit is used for a floating-point multiply–add operation or a floating-point multiply–accumulate operation (York discloses, at Figure 3 and related description, performing a multiply accumulate operation. As disclosed at col. 1, lines 47-51, the coprocessor can operate on floating point values.).
Conclusion
The following prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure.
US 20220366628 by Alfieri discloses a GPU, shader, scheduler, and memory.
US 20020069344 by Guey discloses a CPU coupled to memory, a coprocessor coupled to both, and an instruction that identifies the coprocessor and coprocessor register.
US 6247113 by Jaggar discloses a coprocessor instruction.
US 20220214887 by Kesiraju discloses a CPU coupled to a coprocessor and memory coupled to both of them.
US 20230028666 by Ray discloses an accelerator between memory and a GPU.
US 20210117333 by Qureshi discloses direct data access between accelerator and memory.
US 20160092238 by Codrescu discloses a CPU coupled to memory, a coprocessor coupled to both.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAWN DOMAN whose telephone number is (571) 270-5677. The examiner can normally be reached Monday through Friday, 8:30 a.m. to 6:00 p.m. Eastern Time.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta can be reached on 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHAWN DOMAN/Primary Examiner, Art Unit 2183