DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sen et al. (US Patent Application Pub. No. 2020/0104275 A1) in view of Kapur et al. (US Patent Application Pub. No. 2008/0025289 A1) and Jau et al. (US 2020/0029458), cited in the IDS of 02/05/2025, in further view of Byers et al. (US 2020/0125529), previously cited as prior art in the IDS of 09/20/2021, and further in view of Marolia et al. (US 2019/0297015).
It has been noted that a claimed invention is unpatentable if the differences between it and the prior art are "such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art." 35 U.S.C. § 103(a) (2000); KSR Int'l Co. v. Teleflex Inc., 127 S. Ct. 1727, 1734 (2007); Graham v. John Deere Co., 383 U.S. 1, 13-14 (1966).
In Graham, the Court held that the obviousness analysis is bottomed on several basic factual inquiries: "[(1)] the scope and content of the prior art are to be determined; [(2)] differences between the prior art and the claims at issue are to be ascertained; and [(3)] the level of ordinary skill in the pertinent art resolved." 383 U.S. at 17. See also KSR, 127 S. Ct. at 1734. "The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results." KSR, 127 S. Ct. at 1739.
"When a work is available in one field of endeavor, design incentives and other market forces can prompt variations of it, either in the same field or in a different one. If a person of ordinary skill in the art can implement a predictable variation, § 103 likely bars its patentability." Id. at 1740.
"For the same reason, if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill." Id.
"Under the correct analysis, any need or problem known in the field of endeavor at the time of invention and addressed by the patent can provide a reason for combining the elements in the manner claimed." Id. at 1742.
As per claims 1 and 20, Sen teaches a system (Fig. 1), comprising: a first server (Figs. 1 and 5, target computing platform 150 or other computing devices, [0074]), comprising: a stored-program processing circuit (Figs. 1 and 5, controller 582; note code/data 586), a network interface circuit (Fig. 5, network interface 550, [0074]), a cache-coherent switch (cache-coherent interface, [0015], "..some interconnects and fabrics such as Intel Compute Express Link (CXL)"; note cache coherent interconnect for accelerators, Fig. 1a, [0032] "remote direct memory access (RDMA)…CXL…Cache Coherent Interconnect for Accelerators (CCIX)..", further note [0032-0074]), and a first memory module (Figs. 1 and 5, memory subsystem 520, memory controller 522, [0074]).
Sen does not explicitly disclose a cache-coherent switch and wherein: the first memory module is connected to the cache-coherent switch and the network interface circuit (Examiner’s amendment 9/18/24), the cache-coherent switch is connected to the server-linking switch, and the stored-program processing circuit is connected to the cache-coherent switch.
Kapur discloses a cache-coherent switch (Figs. 1 and 2, a cache coherent interconnect (CCI) port 102); wherein: the first memory module is connected to the cache-coherent switch (a connection between memory controllers 105(1)-105(N)), the cache-coherent switch is connected to the server-linking switch (a connection between the cache coherent interconnect (CCI) port 102 and one or more peripheral interconnects (e.g., 135, 136, and 130); [0030-0031]; Figs. 1 and 2, the IOH 115 may include a cache coherent interconnect (CCI) port 102 connected to the processor 101, one or more peripheral interconnects (e.g., 135, 136, and 130), and datapath (DP) logic 102 (e.g., a switch) to route transactions between the processor 101, I/O devices 104 (e.g., 195, 196, 190), and any internal agents (e.g., 140, 145, 150)), and the stored-program processing circuit is connected to the cache-coherent switch ([0030-0031], Figs. 1 and 2, each processor may include a memory controller, memory controllers 105(1)-105(N), and each may be coupled to a corresponding system memory 110(1)-110(N)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sen's system with Kapur's electronic system, for use in a workstation or server, for controlling traffic flow and ordering of packet data between peripheral devices such as a network interface controller or storage interface. Doing so would simplify the processing operations, and the apparatus provides an ordering and transaction flow for the packet processing engine, thus ensuring correct operation both within and external to the packet processing engine/virtualization engine (Kapur, [0020-0021]), to obtain the invention as specified in claim 1.
Regarding Applicant's previous arguments, it is noted that a combination that only unites old elements with no change in the respective functions of those old elements yields predictable results; absent evidence that the modifications necessary to effect the combination of elements were uniquely challenging or difficult for one of ordinary skill in the art, the claim is unpatentable as obvious under 35 U.S.C. 103(a). Ex parte Smith, 83 USPQ2d at 1518-19 (BPAI 2007) (citing KSR, 127 S. Ct. at 1740, 82 USPQ2d at 1396). As per claims 14, 23, and 28, see the rejections for claim 1 above.
Neither Sen nor Kapur specifically teaches the stored-program processing circuit being connected to the first memory module via the cache-coherent switch and the network interface circuit. Jau discloses a first memory device (e.g., one of memory devices 442); and a second switch (e.g., top-of-rack (TOR) switch 114) connected to the first server ([0031]; the TOR 114 is connected to the processing server 120 through the smart NIC card 130 via a network link; see again para. 0031), wherein the first memory device is connected to the first switch via a first interface (e.g., physical PCIe 422), and the first switch is connected to the second switch (Jau's second switch corresponds to the network interface circuit) via a second interface different from the first interface (see again [0031]; the smart NIC card 130 is connected to the TOR 114 via a network link).
Sen-Kapur and Jau are analogous because they are from the same field of endeavor, computer architecture and memory management. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Sen-Kapur and Jau before him or her, to modify Sen-Kapur's device by incorporating Jau's first switch and second switch (Jau's second switch corresponds to the network interface circuit) connected to the first memory device via a second interface different from the first interface (see again [0031]; the smart NIC card 130 is connected to the TOR 114 via a network link), because doing so would expand the flexibility of Sen-Kapur by providing a plurality of network switches when identifying data stored in Jau's node memories 442.
Further, Byers discloses a controller being configured to communicate between a cache coherent interface and a memory interface (note 'supervisor 37' that communicates via 'high speed interconnect 40' between cache coherent interface 'coherent bus interface 32' and 'memory interface controller 31,' Fig. 3, [0033-0034]).
Sen-Kapur-Jau and Byers are analogous because they are from the same field of endeavor, computer architecture and memory management. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Sen-Kapur-Jau and Byers before him or her, to modify Sen-Kapur-Jau's device by incorporating Byers' "RDMA interface modules 24" and inter-socket coherency bus ([0028], [0029]). The suggestion/motivation for doing so would have been to implement an RDMA interface module on a multi-socket motherboard over a coherency interface and to transmit the data from the RDMA interface module on an RDMA link to a server in an RDMA domain (Byers [0013]) without significantly loading either computer's operating system, thereby freeing up resources and facilitating faster data transfer and low-latency networking (Byers [0003]). Therefore, it would have been obvious to combine Byers with Sen-Kapur-Jau to obtain the invention as specified in the instant claim(s).
However, none of Sen, Kapur, Jau, and Byers appears to explicitly disclose a cache coherent memory protocol to connect various types of memory having different characteristics and to perform memory address translation between the processing circuit and a corresponding memory of the various types of memory, the various types of memory comprising the first memory module.
However, Marolia discloses the controller having a cache coherent memory protocol being configured to convert a packet (a controller, "address decoding logic 312," see Fig. 3B, [0042]) from the cache-coherent interface ("Control plane 306 can direct traffic" ([0039]) from cache coherency, "accelerator fabric can be used to connect GPUs or accelerators together (e.g., Nvidia NVLink, PCIe switching, CCIX, GenZ, [cache coherent interface] Compute Express Link (CXL))," see Fig. 1, [0017], last sentence) to data suitable for the memory interface ("[memory interface] secondary head is connected to an accelerator scale-up fabric to provide a NUMA input/output solution," see Fig. 1, [0020], via "One or more DMA engines 318 can support direct copy of received data... to a destination memory buffer 324," see Fig. 3D, last two sentences of [0046]); and to perform memory address translation between the processing circuit and a corresponding memory of the various types of memory (note the first network circuit, "Network interface 250 can support multi protocols," see Fig. 2B, [0025], via a "host to device fabric 256") that connects the controller to the stored-program processing circuit, wherein the various types of memory comprise the first memory module (a secondary head can be used whereby a direct memory access (DMA) operation is invoked to copy the portion of the received packet through the accelerator fabric to the destination memory [and subsequent stored-program processing circuit], see Fig. 1, [0020], such as "accelerator scale-up fabric 322," see Fig. 3C, where each accelerator comprises additional stored-program processing circuitry, "compute nodes with high bandwidth memory (HBM) can be used to process the large datasets," [0017], 1st sentence, and "high bandwidth memory" (HBM), see Fig. 2B, [0068]).
Sen-Kapur-Jau-Byers and Marolia are analogous because they are from the same field of endeavor, computer architecture and memory management.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Sen-Kapur-Jau-Byers and Marolia before him or her, to modify Sen-Kapur-Jau-Byers' device by incorporating Marolia's "network interface 300" and respective "address translation service 312" (Fig. 3B), which would allow Sen-Kapur-Jau-Byers' device to perform memory address translation between the processing circuit and a corresponding memory of the various types of memory. The suggestion/motivation for doing so would have been to allow algorithms to be implemented as data-parallel, model-parallel, or hybrid models, with consolidation of results from networked nodes for the final results (Marolia [0003]). Therefore, it would have been obvious to combine Marolia with Sen-Kapur-Jau-Byers' device to obtain the invention as specified in the instant claim(s).
As per claim 2, Sen-Kapur-Jau in further view of Byers or Marolia teaches further comprising a second memory module connected to the cache-coherent switch, wherein the first memory module comprises volatile memory and the second memory module comprises persistent memory. Kapur discloses a cache-coherent switch (Figs. 1 and 2, a cache coherent interconnect (CCI) port 102); wherein: the first memory module is connected to the cache-coherent switch (a connection between memory controllers 105(1)-105(N)), the cache-coherent switch is connected to the server-linking switch (a connection between the cache coherent interconnect (CCI) port 102 and one or more peripheral interconnects (e.g., 135, 136, and 130); [0030-0031]; Figs. 1 and 2, the IOH 115 may include a cache coherent interconnect (CCI) port 102 connected to the processor 101, one or more peripheral interconnects (e.g., 135, 136, and 130), and datapath (DP) logic 102 (e.g., a switch) to route transactions between the processor 101, I/O devices 104 (e.g., 195, 196, 190), and any internal agents (e.g., 140, 145, 150)), and the stored-program processing circuit is connected to the cache-coherent switch ([0030-0031], Figs. 1 and 2, each processor may include a memory controller, memory controllers 105(1)-105(N), and each may be coupled to a corresponding system memory 110(1)-110(N)). Byers discloses the controller being configured to communicate between the cache coherent interface and the memory interface (note 'supervisor 37' that communicates via 'high speed interconnect 40' between cache coherent interface 'coherent bus interface 32' and 'memory interface controller 31,' Fig. 3, [0033-0034]).
As per claim 3, Sen-Kapur-Jau in further view of Byers or Marolia teaches wherein the cache-coherent switch is configured to virtualize the first memory module and the second memory module (Sen [0032-0074]; Kapur [0020-0031]; Byers discloses the controller being configured to communicate between the cache coherent interface and the memory interface, note 'supervisor 37' that communicates via 'high speed interconnect 40' between cache coherent interface 'coherent bus interface 32' and 'memory interface controller 31,' Fig. 3, [0033-0034]).
As per claim 4, Sen-Kapur-Jau in further view of Byers or Marolia teaches wherein the first memory module comprises flash memory, and the cache-coherent switch is configured to provide a flash translation layer for the flash memory (Sen [0032-0074]; Kapur [0020-0031]; Byers discloses the controller being configured to communicate between the cache coherent interface and the memory interface, note 'supervisor 37' that communicates via 'high speed interconnect 40' between cache coherent interface 'coherent bus interface 32' and 'memory interface controller 31,' Fig. 3, [0033-0034]).
As per claim 5, Sen-Kapur-Jau in further view of Byers or Marolia teaches wherein the cache-coherent switch is configured to: monitor an access frequency of a first memory location in the first memory module; determine that the access frequency exceeds a first threshold; and copy the contents of the first memory location into a second memory location, the second memory location being in the second memory module (Sen [0032-0074]; Kapur [0020-0031]; Byers discloses the controller being configured to communicate between the cache coherent interface and the memory interface, note 'supervisor 37' that communicates via 'high speed interconnect 40' between cache coherent interface 'coherent bus interface 32' and 'memory interface controller 31,' Fig. 3, [0033-0034]).
As per claim 6, Sen-Kapur-Jau in further view of Byers or Marolia teaches wherein the second memory module comprises high bandwidth memory (HBM). (Sen [0032-0074], Kapur [0020-0031])
As per claim 7, Sen-Kapur-Jau in further view of Byers or Marolia teaches wherein the cache-coherent switch is configured to maintain a table for mapping processor-side addresses to memory-side addresses. (Sen [0032-0074], Kapur [0020-0031])
As per claim 8, Sen-Kapur in further view of Byers or Marolia teaches further comprising: a second server, and a network switch connected to the first server and the second server (Figs. 1 and 2, Kapur teaches a cache coherent interconnect (CCI) port 102; Byers discloses the controller being configured to communicate between the cache coherent interface and the memory interface, note 'supervisor 37' that communicates via 'high speed interconnect 40' between cache coherent interface 'coherent bus interface 32' and 'memory interface controller 31,' Fig. 3, [0033-0034]).
Claims 9-19 are rejected under 35 U.S.C. 103 as being unpatentable over Sen et al. (US Patent Application Pub. No. 2020/0104275 A1) and Kapur et al. (US Patent Application Pub. No. 2008/0025289 A1) and Jau et al. (US 2020/0029458), cited in the IDS of 02/05/2025, and Byers and Marolia, and in further view of Devireddy (US Patent Application Pub. No. 2020/0412798 A1), all previously cited.
As per claim 9, Sen-Kapur-Jau-Byers-Marolia does not expressly teach wherein the network switch comprises a top of rack (ToR) Ethernet switch. Devireddy teaches a plurality of racks 210 that may include one or more top-of-rack (TOR) switches 215 ([0025-0029]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sen-Kapur-Jau-Byers with Devireddy's system, which includes a TOR CXL switch, for use in a workstation or server, for controlling traffic flow and ordering of packet data between peripheral devices such as a network interface controller or storage interface. Doing so would simplify the processing operations, and the apparatus provides an ordering and transaction flow for the packet processing engine, in order to obtain the invention as specified in the instant claim (Devireddy [0025-0029], Sen [0032-0074], Kapur [0020-0031]).
As per claim 10, Sen-Kapur-Jau-Byers-Marolia-Devireddy teaches wherein the cache-coherent switch is configured to receive remote direct memory access (RDMA) requests, and to send RDMA responses (Devireddy [0025-0029]; Sen [0032-0074]; Kapur [0020-0031]; Byers discloses the controller being configured to communicate between the cache coherent interface and the memory interface, note 'supervisor 37' that communicates via 'high speed interconnect 40' between cache coherent interface 'coherent bus interface 32' and 'memory interface controller 31,' Fig. 3, [0033-0034]).
As per claim 11, Sen-Kapur-Jau-Byers-Marolia-Devireddy teaches wherein the cache-coherent switch is configured to receive the remote direct memory access (RDMA) requests through the ToR Ethernet switch and through the network interface circuit, and to send RDMA responses through the ToR Ethernet switch and through the network interface circuit. (Devireddy [0025-0029], Sen [0032-0074], Kapur [0020-0031], Byers discloses the controller being configured to communicate between the cache coherent interface and the memory interface (note ‘supervisor 37’ that communicates via ‘high speed interconnect 40’ between cache coherent interface ‘coherent bus interface 32’ and ‘memory interface controller 31,’ Fig. 3, [0033-0034]))
As per claim 12, Sen-Kapur-Jau-Byers-Marolia-Devireddy teaches wherein the cache coherent memory protocol is
As per claim 13, Sen-Kapur-Jau-Byers-Marolia-Devireddy teaches wherein the first server comprises an expansion socket adapter, connected to an expansion socket of the first server, the expansion socket adapter comprising: the cache-coherent switch; and a memory module socket, the first memory module being connected to the cache-coherent switch through the memory module socket (Devireddy [0025-0029]; Sen [0032-0074]; Kapur [0020-0031]; Byers discloses the controller being configured to communicate between the cache coherent interface and the memory interface, note 'supervisor 37' that communicates via 'high speed interconnect 40' between cache coherent interface 'coherent bus interface 32' and 'memory interface controller 31,' Fig. 3, [0033-0034]).
As per claim 14, Sen-Kapur-Jau-Byers-Marolia-Devireddy teaches wherein the memory module socket comprises an M.2 socket. (Devireddy [0025-0029], Sen [0032-0074], Kapur [0020-0031])
As per claim 15, Sen-Kapur-Jau-Byers-Marolia-Devireddy teaches wherein the network interface circuit is on the expansion socket adapter. (Devireddy [0025-0029], Sen [0032-0074], Kapur [0020-0031])
As per claim 16, Sen-Kapur-Jau-Byers-Marolia-Devireddy teaches a method for performing remote direct memory access in a computing system (Sen, note cache coherent interconnect for accelerators, Fig. 1a, [0016, 0031-0032] "remote direct memory access (RDMA)…CXL…Cache Coherent Interconnect for Accelerators (CCIX)..", further note Sen [0015, 0016, 0031-0074]; or Byers' "RDMA interface modules 24" and inter-socket coherency bus, Byers [0003], [0013], [0028-0029]), the computing system comprising: a first server (Sen-Kapur-Jau-Byers-Marolia) and a second server (Sen, "..servers or other computing devices," Fig. 5, [0074]), the first server comprising: a stored-program processing circuit (Sen-Kapur-Jau-Byers-Marolia – note rejection for claim 1), a network interface circuit (Sen-Kapur-Jau-Byers-Marolia – note rejection for claim 1), a cache-coherent switch (Sen-Kapur-Jau-Byers-Marolia – note rejection for claim 1), and a first memory module (Sen-Kapur-Jau-Byers-Marolia – note rejection for claim 1), the first memory module being connected to the stored-program processing circuit via the cache-coherent switch (Sen-Kapur-Jau-Byers-Marolia – note rejection for claim 1), the switch circuit being configured to support a cache coherent memory protocol to connect various types of memory having different characteristics and perform memory address translation between the processing circuit and a corresponding memory of the various types of memory, the various types of memory comprising the first memory module (Sen-Kapur-Jau-Byers-Marolia – note rejection for claim 1), the method comprising: receiving, by the cache-coherent switch (Sen-Kapur-Jau-Byers-Marolia – note rejection for claim 1; Devireddy), a remote direct memory access (RDMA) request (Sen-Kapur-Jau-Byers-Marolia – note rejection for claim 1; Sen [0016, 0031-0032, 0074]; Byers [0003], [0013], [0028-0029]), and sending, by the cache-coherent switch, an RDMA response (Sen-Kapur-Jau-Byers-Marolia – note rejection for claim 1; Sen [0016, 0031-0032, 0074]; Byers [0003], [0013], [0028-0029]).
As per claim 17, Sen-Kapur-Jau-Byers-Marolia-Devireddy teaches wherein: the computing system further comprises an Ethernet switch, and the receiving of the RDMA request comprises receiving the RDMA request through the Ethernet switch. Devireddy teaches a plurality of racks 210 that may include one or more top-of-rack (TOR) switches 215 ([0025-0029]). Further, the combination of Sen-Kapur-Byers-Marolia teaches that the cache-coherent switch is configured to receive the remote direct memory access (RDMA) requests through the ToR Ethernet switch (Devireddy) and through the network interface circuit, and to send RDMA responses through the ToR Ethernet switch and through the network interface circuit. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sen-Kapur-Jau-Byers-Marolia with Devireddy's system, which includes a TOR CXL switch, for use in a workstation or server, for controlling traffic flow and ordering of packet data between peripheral devices such as a network interface controller or storage interface.
As per claim 18, Sen-Kapur-Jau-Byers-Marolia-Devireddy teaches further comprising: receiving, by the cache-coherent switch (Sen, transmitting data between the device and network, 550, [0074], Fig. 5), a read command, from the stored-program processing circuit, for a first memory address; translating, by the cache-coherent switch, the first memory address to a second memory address (Sen-Kapur-Jau-Byers – note rejection for claim 1; Devireddy; a remote direct memory access (RDMA) request, Sen-Kapur-Jau-Byers-Marolia – note rejection for claim 1; Sen [0016, 0031-0032, 0074]; Byers [0003], [0013], [0028-0029]); sending, by the cache-coherent switch, an RDMA response (Sen-Kapur-Jau-Byers-Marolia – note rejection for claim 1; Sen [0016, 0031-0032, 0074]; Byers [0003], [0013], [0028-0029]); and retrieving, by the cache-coherent switch (cache-coherent invalidating, Sen [0112]), data from the first memory module at the second memory address.
As per claim 19, Sen-Kapur-Jau-Byers-Marolia-Devireddy teaches further comprising: receiving data, by the cache-coherent switch (Sen, transmitting data between the device and network, 550, [0074], Fig. 5); storing, by the cache-coherent switch (Sen [0030]), the data in the first memory module; and sending, by the cache-coherent switch, to the stored-program processing circuit, a command for invalidating (Sen [0112]) a cache line.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The examiner requests, in response to this office action, that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.

When responding to this office action, applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tammara Peyton whose telephone number is (571) 272-4157. The examiner can normally be reached between 8:30-6:00 from Monday to Thursday (I am off every first Friday), and 7:30-4:00 every second Friday. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henry Tsai, can be reached at (571) 272-4176. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Any inquiry of a general nature or relating to the status of this application should be directed to the Group receptionist whose telephone number is (571) 272-2100.
/TAMMARA R PEYTON/
Primary Examiner, Art Unit 2184
April 4, 2026