DETAILED ACTION
This application has been examined. Claims 1-18 are pending.
In order to facilitate communication with the Examiner and to expedite prosecution of the instant application, the Applicant is requested to submit written authorization for the USPTO to communicate via electronic mail. The written authorization must be compliant with the language of MPEP § 502.03.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
This application claims the benefit of priority from PCT Application PCT/JP2019/050426 (JAPAN), filed December 23, 2019.
The effective filing date of the claims of this application is December 23, 2019.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 11/12/2024 and 11/14/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Huang (USPGPUB 20080148291) in view of Kobayashi (USPGPUB 20200089525).
Regarding Claim 1
Huang, Paragraph 10, disclosed a multi-tasking operating system that includes a user space, a kernel in a kernel space, a receive buffer, and a plurality of application processes, each of the plurality of application processes including a user application that runs in the user space. The method may include steps of polling the receive buffer from a user polling function that runs in the kernel space.
Huang disclosed (re. Claim 1) a device comprising: computing hardware including one or more central processing units (CPUs) (Huang-Paragraph 6, HPC systems are multi-processor systems with a high degree of inter-processor communication); and an operating system (OS) implemented on the computing hardware and comprising a kernel, wherein the device is configured to perform packet processing according to a polling model in the kernel of the OS. (Huang-Figure 5; Paragraph 51, multitasking operating system such as Linux; Paragraph 120, kernel polling thread)
While Huang substantially disclosed the claimed invention, Huang does not disclose (re. Claim 1) a New API (NAPI) polling model in the kernel of the OS.
Kobayashi Paragraph 59 disclosed wherein NAPI operates by polling while a frame exists in a queue, and shifts to a state waiting for an interrupt when a queue becomes empty. Consequently, while high-speed frame transfer is realized by polling, when no frame exists in a queue, CPU resources can be allocated to another processing (task).
Kobayashi disclosed (re. Claim 1) a New API (NAPI) polling model in the kernel of the OS. (Kobayashi-Paragraph 59, NAPI operates by polling while a frame exists in a queue, and shifts to a state waiting for an interrupt when a queue becomes empty. Consequently, while high-speed frame transfer is realized by polling, when no frame exists in a queue, CPU resources can be allocated to another processing (task).)
Huang and Kobayashi are analogous art because they present concepts and practices regarding packet data transfer mechanisms. Before the effective filing date of the claimed invention, it would have been obvious to combine Kobayashi into Huang. The motivation for this combination would have been to enable task execution time to be strictly managed, so that unpredictable fluctuations in processing time (fluctuations in processing time hard to estimate due to a context switch, for example) can be eliminated. This makes it possible to guarantee a worst-case delay. (Kobayashi-Paragraph 140)
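For context only, an illustrative sketch (not part of the prosecution record): the NAPI behavior quoted from Kobayashi, Paragraph 59, polling while frames remain in a queue and shifting to an interrupt-wait state once the queue is empty, can be modeled in user space as follows. All names and structure here are hypothetical.

```python
from collections import deque

# Hypothetical user-space model of the behavior described in Kobayashi,
# Paragraph 59: poll while a frame exists in the queue, then shift to an
# interrupt-wait state once the queue becomes empty.
def napi_poll(rx_queue):
    """Drain the queue by polling; return the number of frames processed."""
    processed = 0
    while rx_queue:              # poll while a frame exists in the queue
        rx_queue.popleft()       # "process" one frame
        processed += 1
    # Queue empty: a real driver would now re-enable the device interrupt
    # and yield the CPU to other tasks until the next packet arrives.
    return processed

rx_queue = deque(range(5))       # five frames waiting in the receive queue
assert napi_poll(rx_queue) == 5  # all frames drained by polling
assert not rx_queue              # empty queue: back to interrupt-wait
```

The sketch only illustrates the polling/interrupt hybrid that underlies the combination rationale: CPU time is consumed by polling only while work exists.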
Regarding Claim 11
Claim 11 (re. non-transitory computer-readable medium) recites substantially similar limitations as Claim 1. Claim 11 is rejected on the same basis as Claim 1.
Regarding Claim 2
Huang-Kobayashi disclosed (re. Claim 2) wherein the kernel is configured to execute a kernel thread (Huang-Figure 5; Paragraph 120, kernel polling thread) occupying a specific CPU of the one or more CPUs in the kernel (Huang-Paragraph 59, multi-core CPUs and multiple CPUs configured as symmetric multi-processor (SMP) clusters), and wherein the kernel thread is configured to perform the packet processing according to NAPI polling model. (Kobayashi-Paragraph 59, NAPI operates by polling while a frame exists in a queue, and shifts to a state waiting for an interrupt when a queue becomes empty. Consequently, while high-speed frame transfer is realized by polling, when no frame exists in a queue, CPU resources can be allocated to another processing (task).)
Claims 3-4, 6, 9-10, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Huang (USPGPUB 20080148291) in view of Kobayashi (USPGPUB 20200089525), further in view of Singh (USPGPUB 20190280991).
Regarding Claim 3
While Huang-Kobayashi substantially disclosed the claimed invention, Huang-Kobayashi does not disclose (re. Claim 3) wherein the kernel comprises a ring buffer for storing arrived packets.
Singh, Figure 2B, Figure 2C, Paragraph 40-41, disclosed virtual switch 270 for allocation of packets from NIC 200 to descriptor rings 220-0 to 220-n and memory pools 224-0 to 224-n.
Singh disclosed (re. Claim 3) wherein the kernel comprises a ring buffer for storing arrived packets. (Singh-Figure 2B, Figure 2C, Paragraph 40-41, virtual switch 270 for allocation of packets from NIC 200 to descriptor rings 220-0 to 220-n and memory pools 224-0 to 224-n.)
Huang and Singh are analogous art because they present concepts and practices regarding packet data transfer mechanisms. Before the effective filing date of the claimed invention, it would have been obvious to combine Singh into Huang-Kobayashi. The motivation for this combination would have been to enable a CPU core or cores to perform non-uniform polling (e.g., non-uniform CPU resource allocation) using a scheduling mechanism such as weighted round robin or strict priority. A polling rate can be adjusted based on priority level of packets or volume of packets. (Singh-Paragraph 22)
Huang-Kobayashi-Singh disclosed (re. Claim 3) wherein the kernel thread is configured to: monitor packet arrivals to the ring buffer according to NAPI polling model (Kobayashi-Paragraph 59, NAPI operates by polling while a frame exists in a queue, and shifts to a state waiting for an interrupt when a queue becomes empty. Consequently, while high-speed frame transfer is realized by polling, when no frame exists in a queue, CPU resources can be allocated to another processing (task).) and, when an arrival of a packet to the ring buffer is detected by the kernel thread (Huang-Paragraph 56, When the kernel polling thread detects new data arriving), retrieve, from the ring buffer, the packet whose arrival to the ring buffer is detected. (Huang-Figure 5; Paragraph 120, kernel polling thread; Paragraph 114, If a data packet is thus indicated to be available in the hardware, the payload data of the packet is copied to the application memory space in the next step)
Regarding Claim 4
Huang-Kobayashi-Singh disclosed (re. Claim 4) wherein the kernel further comprises a poll list in which packet arrival information is to be registered (Kobayashi-Paragraph 127, when an entry including “transmission source” information (MAC address and VLAN ID) of the read frame does not exist in the FDB 104, the transfer destination determining unit 312 updates the FDB 104 by newly registering the entry), the packet arrival information being indicative of an arrival of a packet to the ring buffer, and wherein the kernel thread is configured to monitor the poll list to monitor the packet arrivals (Huang-Figure 5; Paragraph 120, kernel polling thread) to the ring buffer according to NAPI polling model.
Regarding Claim 6
Huang-Kobayashi-Singh disclosed (re. Claim 6) wherein the kernel thread is further configured to analyze a content of the packet retrieved from the ring buffer and assign processing to a subsequent processing part of the kernel in a manner depending on a type of the packet retrieved from the ring buffer. (Singh-Paragraph 43, If NIC 200 does not perform packet classification, then virtual switch 270 can identify the packet priority and destination VM by reading it from the packet header and put the packet into appropriate priority queues in response to polls from a destination VM)
Regarding Claim 9
While Huang-Kobayashi substantially disclosed the claimed invention, Huang-Kobayashi does not disclose (re. Claim 9) wherein the OS is a Host OS implemented on the computing hardware, on which Host OS a virtual machine and an external process formed outside the virtual machine can operate.
While Huang-Kobayashi substantially disclosed the claimed invention, Huang-Kobayashi does not disclose (re. Claim 9) wherein the Host OS comprises a ring buffer for storing arrived packets, wherein the ring buffer is managed by the kernel, in a memory space in which the Host OS is deployed.
Singh disclosed (re. Claim 9) wherein the OS is a Host OS implemented on the computing hardware, on which Host OS a virtual machine and an external process formed outside the virtual machine can operate. (Singh-Figure 2B, Figure 2C, Paragraph 44, virtual machine (VM) can be software that runs an operating system and one or more applications… backed by the physical resources of a host computing platform.)
Singh disclosed (re. Claim 9) wherein the Host OS comprises a ring buffer for storing arrived packets, wherein the ring buffer is managed by the kernel. (Singh-Figure 2B, Figure 2C, Paragraph 40-41, virtual switch 270 for allocation of packets from NIC 200 to descriptor rings 220-0 to 220-n and memory pools 224-0 to 224-n.)
Huang and Singh are analogous art because they present concepts and practices regarding packet data transfer mechanisms. Before the effective filing date of the claimed invention, it would have been obvious to combine Singh into Huang-Kobayashi. The motivation for this combination would have been to enable a CPU core or cores to perform non-uniform polling (e.g., non-uniform CPU resource allocation) using a scheduling mechanism such as weighted round robin or strict priority. A polling rate can be adjusted based on priority level of packets or volume of packets. (Singh-Paragraph 22)
Huang-Kobayashi-Singh disclosed (re. Claim 9) wherein the Host OS comprises a ring buffer for storing arrived packets (Singh-Figure 2B, Figure 2C, Paragraph 40-41, virtual switch 270 for allocation of packets from NIC 200 to descriptor rings 220-0 to 220-n and memory pools 224-0 to 224-n),
wherein the ring buffer is managed by the kernel, in a memory space in which the Host OS is deployed (Barnes-Paragraph 60, virtual computing device 210 may host a virtual computing component (e.g., a virtual image with an operating system (OS) instance) on a kernel),
wherein the kernel thread is further configured to: monitor packet arrivals to the ring buffer according to NAPI polling model; and when an arrival of a packet to the ring buffer is detected by the kernel thread, retrieve, from the ring buffer, the packet whose arrival to the ring buffer is detected.
Regarding Claim 10
Huang-Kobayashi-Singh disclosed (re. Claim 10) a device control method to be executed by a device comprising: computing hardware including one or more central processing units (CPUs); and an operating system (OS) implemented on the computing hardware and comprising a kernel, the kernel comprising a ring buffer for storing arrived packets (Singh-Figure 2B, Figure 2C, Paragraph 40-41, virtual switch 270 for allocation of packets from NIC 200 to descriptor rings 220-0 to 220-n and memory pools 224-0 to 224-n), the device control method comprising steps of:
executing, by the kernel, a kernel thread occupying a specific CPU of the one or more CPUs in the kernel; monitoring, by the kernel thread, packet arrivals to the ring buffer according to New API (NAPI) polling model (Kobayashi-Paragraph 59, NAPI operates by polling while a frame exists in a queue, and shifts to a state waiting for an interrupt when a queue becomes empty. Consequently, while high-speed frame transfer is realized by polling, when no frame exists in a queue, CPU resources can be allocated to another processing (task).); and when an arrival of a packet to the ring buffer is detected, retrieving, by the kernel thread, the packet whose arrival to the ring buffer is detected, from the ring buffer.
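For context only, an illustrative sketch (not part of the record): the recited method steps, a dedicated thread monitoring a ring buffer and retrieving each packet whose arrival it detects, can be modeled in user space as follows. The RingBuffer class and all names are hypothetical.

```python
# Hypothetical model of the claimed steps: a "kernel thread" monitors a
# ring buffer for packet arrivals and retrieves each detected packet.
class RingBuffer:
    def __init__(self, size):
        self.slots = [None] * size
        self.head = self.tail = self.count = 0

    def push(self, pkt):                 # device side: packet arrival
        if self.count == len(self.slots):
            raise BufferError("ring full")
        self.slots[self.tail] = pkt
        self.tail = (self.tail + 1) % len(self.slots)
        self.count += 1

    def pop(self):                       # thread side: packet retrieval
        if self.count == 0:
            return None                  # no arrival detected
        pkt, self.slots[self.head] = self.slots[self.head], None
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return pkt

def kernel_thread_poll(ring):
    """Monitor the ring buffer and retrieve every packet whose arrival is detected."""
    retrieved = []
    while (pkt := ring.pop()) is not None:   # arrival detected: retrieve it
        retrieved.append(pkt)
    return retrieved                         # ring drained

ring = RingBuffer(4)
for pkt in ("p0", "p1", "p2"):               # three packets arrive
    ring.push(pkt)
assert kernel_thread_poll(ring) == ["p0", "p1", "p2"]
```

CPU pinning of the thread is omitted here; the sketch shows only the monitor-and-retrieve loop over the ring buffer.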
Regarding Claim 12
Huang-Kobayashi-Singh disclosed (re. Claim 12) wherein the packet processing according to New API (NAPI) polling mode comprises steps of: executing, by the kernel, a kernel thread occupying a specific CPU of the one or more CPUs in the kernel (Huang-Paragraph 59, multi-core CPUs and multiple CPUs configured as symmetric multi-processor (SMP) clusters); monitoring, by the kernel thread, packet arrivals to the ring buffer according to NAPI polling model (Singh-Figure 2B, Figure 2C, Paragraph 40-41, virtual switch 270 for allocation of packets from NIC 200 to descriptor rings 220-0 to 220-n and memory pools 224-0 to 224-n); and when an arrival of a packet to the ring buffer is detected, retrieving, by the kernel thread, the packet whose arrival to the ring buffer is detected, from the ring buffer.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Huang (USPGPUB 20080148291) in view of Kobayashi (USPGPUB 20200089525), further in view of Barnes (USPGPUB 20180336051).
Regarding Claim 5
Huang-Kobayashi disclosed (re. Claim 5) wherein the kernel is further configured to execute a plurality of kernel threads, including the kernel thread, each occupying a respective one of the one or more CPUs and configured to perform the packet processing according to NAPI polling model. (Huang-Figure 5; Paragraph 120, kernel polling thread)
While Huang-Kobayashi substantially disclosed the claimed invention, Huang-Kobayashi does not disclose (re. Claim 5) wherein the kernel is configured to allocate CPUs to the plurality of kernel threads such that a number of the CPUs allocated to the plurality of kernel threads is varied according to an amount of incoming packets.
Barnes, Figures 4A-4B, Paragraph 60, disclosed wherein a virtual computing device 210 may host a virtual computing component (e.g., a virtual image with an operating system (OS) instance) on a kernel. The OS (e.g., OS.sub.1) may host a number of applications A.sub.1, A.sub.2, and A.sub.3 (e.g., in an application container). Additional virtual computing components may be added as capacity needs increase. For example, the virtual computing device 210 may include a virtual computing management device 215 that monitors resource utilization, such as CPU and I/O utilization (at step 1.1). At step 1.2, the virtual computing management device 215 may instantiate a new virtual computing component (e.g., a virtual image with OS.sub.2) based on the capacity needs (e.g., based on the resource utilization exceeding thresholds). The virtual computing management device 215 may host the previously instantiated virtual computing component (virtual computing component 1) and the new virtual computing component (virtual computing component 2) on the same kernel.
Barnes disclosed (re. Claim 5) wherein the kernel is configured to allocate CPUs to the plurality of kernel threads such that a number of the CPUs allocated to the plurality of kernel threads is varied according to an amount of incoming packets. (Barnes-Figures 4A-4B, Paragraph 60, Additional virtual computing components may be added as capacity needs increase. For example, the virtual computing device 210 may include a virtual computing management device 215 that monitors resource utilization, such as CPU and I/O utilization (at step 1.1). At step 1.2, the virtual computing management device 215 may instantiate a new virtual computing component (e.g., a virtual image with OS.sub.2) based on the capacity needs (e.g., based on the resource utilization exceeding thresholds). The virtual computing management device 215 may host the previously instantiated virtual computing component (virtual computing component 1) and the new virtual computing component (virtual computing component 2) on the same kernel.)
Huang and Barnes are analogous art because they present concepts and practices regarding packet data transfer mechanisms. Before the effective filing date of the claimed invention, it would have been obvious to combine Barnes into Huang-Kobayashi. The motivation for this combination would have been to identify when additional application instances or other virtual computing components are performing inadequately (e.g., as a result of the MP effect) and to redeploy additional virtual computing components on a separate kernel. As a result, application performance may be improved by preventing the MP effect from adversely affecting the application. (Barnes-Paragraph 15)
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Huang (USPGPUB 20080148291) in view of Kobayashi (USPGPUB 20200089525), further in view of Zytaruk (USPGPUB 20190332373).
Regarding Claim 7
While Huang-Kobayashi substantially disclosed the claimed invention, Huang-Kobayashi does not disclose (re. Claim 7) wherein the kernel starts the kernel thread by being applied a kernel patch without rebooting the OS.
Zytaruk Paragraph 2 disclosed wherein security patches are periodically applied to a live production kernel module code base.
Zytaruk disclosed (re. Claim 7) wherein the kernel starts the kernel thread by being applied a kernel patch without rebooting the OS. (Zytaruk-Paragraph 2, security patches are periodically applied to a live production kernel module code base; Paragraph 18, performing a live update of a kernel device module)
Huang and Zytaruk are analogous art because they present concepts and practices regarding implementation of kernel modules. Before the effective filing date of the claimed invention, it would have been obvious to combine Zytaruk into Huang-Kobayashi. The motivation for this combination would have been to update the kernel such that, during the period of time where the kernel device module 305C has been quiesced, any interrupts or operations destined for the VMs may continue to be processed unhindered. (Zytaruk-Paragraph 39)
Claims 8, 13-15, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Huang (USPGPUB 20080148291) in view of Kobayashi (USPGPUB 20200089525), further in view of Singh (USPGPUB 20190280991), further in view of Ho (USPGPUB 20160378545).
Regarding Claim 8
While Huang-Kobayashi substantially disclosed the claimed invention, Huang-Kobayashi does not disclose (re. Claim 8) wherein the device further includes a virtual machine implemented on the computing hardware, wherein the OS is a Guest OS operating on the virtual machine.
While Huang-Kobayashi substantially disclosed the claimed invention, Huang-Kobayashi does not disclose (re. Claim 8) wherein the kernel comprises a ring buffer for storing arrived packets.
Ho Paragraph 13 disclosed virtual machines (VMs), each of which may contain a guest operating system, which may be the same or different from the host OS, together with the user-space software programs to be virtualized.
Ho disclosed (re. Claim 8) wherein the device further includes a virtual machine implemented on the computing hardware, wherein the OS is a Guest OS operating on the virtual machine. (Ho-Paragraph 13, virtual machines (VMs), each of which may contain a guest operating system, which may be the same or different from the host OS, together with the user-space software programs to be virtualized.)
Huang and Ho are analogous art because they present concepts and practices regarding implementation of kernel modules. Before the effective filing date of the claimed invention, it would have been obvious to combine Ho into Huang-Kobayashi. The motivation for this combination would have been to enable the software applications in different groups to be executed in parallel on different cores of a multi-core processor. Data for processing by particular software applications, received via I/O controllers, may be directed to the cores on which the particular applications are executing in parallel. A set of resource management services selected for each particular group of related applications may be used therefor. The set of resource management services for each particular group may be based on the related requirements for resource management services of that group to reduce processing overhead and limitations by reducing mode switching, contentions, non-locality of caches, inter-cache communications, and/or kernel synchronizations during execution of software applications in the first plurality of software applications. (Ho-Paragraph 23)
Singh disclosed (re. Claim 8) wherein the kernel comprises a ring buffer for storing arrived packets. (Singh-Figure 2B, Figure 2C, Paragraph 40-41, virtual switch 270 for allocation of packets from NIC 200 to descriptor rings 220-0 to 220-n and memory pools 224-0 to 224-n.)
Huang and Singh are analogous art because they present concepts and practices regarding packet data transfer mechanisms. Before the effective filing date of the claimed invention, it would have been obvious to combine Singh into Huang-Kobayashi. The motivation for this combination would have been to enable a CPU core or cores to perform non-uniform polling (e.g., non-uniform CPU resource allocation) using a scheduling mechanism such as weighted round robin or strict priority. A polling rate can be adjusted based on priority level of packets or volume of packets. (Singh-Paragraph 22)
Huang-Kobayashi-Singh-Ho disclosed (re. Claim 8) wherein the kernel comprises a ring buffer for storing arrived packets (Singh-Figure 2B, Figure 2C, Paragraph 40-41, virtual switch 270 for allocation of packets from NIC 200 to descriptor rings 220-0 to 220-n and memory pools 224-0 to 224-n), wherein the ring buffer is managed by the kernel, in a memory space in which the Guest OS is deployed (Ho-Paragraph 13, virtual machines (VMs), each of which may contain a guest operating system, which may be the same or different from the host OS, together with the user-space software programs to be virtualized), and wherein the kernel thread is further configured to: monitor packet arrivals to the ring buffer according to NAPI polling model; and when an arrival of a packet to the ring buffer is detected by the kernel thread, retrieve, from the ring buffer, the packet whose arrival to the ring buffer is detected.
Regarding Claim 13
Huang-Kobayashi-Singh disclosed (re. Claim 13) a server delay control device deployed in a kernel of an operating system (OS) of a server (Huang-Figure 5; Paragraph 120, kernel polling thread) implemented on a computer comprising one or more central processing units (CPUs), wherein the OS comprises: the kernel; a ring buffer managed by the kernel, in a memory space in which the server deploys the OS (Singh-Figure 2B, Figure 2C, Paragraph 40-41, virtual switch 270 for allocation of packets from NIC 200 to descriptor rings 220-0 to 220-n and memory pools 224-0 to 224-n); and a poll list in which packet arrival information is to be registered, the packet arrival information being indicative of an arrival of a packet to the ring buffer (Kobayashi-Paragraph 127, when an entry including “transmission source” information (MAC address and VLAN ID) of the read frame does not exist in the FDB 104, the transfer destination determining unit 312 updates the FDB 104 by newly registering the entry), wherein the server delay control device is configured to execute a kernel thread in the kernel, the kernel thread configured to monitor a packet arrival according to New API (NAPI) polling model (Kobayashi-Paragraph 59, NAPI operates by polling while a frame exists in a queue, and shifts to a state waiting for an interrupt when a queue becomes empty. Consequently, while high-speed frame transfer is realized by polling, when no frame exists in a queue, CPU resources can be allocated to another processing (task).),
and wherein the server delay control device comprises: a packet arrival monitoring part configured to monitor from the kernel thread whether the packet arrival information has been registered in the poll list;
While Huang-Kobayashi-Singh substantially disclosed the claimed invention, Huang-Kobayashi-Singh does not disclose (re. Claim 13) a packet dequeuer configured to, when the packet arrival information has been registered in the poll list, dequeue the packet from the ring buffer on the basis of the packet arrival information.
Ho Paragraph 322 disclosed removing queue element(s) 133 from the arriving workloads buffered in ingress queue 31 in a first in, first out (FIFO) manner.
Ho disclosed (re. Claim 13) a packet dequeuer configured to, when the packet arrival information has been registered in the poll list, dequeue the packet from the ring buffer on the basis of the packet arrival information. (Ho-Paragraph 322, removing queue element(s) 133 from the arriving workloads buffered in ingress queue 31 in a first in, first out (FIFO) manner.)
Huang and Ho are analogous art because they present concepts and practices regarding implementation of kernel modules. Before the effective filing date of the claimed invention, it would have been obvious to combine Ho into Huang-Kobayashi-Singh. The motivation for this combination would have been to enable the software applications in different groups to be executed in parallel on different cores of a multi-core processor. Data for processing by particular software applications, received via I/O controllers, may be directed to the cores on which the particular applications are executing in parallel. A set of resource management services selected for each particular group of related applications may be used therefor. The set of resource management services for each particular group may be based on the related requirements for resource management services of that group to reduce processing overhead and limitations by reducing mode switching, contentions, non-locality of caches, inter-cache communications, and/or kernel synchronizations during execution of software applications in the first plurality of software applications. (Ho-Paragraph 23)
Regarding Claim 14
Huang-Kobayashi-Singh-Ho disclosed (re. Claim 14) wherein the OS is a Guest OS configured to operate in a virtual machine of the server (Ho-Paragraph 13, virtual machines (VMs), each of which may contain a guest operating system, which may be the same or different from the host OS, together with the user-space software programs to be virtualized),
and wherein the Guest OS further comprises a protocol processor configured to perform protocol processing on the packet dequeued from the ring buffer. (Huang-Paragraph 87, “Process Packet” processes the header of the received data packet to determine the target application process of the packet.)
Regarding Claim 15
Huang-Kobayashi-Singh-Ho disclosed (re. Claim 15) wherein the OS is a Host OS on which a virtual machine and an external process formed outside the virtual machine can operate (Singh-Figure 2B, Figure 2C, Paragraph 44, virtual machine (VM) can be software that runs an operating system and one or more applications… backed by the physical resources of a host computing platform), and wherein the Host OS further comprises a TAP device, which is a virtual interface created by the kernel. (Singh-Figure 2B, Figure 2C, Paragraph 41, virtual switch 270 can determine a virtual machine to process the received packet and allocate the received packet to a virtual interface among virtual interfaces 278-0 to 278-m)
Regarding Claim 17
Claim 17 (re. method) recites substantially similar limitations as Claim 13. Claim 17 is rejected on the same basis as Claim 13.
Regarding Claim 18
Claim 18 (re. non-transitory computer-readable medium) recites substantially similar limitations as Claim 13. Claim 18 is rejected on the same basis as Claim 13.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Huang (USPGPUB 20080148291) in view of Kobayashi (USPGPUB 20200089525), further in view of Singh (USPGPUB 20190280991), further in view of Ho (USPGPUB 20160378545), and further in view of Zytaruk (USPGPUB 20190332373).
Regarding Claim 16
While Huang-Kobayashi-Singh substantially disclosed the claimed invention, Huang-Kobayashi-Singh does not disclose (re. Claim 16) wherein the server delay control device is deployed in the kernel to start the kernel thread by applying a kernel patch to the kernel without rebooting the OS.
Zytaruk Paragraph 2 disclosed wherein security patches are periodically applied to a live production kernel module code base.
Zytaruk disclosed (re. Claim 16) wherein the server delay control device is deployed in the kernel to start the kernel thread by applying a kernel patch to the kernel without rebooting the OS. (Zytaruk-Paragraph 2, security patches are periodically applied to a live production kernel module code base; Paragraph 18, performing a live update of a kernel device module)
Huang and Zytaruk are analogous art because they present concepts and practices regarding implementation of kernel modules. Before the effective filing date of the claimed invention, it would have been obvious to combine Zytaruk into Huang-Kobayashi-Singh-Ho. The motivation for this combination would have been to update the kernel such that, during the period of time where the kernel device module 305C has been quiesced, any interrupts or operations destined for the VMs may continue to be processed unhindered. (Zytaruk-Paragraph 39)
Conclusion
Examiner’s Note: If the claimed invention is amended, Applicant is respectfully requested to indicate the portion(s) of the specification that dictate(s) the structure relied upon for proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GREG C BENGZON whose telephone number is (571)272-3944. The examiner can normally be reached on Monday - Friday 8 AM - 4:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Follansbee can be reached on (571) 272-3964. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GREG C BENGZON/ Primary Examiner, Art Unit 2444