Prosecution Insights
Last updated: April 19, 2026
Application No. 18/284,704

PROCESS COMMUNICATION METHODS AND APPARATUSES

Non-Final OA: §102, §103, §112

Filed: Sep 28, 2023
Examiner: KIM, DONG U
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (610 granted / 702 resolved; +31.9% vs TC avg) — grants above average
Interview Lift: +13.7% in resolved cases with an interview vs. without (moderate, ~+14% lift)
Typical Timeline: 2y 10m average prosecution; 35 applications currently pending
Career History: 737 total applications across all art units

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 28.0% (-12.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 702 resolved cases.

Office Action

Rejections under §102, §103 and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-8, 10-16, 18 and 24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 (similarly claim 24) recites the limitation "the to-be-processed data". There is insufficient antecedent basis for this limitation in the claim. It is unclear whether "the to-be-processed data" refers to "the data" or to something else.

Claim 1 recites "non-local" and "local processes". It is unclear to the examiner how the locality is determined: for example, local to a node, local to a geographical region, local to a data center, an intranet, etc.

Claim 11 (similarly claim 12) recites "anonymous-pipe communication manner". It is unclear to the examiner what particular manner would constitute an anonymous-pipe communication manner. The specification does not provide any details (other than reciting the term repeatedly); therefore, an anonymous-pipe communication manner cannot be clearly distinguished from a non-anonymous-pipe communication manner or any other communication manner.

Claim 18 recites "a preset slot function, based on the preset slot function, performing processing on the data." It is unclear to the examiner what a preset slot function is. The specification does not provide any clarity.

Claims 2-8 and 10-16 are rejected based on the rejection of the claims from which they depend.
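For orientation on the anonymous-pipe issue: in common usage an anonymous pipe is an unnamed channel shared only by inheritance, while a named pipe is addressable by a filesystem path. A minimal Python sketch of that conventional distinction (illustrative only; not drawn from the application's specification):

```python
import os

# Anonymous pipe: no filesystem name; the descriptor pair is the only handle
# to the channel, so it is typically shared by inheritance (e.g., fork()).
read_fd, write_fd = os.pipe()
os.write(write_fd, b"hello")
print(os.read(read_fd, 5))  # b'hello'

# Named pipe (FIFO): any process that knows the path can open it.
# POSIX-only; left commented out so the sketch stays self-contained.
# os.mkfifo("/tmp/demo_fifo")
```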
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8, 10, 13-16, 19, 20 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (Pub 20190349305) (hereafter Wang) in view of Pope et al. (Pub 20160246657) (hereafter Pope).

As per claim 1, Wang teaches: A method, comprising: determining a second process corresponding to data to be processed by a first process, wherein the first process and the second process are implemented based on a cross-platform application development framework; and ([Paragraph 11], According to a preferred mode, the present invention discloses a container communication method for parallel-applications, the method comprises: when a first process of a first container has to communicate with a second process of a second container and the first and second containers are in the same host machine, the host machine creating a second channel that is different from a TCP (Transmission Control Protocol)-based first channel between the first container and the second container; [Paragraph 7], Some researchers suggest a shared-memory based communication method between containers. The known method provides a communication framework based on client-server models. In order to utilize this framework, container communication through a shared memory is only possible when source codes of the application have been modified and then translated based on the communication framework. [Paragraph 67], All the existing application programs capable of TCP-based communication may be accelerated using the method of the present invention without any modification, making the present invention more practical and operable, and more desirable and acceptable to developers.)

when the second process is not on a first processing node, according to a first communication manner, sending the data to the second process and enabling the second process to process the data, wherein the first communication manner indicates a communication manner between non-local processes; ([Paragraph 11], quoted above; [Paragraph 19], The system comprises: at least one processor and at least one computer-readable storage medium, which are configured for: when a first process of a first container has to communicate with a second process of a second container and the first and second containers are in the same host machine, the host machine creating a second channel that is different from a TCP-based first channel between the first container and the second container; the first container sending communication data of communication between the first process and the second process to a shared memory area assigned to the first container and/or the second container by the host machine and sending metadata of the communication data to the second container through the first channel; and when the second process acknowledges receiving the communication data based on the received metadata, the first container transmitting the communication data to the second container through the second channel and the second process feeding acknowledgement of the data back to the first process through the first channel.)
when the second process is on the first processing node, according to a second communication manner, sending the to-be-processed data to the second process and enabling the second process to process the data, wherein the second communication manner indicates a communication manner between local processes. ([Paragraph 11] and [Paragraph 19], quoted above.)

Although Wang implicitly discloses indicating a communication manner ([Paragraph 33], metadata describing attributes of communication data), Wang does not explicitly disclose indicating a communication manner. Pope teaches this. ([Paragraph 49], a method for processing data to be transmitted over a network wherein the network is such that data is transmitted according to a data transfer protocol from any of a plurality of destination identities, the method comprising the steps of: ... [Paragraph 50], According to a fifth aspect of the present invention there is provided a data processing system arranged for receiving over a network, according to a data transfer protocol, groups of data each directed to any of a plurality of destination identities, the data processing system comprising: a plurality of buffers for storing groups of data received over the network; a processing arrangement for performing processing in accordance with the data transfer protocol on received data in the buffers, for making the received data available to respective destination identities; and a controller arranged to, in dependence on the destination identity to which the group is directed, select for each received group of data, one of the plurality of buffers in which to store the group of data, and to store the group in the selected buffer prior to processing of the group by the processing arrangement in accordance with the data transfer protocol. [Paragraph 58], The driver 12 can thereby be informed when new L5 data is available, and can perform protocol processing on the new data or wake the relevant application. Preferably the memory mapping between the OS and the L5 stack is read only, to avoid corruption of data held in the OS by the stack 5. [Paragraph 83], Protocol processing of received data directed to active sockets could be triggered by receipt of the data at the NIC or the event queues. Alternatively, protocol processing of data directed to an active socket could be triggered by the receipt at the message former of a poll( ) call requesting a response for that socket. [Paragraph 93], A further typical function of the operating system is processing data that is either received at or to be transmitted from the device. Such data typically requires processing in accordance with a data transfer protocol, such as TCP.)

It would have been obvious to a person with ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Wang, wherein processes within corresponding nodes (i.e., containers) communicate by communicating to-be-processed data and processing the data received via a proxy thread (i.e., socket programming/driver interface), with the teachings of Pope, wherein a communication manner is indicated, because indicating the communication manner allows the proper communication protocol to be identified for processing. [Pope paragraph 93]
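The locality-based dispatch the examiner maps onto claim 1 (a local branch and a non-local branch) can be pictured with a minimal sketch. This is an editorial illustration, not the applicant's or the references' code; send_local and send_remote are hypothetical placeholders for the claimed second and first communication manners:

```python
import socket

LOCAL_NODE = socket.gethostname()  # name of the "first processing node"

def send_to_process(data, target_node, send_local, send_remote):
    """Route data by process locality, mirroring the claim's two branches."""
    if target_node == LOCAL_NODE:
        send_local(data)    # e.g., shared memory, named pipe, message queue
    else:
        send_remote(data)   # e.g., TCP or WebSocket to the remote node
```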
As per claim 2, rejection of claim 1 is incorporated: Wang teaches wherein according to the first communication manner, sending the data to the second process comprises: determining a first target proxy thread, wherein the first target proxy thread is configured to serve as a proxy for a communication service between the first process and the second process; and sending the data to the first target proxy thread, and enabling the first target proxy thread to send the data to the second process. ([Paragraph 15], According to a preferred mode, the step of the first container sending the communication data to the shared memory area assigned to the first container and/or the second container by the host machine comprises: identifying a status that the first process sends the communication data to a kernel through the socket programming interface, and then a driver interface copying the communication data from the kernel to the shared memory area. In this way, the need of modifying source codes of the first process can be eliminated, thereby ensuring good compatibility and practicality of the present invention.)

As per claim 3, rejection of claim 2 is incorporated: Wang teaches wherein the first target proxy thread is a proxy thread corresponding to the second process, and is created in a case that a connection between the second process and the first process is established, wherein the first target proxy thread indicates a first socket corresponding to the second process; and sending the data to the first target proxy thread and enabling the first target proxy thread to send the data to the second process comprises: sending the data to a proxy thread corresponding to the second process, and enabling the proxy thread corresponding to the second process to send the data to the second process based on the first socket. ([Paragraph 15], quoted above; [Paragraph 51], According to a preferred mode, the step of the first container C1 sending the communication data to the shared memory area assigned to the first container C1 and/or the second container C2 by the host machine HM1 comprises: identifying a status that the first process P1 sends the communication data to a kernel through the socket programming interface, and then a driver interface copying the communication data from the kernel to the shared memory area. [Paragraph 73], According to a preferred mode, Step S1 may comprise at least one of the following sub-steps: for every container in which high-performance parallel applications (hereinafter referred to as HPPAs) are run, the applications being run in the Linux kernel loaded with the PCI expansion card 10B, wherein each of the high-performance parallel applications is preferably written based on the MPI (Message Passing Interface); and when the process of one HPPA in a container needs to communicate with the process in another container, the processes of the HPPAs in different containers communicating through the socket and the processes of the local HPPAs creating the socket connection by calling the socket API (Application Programming Interface). Each container has a dedicated network Namespace and the TCP (Transmission Control Protocol)/IP (Internet Protocol) network protocol stacks which are separated with each other.)
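The per-connection proxy thread of claims 2-3 (a thread holding the second process's socket and forwarding whatever data is handed to it) can be sketched as follows; the queue-based handoff is an assumption made for illustration, not something either reference specifies:

```python
import queue
import socket
import threading

def proxy_thread(conn: socket.socket, outbox: "queue.Queue[bytes]") -> None:
    """Forward queued data to the connected (second) process via its socket."""
    while True:
        data = outbox.get()
        if data is None:      # sentinel value shuts the proxy down
            break
        conn.sendall(data)    # deliver over the "first socket"

# Usage after accept(): one outbox and one proxy thread per connected process.
# outbox = queue.Queue()
# threading.Thread(target=proxy_thread, args=(conn, outbox), daemon=True).start()
```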
As per claim 4, rejection of claim 3 is incorporated: Wang teaches further comprising: starting a transmission control protocol (TCP) service, and when the second process connects to the TCP service, creating the first socket corresponding to the second process; or in response to receiving a protocol Websocket connection request for full duplex communication sent by the second process, creating a first socket corresponding to the second process; and based on the first socket, creating a proxy thread corresponding to the second process. ([Paragraph 15], [Paragraph 51] and [Paragraph 73], quoted above for claims 2 and 3.)
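Claim 4's first alternative (a TCP service where each accepted connection yields the "first socket" and a proxy thread) reduces to a standard accept loop. A hedged sketch, with illustrative host/port and the WebSocket alternative omitted:

```python
import socket
import threading

def serve(handle_connection, host: str = "127.0.0.1", port: int = 50007) -> None:
    """Accept connections; each accepted socket gets its own proxy thread."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    while True:
        conn, _addr = srv.accept()                  # "first socket" per process
        threading.Thread(target=handle_connection,  # per-connection proxy thread
                         args=(conn,), daemon=True).start()
```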
As per claim 5, rejection of claim 2 is incorporated: Wang teaches wherein the first communication manner comprises a user datagram protocol (UDP) communication manner, the first target proxy thread is a universal proxy thread corresponding to the UDP communication manner, the universal proxy thread is configured to serve as a proxy for a data transmission service when the first process communicates with other processes based on the UDP communication manner, the UDP communication manner indicates a second socket, and the second socket is configured to send data; and sending the data to the first target proxy thread and enabling the first target proxy thread to send the data to the second process comprise: sending the data to the universal proxy thread and enabling the universal proxy thread to send the data to the second process based on the second socket. ([Paragraph 15], [Paragraph 51] and [Paragraph 73], quoted above for claims 2 and 3.) Wang further teaches utilizing not only TCP but any other mature/existing communication protocol such as UDP. ([Paragraph 35], Preferably, the second channel may be based on a mature, existing communication protocol. People skilled in the art may make selection from existing communication protocols according to practical needs.) Pope teaches the same. ([Paragraph 375], Preferably the network protocol is TCP/IP. Alternatively it may be UDP/IP or any other suitable protocol, including non-IP protocols.)

As per claim 6, rejection of claim 5 is incorporated: Pope teaches wherein the UDP communication manner further indicates a third socket, wherein the third socket is configured to receive data; and the method further comprises: based on the third socket, receiving first data sent by other processes. ([Paragraph 59], However it is possible for one library instance to manage a number of event queues. Since one transport library is capable of supporting a large number of sockets (i.e. application level connections), it can therefore occur that a single queue contains data relating to a number of network endpoints, and thus a single queue can contain data relating to a number of file descriptors. [Paragraph 395], The protocol processing (typically TCP/IP and UDP/IP protocol processing) of raw received data and of traffic data that is to be transmitted is performed in response to requests from applications rather than in response to the receipt of data. This can reduce the need for context switching both between user and kernel context or between threads in a user-level library. Multiple blocks of raw data can be received and stored in the data buffers, but protocol processing need not be performed after each one arrives.)
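Claims 5-6 split the universal UDP proxy's roles across two sockets: one for sending (the "second socket", shared for every destination) and one bound socket for receiving (the "third socket"). A minimal sketch of that two-socket split; the address is illustrative:

```python
import socket

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # "second socket"
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # "third socket"
recv_sock.bind(("127.0.0.1", 50008))

def udp_send(data: bytes, peer) -> None:
    """One sending socket serves every destination, UDP being connectionless."""
    send_sock.sendto(data, peer)

def udp_recv(bufsize: int = 65535) -> bytes:
    """Receive data from any other process on the bound socket."""
    data, _addr = recv_sock.recvfrom(bufsize)
    return data
```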
As per claim 7, rejection of claim 1 is incorporated: Pope teaches wherein according to the first communication manner, sending the data to the second process comprises: in response to determining that a target message queue is not full, obtaining a production function address; and based on the production function address, calling a corresponding production function to write the data to the target message queue, enabling the second process to read the data from the target message queue, wherein the production function indicates a write operation on the data. ([Paragraph 84], It will be understood that the example described above could be applied to queues for outgoing as well as incoming data. In an embodiment of the invention in this situation, data waits in buffers to be protocol processed, after which it is passed to one or more transmit queues from where it can be transmitted over the network. A socket related to an application may have further data which it wished to pass to the buffers in order that it can be sent over the network, but such further data cannot efficiently be sent unless there is sufficient capacity in the transmit queues for the data to enter when it has been processed. [Paragraph 89], The socket connects the application to remote entities by means of a network protocol, in this example TCP/IP. The application can send and receive TCP/IP messages by opening a socket and reading and writing data to and from the socket, and the operating system causes the messages to be transported across the network. For example, the application can invoke a system call (syscall) for transmission of data through the socket and then via the operating system to the network. Syscalls can be thought of as functions taking a series of arguments which cause execution of the CPU to switch to a privileged level and start executing the operating system. A given syscall will be composed of a specific list of arguments, and the combination of arguments will vary depending on the type of syscall. [Paragraph 383], FIG. 21 shows a library 14 which implements an API. The library provides a set of functions that can be called by applications. The functions include functions for transmitting and receiving data.)

As per claim 8, rejection of claim 7 is incorporated: Pope teaches further comprising: locking the target message queue. ([Paragraph 84], quoted above; [Paragraph 32], When a select( ) or poll( ) call is triggered by an application, providing an up-to-date response requires that new data received at an event queue 31 has been validated. In the case of user-level stacks such as the stack 5 of FIG. 2, performing a poll( ) call on new data in a stack can give rise to a high processing overhead. This is due to lock contention (caused by the fact that the stack requires access to shared memory for the validation processing to be carried out) and the requirement for all of the new data in the event queues 31-33 to be processed before it can be recognised for the purpose of a response to a poll( ) call. Thus, in the example of a TCP stack, TCP processing must be carried out on all data in an incoming event queue which may be relevant to the set of file descriptors referenced by the poll( ) call, for a valid response to the poll( ) call to be returned. [Paragraph 90], Syscalls made by applications in a computer system can indicate a file descriptor (sometimes called a handle), which is usually an integer number that identifies an open file within a process. A file descriptor is obtained each time a file is opened or a socket or other resource is created.)
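The producer side of claims 7-8 (lock the target message queue, check it is not full, then call the production function to write) can be sketched as follows. A looked-up callable stands in for the claimed "production function address"; the capacity and names are assumptions:

```python
import threading
from collections import deque

CAPACITY = 64
queue_lock = threading.Lock()
message_queue: deque = deque()

def production_function(q: deque, data: bytes) -> None:
    q.append(data)                      # the claimed "write operation"

def produce(data: bytes) -> bool:
    with queue_lock:                    # claim 8: lock the target queue
        if len(message_queue) >= CAPACITY:
            return False                # claim 7: write only when not full
        write = production_function     # stand-in for the function address
        write(message_queue, data)
        return True
```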
As per claim 10, rejection of claim 1 is incorporated: Pope teaches wherein according to the second communication manner, sending the data to the second process comprises: obtaining a handle of a target communication object, and based on the handle of the target communication object, sending the data to the second process. ([Paragraph 84] and [Paragraph 32], quoted above; [Paragraph 90], Syscalls made by applications in a computer system can indicate a file descriptor (sometimes called a handle), which is usually an integer number that identifies an open file within a process. A file descriptor is obtained each time a file is opened or a socket or other resource is created. File descriptors can be re-used within a computer system, but at any given time a descriptor uniquely identifies an open file or other resource. Thus, when a resource (such as a file) is closed down, the descriptor will be destroyed, and when another resource is subsequently opened the descriptor can be re-used to identify the new resource. Any operations which for example read from, write to or close the resource take the corresponding file descriptor as an input parameter.)

As per claim 13, rejection of claim 10 is incorporated: Pope teaches wherein the target communication object comprises a form of the second process; and wherein based on the handle of the target communication object, sending the data to the second process comprises: obtaining a form name of the second process, and determining a form handle of the second process corresponding to the form name; and based on the form handle, sending the data to the form of the second process. ([Paragraph 84], [Paragraph 32] and [Paragraph 90], quoted above.)

As per claim 14, rejection of claim 1 is incorporated: Pope teaches wherein according to the second communication manner, sending the data to the second process comprises: obtaining a local socket corresponding to the second process, wherein the local socket corresponding to the second process is created when the second process is connected to a named pipe created by the first process; and based on the local socket corresponding to the second process, sending the data to the second process. ([Paragraph 23], FIG. 1 represents equipment capable of implementing a prior art protocol stack, such as a transmission control protocol (TCP) stack in a computer connected to a network. The equipment includes an application 1, a socket 2 and an operating system 3 incorporating a kernel 4. The socket connects the application to remote entities by means of a network protocol, in this example TCP/IP. The application can send and receive TCP/IP messages by opening a socket and reading and writing data to and from the socket, and the operating system causes the messages to be transported across the network. For example, the application can invoke a system call (syscall) for transmission of data through the socket and then via the operating system to the network. [Paragraph 24], Syscalls made by applications in a computer system can indicate a file descriptor (sometimes called a handle), which is usually an integer number that identifies an open file within a process. A file descriptor is obtained each time a file is opened or a socket or other resource is created. File descriptors can be re-used within a computer system, but at any given time a descriptor uniquely identifies an open file or other resource.)
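Claim 14's pattern (a named endpoint created by the first process; a per-connection local socket created when the second process connects) can be sketched with a Unix-domain socket standing in for the named endpoint; on Windows a named pipe would play that role. The path is illustrative:

```python
import os
import socket

PATH = "/tmp/demo_local_endpoint"  # hypothetical name for the local endpoint

def first_process_listen() -> None:
    """Create the named endpoint, then use the accepted local socket to send."""
    if os.path.exists(PATH):
        os.unlink(PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)  # POSIX-only
    srv.bind(PATH)            # named endpoint created by the first process
    srv.listen(1)
    conn, _ = srv.accept()    # "local socket corresponding to the second process"
    conn.sendall(b"data for the second process")
    conn.close()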
As per claim 15, rejection of claim 1 is incorporated: Pope teaches wherein according to the second communication manner, sending the data to the second process comprises: writing the data to a target data bus and enabling a data bus service corresponding to the target data bus to determine a response function bound to the data; and when the second process is bound to the response function, sending the data to the second process. ([Paragraph 23] and [Paragraph 24], quoted above for claim 14; [Paragraph 45], The said making the received data available to respective destination identities could comprise passing the data from the data storage to one or more buffers associated with the respective destination identities. [Paragraph 47], trigger processing by the first processing arrangement in accordance with the protocol on only the identified data; and subsequently form a response based at least partly on the result of the triggered processing, wherein the response is formed so as to comprise a positive indication of availability of data for transmission for a destination identity of the group if the triggered processing caused data from the respective destination identity to be made available for transmission over the network. [Paragraph 181], If those checks are satisfied then it transmits the data to the remote terminal. At the remote terminal the remote NIC looks up the address to issue on its IO bus in order to store the received data from its buffer table.)
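The data-bus idea in claim 15 (writing to a bus triggers whatever response function is bound to the data, delivering it to the second process) is essentially a publish/subscribe callback registry. A minimal in-process sketch under that reading; a real cross-process bus service would sit behind the same interface:

```python
from collections import defaultdict
from typing import Callable, Dict, List

_bindings: Dict[str, List[Callable[[bytes], None]]] = defaultdict(list)

def bind(topic: str, response_function: Callable[[bytes], None]) -> None:
    """Bind a response function to a class of data (topic names are assumed)."""
    _bindings[topic].append(response_function)

def write_to_bus(topic: str, data: bytes) -> None:
    """The bus service invokes every response function bound to the data."""
    for fn in _bindings[topic]:
        fn(data)

bind("sensor", lambda d: print("second process got", d))
write_to_bus("sensor", b"\x01\x02")
```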
As per claim 16, rejection of claim 1 is incorporated: Wang teaches wherein according to the second communication manner, sending the data to the second process comprises: determining an idle shared-memory region; and locking the idle shared-memory region, writing the data into the idle shared-memory region, and unlocking the idle shared-memory region and enabling the second process to read the data from the idle shared-memory region. ([Paragraph 6], To address the low efficiency of cross-process communication in a virtualized environment of high performance parallel-application containers, proposals to modify the MPI library so that it can detect adjacent containers in the same host machine have been made. Such a method allows communication among MPI processes of different container to be performed using a shared memory instead of network channel communication by default, thereby to certain degrees, improving computation efficiency of the high-performance parallel application systems. [Paragraph 7], Some researchers suggest a shared-memory based communication method between containers. The known method provides a communication framework based on client-server models. In order to utilize this framework, container communication through a shared memory is only possible when source codes of the application have been modified and then translated based on the communication framework. Although this existing method improves communication efficiency in the same host machine to some extent, it is less compatible and less operable, and is not practical because it costs too much to modify the many existing codes.) Pope teaches locking shared memory. ([Paragraph 32], This is due to lock contention (caused by the fact that the stack requires access to shared memory for the validation processing to be carried out) and the requirement for all of the new data in the event queues 31-33 to be processed before it can be recognised for the purpose of a response to a poll( ) call.)
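Claim 16's lock/write/unlock sequence over a shared-memory region can be sketched with Python's multiprocessing primitives; real systems may use OS-level semaphores instead, and the region name and lock sharing are illustrative assumptions:

```python
from multiprocessing import Lock, shared_memory

# In practice the lock must be shared with the reader process (e.g., by
# inheritance); a single-process Lock is used here only to show the sequence.
region_lock = Lock()

def write_shared(name: str, data: bytes) -> None:
    """Attach to a previously created idle region; lock, write, unlock."""
    shm = shared_memory.SharedMemory(name=name)
    with region_lock:                  # lock -> write -> unlock
        shm.buf[: len(data)] = data    # second process reads this region later
    shm.close()
```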
As per claim 19, rejection of claim 17 is incorporated: Although Wang teaches reading/writing of communication data ([Paragraph 8]), Wang does not explicitly disclose wherein receiving the data sent by the first process comprises: obtaining a consumption function address; and based on the consumption function address, calling a corresponding consumption function to read the data from a target message queue, wherein the consumption function indicates a read operation on the data. Pope teaches this limitation. ([Paragraph 76], Once a socket (or, more precisely, a file descriptor representing the socket, which is typically itself associated with a network endpoint) enters the poll cache 40, it is monitored by means of a thread (a process or a part of a process) dedicated to the poll cache. The thread runs on the OS. The monitoring involves checking periodically for each descriptor within the poll cache whether there is any data on the corresponding event queue 31-33 which is awaiting processing. If there is data on an event queue, then the dedicated thread will perform TCP processing on the data such that the data becomes available to be read by the associated application. In the example shown in FIG. 4, descriptors X, Y and Z are being held in the poll cache. When the thread in the OS monitors for new data relating to descriptor X it identifies a block of data in event queue 31. The thread proceeds to perform protocol processing on this block of data, and the processed data is passed to a receive queue 34. The data is then available to be read from the receive queue by the application for which it is intended.)

It would have been obvious to a person with ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Wang, wherein processes within corresponding nodes (i.e., containers) communicate by communicating to-be-processed data and processing the data received via a proxy thread (i.e., socket programming/driver interface), with the teachings of Pope, wherein queues are used for the transmission of communication data for writing/reading, because utilizing queues to read/write enhances Wang by allowing downstream processes to read/write when the communication data is available by monitoring the corresponding queue. [Pope paragraph 76]

As per claim 20, rejection of claim 17 is incorporated: Pope teaches further comprising: locking the target message queue; and in response to determining that the target message queue is not empty, obtaining the consumption function address. ([Paragraph 84], [Paragraph 32] and [Paragraph 90], quoted above for claims 7 and 8.)

As per claim 24, this is a non-transitory computer-readable storage medium claim corresponding to method claim 1. It is therefore rejected on a similar rationale.
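The consumer side of claims 19-20 mirrors the producer sketch above: lock the target message queue, and only if it is not empty obtain the "consumption function" (again a looked-up callable standing in for the claimed function address) to read the data:

```python
import threading
from collections import deque
from typing import Optional

queue_lock = threading.Lock()
message_queue: deque = deque()

def consumption_function(q: deque) -> bytes:
    return q.popleft()                  # the claimed "read operation"

def consume() -> Optional[bytes]:
    with queue_lock:                    # claim 20: lock the target queue
        if not message_queue:           # claim 20: proceed only when not empty
            return None
        read = consumption_function     # stand-in for the function address
        return read(message_queue)
```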
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 17 and 18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wang.

As per claim 17, Wang teaches: A method, comprising: receiving data sent by a first process, wherein the data is sent by the first process based on a first communication manner or a second communication manner; and performing processing on the data. ([Paragraph 11] and [Paragraph 7], quoted above for claim 1; [Paragraph 16], According to a preferred mode, the step of transmitting the communication data to the second container through the second channel is achieved by calling the driver interface to copy the communication data to a process space of the second process. In this way, the need of modifying source codes of the second process can be eliminated, thereby ensuring good compatibility and practicality of the present invention.)

As per claim 18, rejection of claim 17 is incorporated: Wang teaches further comprising: when the data is bound to a preset slot function, based on the preset slot function, performing processing on the data. ([Paragraph 5], As is known to all, typical high-performance parallel application systems are implemented based on MPIs (Message Passing Interfaces) programming models. In a traditional case with a physical machine, for optimizing process efficiency of multiple MPIs in the same host machine, an MPI library is used to provide two process information transmission channels, i.e., Shared Memory and Cross Memory Attach, so as to optimize the efficiency of message passing across different processes in the same host machine.)
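On claim 18's "preset slot function" (which the §112 rejection finds indefinite): in Qt-style cross-platform frameworks a slot is a callback connected to a signal ahead of time, so data arriving on the signal is processed by the bound slot. A pure-Python stand-in of that plausible reading, with no Qt dependency assumed:

```python
from typing import Callable, List

class Signal:
    """Minimal signal/slot registry in the style of Qt's mechanism."""
    def __init__(self) -> None:
        self._slots: List[Callable[[bytes], None]] = []

    def connect(self, slot: Callable[[bytes], None]) -> None:
        self._slots.append(slot)        # "preset" binding, before any delivery

    def emit(self, data: bytes) -> None:
        for slot in self._slots:        # each bound slot processes the data
            slot(data)

data_received = Signal()
data_received.connect(lambda d: print("processed", d))
data_received.emit(b"payload")
```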
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONG U KIM, whose telephone number is (571) 270-1313. The examiner can normally be reached 9:00am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bradley Teets, can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DONG U KIM/
Primary Examiner, Art Unit 2197

Prosecution Timeline

Sep 28, 2023
Application Filed
Feb 24, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596564: PRE-LOADING SOFTWARE APPLICATIONS IN A CLOUD COMPUTING ENVIRONMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596594: REINFORCEMENT LEARNING POLICY SERVING AND TRAINING FRAMEWORK IN PRODUCTION CLOUD SYSTEMS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591760: CROSS-INSTANCE INTELLIGENT RESOURCE POOLING FOR DISPARATE DATABASES IN CLOUD NATIVE ENVIRONMENT (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591449: Merging Streams For Call Enhancement In Virtual Desktop Infrastructure (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586064: BLOCKCHAIN PROVISION SYSTEM AND METHOD USING NON-COMPETITIVE CONSENSUS ALGORITHM AND MICRO-CHAIN ARCHITECTURE TO ENSURE TRANSACTION PROCESSING SPEED, SCALABILITY, AND SECURITY SUITABLE FOR COMMERCIAL SERVICES (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get these applications past this examiner; based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+13.7%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 702 resolved cases by this examiner. Grant probability is derived from the career allow rate.
