DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Introduction
Claims 1-5 are pending in this application. This is a non-final Office action in response to Application Number 18/687,875, filed on 29 February 2024, with a preliminary amendment also filed on 29 February 2024 in which claims 1-5 are amended. The instant application is a national stage entry under 35 U.S.C. 371 of PCT/JP2021/032313, filed on 2 September 2021. The applicant of record is Nippon Telegraph and Telephone Corporation. The inventors of record are Tomoya Hibi, Junki Ichikawa, Yukio Tsukishima, Kenji Shimizu, Hideki Nishizawa, Kiwami Inoue, and Toru Mano.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 29 February 2024 was filed on the filing date of the instant application and before the first Office action on the merits. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Interpretation
The claims have been analyzed under the latest Patent Eligibility Guidelines and are considered eligible.
With respect to the independent claims, the examiner notes the following structural support within applicant’s specification:
Claims 1 and 3 recite “a communication unit configured to perform transmission of setting information for setting the main signal communication path to a computing machine of an access destination via the control communication path” and claim 4 recites “a step of transmitting setting information for setting the main signal communication path to a computing machine of an access destination via the control communication path”. A review of the specification shows that the following appears to be the corresponding structure:
specification [0029]: “Each of the computing machines 1 and 2 includes an RDMA communication unit 11 (communication unit)…”;
specification [0030]: “The RDMA communication unit 11 is an application that performs RDMA communication. The RDMA communication unit 11 is installed by using a CPU and a memory. The RDMA communication unit 11 controls setting and cancelling of the main signal communication path 4. For example, the RDMA communication unit 11 of the computing machine 1 transmits setting information (control message) for setting the main signal communication path 4 to the computing machine 2 of the access destination via the control communication path 3.”
Claims 1 and 3 recite “a main signal transmitting/receiving unit configured to establish the main signal communication path based on the setting information” and claim 4 recites “a step of establishing the main signal communication path based on the setting information”. A review of the specification shows that the following appears to be the corresponding structure:
specification [0029]: “Each of the computing machines 1 and 2 includes…a main signal transmitting/receiving unit 16.”;
specification [0036]: “The main signal transmitting/receiving unit 16 transmits and receives a main signal flowing through the main signal communication path 4. As the main signal transmitting/receiving unit 16, an NIC equipped with a laser or the like can be used. The main signal transmitting/receiving unit 16 establishes the main signal communication path 4 based on the setting information of the main signal communication path 4.”
Claims 1 and 3 recite “wherein the computing machine further includes a control unit configured to secure a memory area for RDMA communication based on the instruction and transmit setting information including the memory area to the computing machine of the access destination via the control communication path to set the RDMA communication path” and claim 4 recites “a step of securing a memory area for remote direct memory access (RDMA) communication and transmitting setting information including the memory area to the access destination computing machine via the control communication path to set the RDMA communication path, in parallel with the step of transmitting the setting information”. A review of the specification shows that the following appears to be the corresponding structure:
specification [0029]: “Each of the computing machines 1 and 2 includes…an RDMA control unit 14 (control unit)…”;
specification [0034]: “The RDMA control unit 14 controls data transfer by the RDMA communication. A direct memory access (DMA) controller can be used as the RDMA control unit 14. The DMA controller is a dedicated application (IC chip) that controls DMA transfer and is installed on, for example, a field programmable gate array network interface card (FPGA NIC). The RDMA control unit 14 of the present embodiment secures a memory area for the RDMA communication based on an instruction from the RDMA communication unit 11 and transmits setting information including the memory area to the computing machine 2 of the access destination via the control communication path 3 to set the RDMA communication path.”
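For illustration only, the control flow that specification [0034] attributes to the RDMA control unit 14 (secure a memory area for RDMA communication, then transmit setting information identifying that memory area to the access-destination machine over the control communication path) can be sketched as below. This is a minimal sketch, not from the record; every class, field, and message name is hypothetical, and an in-process byte buffer and socket stand in for registered DMA memory and the control communication path 3.

```python
# Hypothetical sketch of the specification [0034] flow: secure a memory area,
# then send setting information naming that area over a control channel.
# Names and message format are illustrative assumptions, not from the record.
import json
import socket


class RdmaControlUnitSketch:
    def __init__(self, control_sock: socket.socket):
        # The socket stands in for the control communication path.
        self.control_sock = control_sock

    def secure_memory_area(self, size: int) -> dict:
        # A real DMA controller would pin/register memory; a bytearray
        # stands in here, with id() as a stand-in address.
        buf = bytearray(size)
        return {"addr": id(buf), "length": size, "_buf": buf}

    def set_rdma_path(self, size: int) -> dict:
        area = self.secure_memory_area(size)
        setting_info = {
            "type": "rdma_setup",
            "memory_area": {"addr": area["addr"], "length": area["length"]},
        }
        # Transmit the setting information, including the memory area, to the
        # computing machine of the access destination via the control path.
        self.control_sock.sendall(json.dumps(setting_info).encode() + b"\n")
        return setting_info
```

The access-destination machine would read the message from its end of the control channel and use the advertised memory area to complete RDMA path setup.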
In light of the above support, the claims are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations recite sufficient structure, materials, or acts to entirely perform the recited functions.
Because these claim limitations are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof.
If applicant intends to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to remove the structure, materials, or acts that perform the claimed function; or (2) present a sufficient showing that the claim limitations do not recite sufficient structure, materials, or acts to perform the claimed function.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 and 3-5 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5 of U.S. Patent No. 12,407,755. Although the claims at issue are not identical, they are not patentably distinct from each other due to the overlapping scope, as illustrated in the comparison table below, which compares representative claim 1 of the instant application with claims 1-2 of U.S. Patent 12,407,755.
Claim 1 of the instant application corresponds to claims 1-2 of the patent. Claim 3 of the instant application corresponds to claim 3 of the patent (same statutory class) in combination with claim 2 of the patent. Claim 4 of the instant application corresponds to claim 5 of the patent (same statutory class) in combination with claim 2 of the patent. Claim 5 of the instant application corresponds to claim 4 of the patent.
Instant Application:
1. A communication system comprising a plurality of computing machines connected to an optical network,
wherein the optical network comprises a main signal communication path and a control communication path,
wherein each computing machine comprises
a communication unit configured to perform transmission of setting information for setting the main signal communication path to a computing machine of an access destination via the control communication path, and
wherein the communication unit is configured to give an instruction for setting of a remote direct memory access (RDMA) communication path in parallel with the transmission of the setting information, and
a main signal transmitting/receiving unit configured to establish the main signal communication path based on the setting information,
wherein the computing machine further includes a control unit configured to secure a memory area for RDMA communication based on the instruction and transmit setting information including the memory area to the computing machine of the access destination via the control communication path to set the RDMA communication path
U.S. Patent 12,407,755:
1. A communication system comprising: a plurality of computing machines connected to an optical network,
[claim 1] wherein the optical network comprises a main signal communication path that is a lower layer of a communication scheme and a control communication path for transferring a control signal, and
[claim 1] wherein each computing machine comprises
[claim 1] a communication unit that transmits a setting request including setting information of the RDMA communication path set by the control unit and setting information for setting the main signal communication path to a computing machine of an access destination via the control communication path, and
[claim 1] wherein communication unit further establishes the RDMA communication path on the established main signal communication path,
[claim 1] a main signal transmitting/receiving unit that establishes the main signal communication path based on the setting information of the main signal communication path,
[claim 1] a control unit that sets a remote direct memory access (RDMA) communication path, wherein to set the RDMA communication path, the control unit secures a memory area for RDMA communication,
[claim 2] The communication system according to claim 1, wherein the communication unit instructs the control unit to set the RDMA communication path, and wherein the control unit notifies the communication unit of setting information including the secured memory area.
[claim 1] wherein the RDMA communication path is an upper layer of the communication scheme.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over Nikami et al. (U.S. Patent Publication 2016/0062943) in view of Zhu et al. (U.S. Patent Publication 2019/0303345).
Regarding claim 1, Nikami disclosed a communication system comprising a plurality of computing machines (see Nikami Fig. 2: transmission node #10A in communication with reception node #10B) connected to an optical network (see Zhu combination below),
wherein the optical (see Zhu combination below) network comprises a main signal communication path (see Nikami Fig. 8: RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating; #S14->#A13: sending data via all lanes; examiner notes that “all lanes operating” is the communication path for sending data and is interpreted as being functionally equivalent to the claimed “main signal communication path”) and a control communication path (see Nikami Fig. 8 #S12->A11-#S22->#S23: instructions to operate all lanes are sent via a portion of lanes that are operating; examiner notes that “portion of lanes operating” is the communication path for sending instructions and is interpreted as being functionally equivalent to the claimed “control communication path”),
wherein each computing machine comprises (see Nikami Fig. 8, [0099]: CPU 141 of send Buffer #10A executes process #0 by using a SEND side memory area #14A; a CPU 141 of the Recv Buffer 10B executes process #1 by using a RECV side memory area 14B | Fig. 2: transmission node #10A in communication with reception node #10B)
a communication unit (see Nikami Fig. 8, [0099]: CPU 141 of send Buffer #10A executes process #0 by using a SEND side memory area #14A | Fig. 3 #141 CPU in transmission node #10A and reception node #10B) configured to perform transmission of setting information (see Nikami Fig. 8: #S12 an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side | [0100]: “…the process #0 of the SEND side (a transmission side) calls a transmitting function of MPI_Send (see, e.g., step S11). The process #0 transmits a “transfer destination buffer inquiry (Query)” message from the SEND side to the RECV side (reception side) before the data body is transferred (see, e.g., step S12, arrow A11). In this case, the process #0 appends the instruction to operate the entire lanes for the RECV side to the “transfer destination buffer inquiry” message (see, e.g., step S12). Further, the process #0 issues the instruction to operate the entire lanes for the SEND side (see, e.g., step S13). Accordingly, the operation state of the entire lanes 2 of the SEND side switches from a state where a portion of the lanes are being operated, through a state where the lanes are in a startup state during the lane startup time “To” (see, e.g., Fig. 1).”; [0101]: “In the meantime, process #1 at the RECV side (reception side) calls a function for receiving MPI_Recv (see, e.g., step S21)…Accordingly, the operation state of all the lanes 2 of the RECV side switches from a state where a portion of the lanes are being operated to a state where the entire lanes are being operated through the lane startup state.”) for setting the main signal communication path (see Nikami Fig. 8: #S12 an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side; #S14->#A13: sending data via all lanes; #S12-#A11-#S22->#S23: an instruction to operate all lanes of RECV side) to a computing machine of an access destination (see Nikami [0099], Fig. 
8 #10A SEND buffer; #14A SEND side memory area; #10B RECV buffer; /#14B RECV side memory area) via the control communication path (see Nikami Fig. 8 #S12->A11-#S22->#S23: instructions to operate all lanes are sent via a portion of lanes that are operating; examiner notes that “portion of lanes operating” is the communication path for sending instructions and is interpreted as being functionally equivalent to the claimed “control communication path”), and
a main signal transmitting/receiving unit (see Nikami Fig. 8, [0099]: CPU 141 of send Buffer #10A executes process #0 by using a SEND side memory area #14A | Fig. 3 #142 interconnect hardware and #3 high speed serial link in transmission node #10A and reception node #10B) configured to establish the main signal communication path (see Nikami Fig. 8: RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating; #S14->#A13: sending data via all lanes; examiner notes that “all lanes operating” is the communication path for sending data and is interpreted as being functionally equivalent to the claimed “main signal communication path”) based on the setting information (see Nikami Fig. 8: #S12 an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side; RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating; #S14->#A13: sending data via all lanes),
wherein the communication unit is configured to give an instruction for setting of a remote direct memory access (RDMA) communication path (see Nikami Fig. 8: RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating | Fig. 8 One-to-One Communication Protocol (Rendezvous Send/Recv) process including #S211 “MPI_Send” and #S21 “MPI_Recv”; Fig. 9 illustrates One-to-One Communication Protocol (RDMA Put) process including #S31 “MPI_Put”; Fig. 10 illustrates One-to-One Communication Protocol (RDMA Get) process including #S51 “MPI_Get”; examiner notes that Figures 8, 9, and 10 each describe sending and receiving memory areas, sending instructions via “portion of lanes operating”, starting all lanes via “LANE START UP”, and sending data via “all lanes operating”, however Figures 9 and 10 specify the RDMA Put and RDMA Get; examiner notes that “all lanes operating” is the communication path for sending data via RDMA in Figures 9 and 10 and is interpreted as being functionally equivalent to the claimed “RDMA communication path”; examiner also notes that the claims do not specify whether or not the “main signal communication path” and “RDMA communication path” are the same or different paths) in parallel (see Nikami [0100]: “…the process #0 of the SEND side (a transmission side) calls a transmitting function of MPI_Send (see, e.g., step S11). The process #0 transmits a “transfer destination buffer inquiry (Query)” message from the SEND side to the RECV side (reception side) before the data body is transferred (see, e.g., step S12, arrow A11). In this case, the process #0 appends the instruction to operate the entire lanes for the RECV side to the “transfer destination buffer inquiry” message (see, e.g., step S12). Further, the process #0 issues the instruction to operate the entire lanes for the SEND side (see, e.g., step S13). 
Accordingly, the operation state of the entire lanes 2 of the SEND side switches from a state where a portion of the lanes are being operated, through a state where the lanes are in a startup state during the lane startup time “To” (see, e.g., Fig. 1).”; [0101]: “In the meantime, process #1 at the RECV side (reception side) calls a function for receiving MPI_Recv (see, e.g., step S21)…Accordingly, the operation state of all the lanes 2 of the RECV side switches from a state where a portion of the lanes are being operated to a state where the entire lanes are being operated through the lane startup state.”) with the transmission of the setting information (see Nikami Fig. 8: #S12 an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side; RECV side and SEND side start up all lanes; #S14->#A13: sending data via all lanes; examiner notes that #S12 and #S13 are performed in parallel as well as performing “LANE START UP” at the SEND and RECV sides to establish that all lanes are operating in order for the SEND side to receive the response from the RECV side that all lanes are operating and for the SEND side to then be able to send data via all lanes operating), and
wherein the computing machine further includes a control unit (see Nikami Fig. 8, [0099]: CPU 141 of send Buffer #10A executes process #0 by using a SEND side memory area #14A | Fig. 3 #141 CPU in transmission node #10A and reception node #10B) configured to secure a memory area (see Nikami [0099], Fig. 8 #10A SEND buffer; #14A SEND side memory area; #10B RECV buffer; /#14B RECV side memory area | Fig. 8 #S12->#A11, [0100]: “…The process #0 transmits a “transfer destination buffer inquiry (QUERY)” message from the SEND side to the RECV side (reception side) before the data body is transferred…”; [0101]: “…Upon receiving the “transfer destination buffer inquiry (QUERY)” message to which the instruction to operate the entire lanes for the RECV side is appended, the process #1 replies a “notification of transfer destination buffer (RESPONSE)” message in response to the “transfer destination buffer inquiry (QUERY)” (see step S22, arrow A12). In the meantime, address information of the RECV buffer in the RECV side memory area 14B in which data transferred from the SEND side is to be recorded is included in the “notification of transfer destination buffer”…”) for RDMA communication (see Nikami Fig. 8 One-to-One Communication Protocol (Rendezvous Send/Recv) process including #S211 “MPI_Send” and #S21 “MPI_Recv”; Fig. 9 illustrates One-to-One Communication Protocol (RDMA Put) process including #S31 “MPI_Put”; Fig. 10 illustrates One-to-One Communication Protocol (RDMA Get) process including #S51 “MPI_Get”; examiner notes that Figures 8, 9, and 10 each describe sending and receiving memory areas, sending instructions via “portion of lanes operating”, starting all lanes via “LANE START UP”, and sending data via “all lanes operating”, however Figures 9 and 10 specify the RDMA Put and RDMA Get, i.e. “RDMA communication”) based on the instruction (see Nikami Fig. 
8: #S12 an instruction to inquire the destination buffer and an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side) and transmit setting information (see Nikami Fig. 8: #S12 an instruction to inquire the destination buffer and an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side) including the memory area (see Nikami Fig. 8: #S12 “Inquire Transfer Destination Buffer”, #A11 Query, #S22 “Notify Transfer Destination Buffer”; [0100]: “…The process #0 transmits a “transfer destination buffer inquiry (QUERY)” message from the SEND side to the RECV side (reception side) before the data body is transferred…”; [0101]: “…Upon receiving the “transfer destination buffer inquiry (QUERY)” message to which the instruction to operate the entire lanes for the RECV side is appended, the process #1 replies a “notification of transfer destination buffer (RESPONSE)” message in response to the “transfer destination buffer inquiry (QUERY)” (see step S22, arrow A12). 
In the meantime, address information of the RECV buffer in the RECV side memory area 14B in which data transferred from the SEND side is to be recorded is included in the “notification of transfer destination buffer”…” | [0121]: “A direction of data transfer by the RDMA Get is opposite to that by the RDMA Put…Accordingly, an address of the Caller Window is needed for the process #1 of the Target side and the information (which includes address) of the Target Window is needed for the process #0 of the Caller side in order to issue “Get Request”; [0122]: “In the RDMA Get, the process #0 of the Caller side…transmits an “inquiry of Target Window (Query)” message from the Caller side to the Target side”; [0124]: “…the process #1 of the Target side replies a “notification of Target Window (Response)” message…the address information of the Target Window in the Target side memory area 14B in which data transferred from the Caller side is to be recorded is included in the “notification of Target Window”…”; [0125]: “…the process #0 sends the “data transfer request” message (“Get Response”) to the Target side (step S54, arrow A33). In this case, the address information of the Caller Window on the Caller side memory area 14A in which the data to be transferred is recorded is appended to the data transfer request message.”) to the computing machine of the access destination (see Nikami [0099], Fig. 8 #10A SEND buffer; #14A SEND side memory area; #10B RECV buffer; /#14B RECV side memory area) via the control communication path (see Nikami Fig. 8 #S12->A11-#S22->#S23: instructions to operate all lanes are sent via a portion of lanes that are operating; examiner notes that “portion of lanes operating” is the communication path for sending instructions and is interpreted as being functionally equivalent to the claimed “control communication path”) to set the RDMA communication path (see Nikami Fig. 
8: RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating | Fig. 8 One-to-One Communication Protocol (Rendezvous Send/Recv) process including #S211 “MPI_Send” and #S21 “MPI_Recv”; Fig. 9 illustrates One-to-One Communication Protocol (RDMA Put) process including #S31 “MPI_Put”; Fig. 10 illustrates One-to-One Communication Protocol (RDMA Get) process including #S51 “MPI_Get”; examiner notes that Figures 8, 9, and 10 each describe sending and receiving memory areas, sending instructions via “portion of lanes operating”, starting all lanes via “LANE START UP”, and sending data via “all lanes operating”, however Figures 9 and 10 specify the RDMA Put and RDMA Get; examiner notes that “all lanes operating” is the communication path for sending data via RDMA in Figures 9 and 10 and is interpreted as being functionally equivalent to the claimed “RDMA communication path”).
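As an informal illustration of the Nikami Fig. 8 sequence relied upon above, the mapping can be simulated as follows: the buffer inquiry and lane-startup instructions travel over the few lanes already operating (the asserted "control communication path"), both sides perform "LANE START UP," and the data body travels only once all lanes operate (the asserted "main signal communication path"). This is an examiner-independent sketch under stated assumptions; class names, lane counts, and buffer sizes are hypothetical and not from Nikami.

```python
# Hypothetical simulation of the Nikami Fig. 8 rendezvous handshake.
# "portion of lanes operating" carries control messages; "all lanes
# operating" carries the data body after LANE START UP on both sides.
class NodeSketch:
    def __init__(self, name: str, total_lanes: int = 8, active_lanes: int = 1):
        self.name = name
        self.total_lanes = total_lanes
        self.active_lanes = active_lanes  # only a portion operate initially
        self.recv_buffer = bytearray(1024)

    def start_all_lanes(self) -> None:
        self.active_lanes = self.total_lanes  # "LANE START UP"


def rendezvous_send(send: NodeSketch, recv: NodeSketch, payload: bytes) -> int:
    # S12/A11: transfer-destination buffer inquiry plus the instruction to
    # operate all RECV-side lanes, sent over the portion of operating lanes.
    assert recv.active_lanes < recv.total_lanes
    recv.start_all_lanes()
    # S13: instruction to operate all SEND-side lanes, issued in parallel.
    send.start_all_lanes()
    # S22/A12: the response carries the transfer-destination buffer address.
    dest = recv.recv_buffer
    # S14/A13: the data body is sent only once all lanes operate on both sides.
    assert send.active_lanes == send.total_lanes
    assert recv.active_lanes == recv.total_lanes
    dest[: len(payload)] = payload
    return len(payload)
```

The sketch mirrors only the ordering the rejection relies on (control messages first over partial lanes, data after full lane startup), not any implementation detail of Nikami.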
Nikami did not explicitly disclose “an optical network” that connects the transmitting and receiving nodes and that includes a control path and a data path. However, in a related art, Zhu disclosed using a fiber optic network with RDMA-enabled NICs to communicate between hosts (see [0045]). The RDMA virtualization framework also has full access to the control path and data path of network communications (see Zhu [0008]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nikami and Zhu to further clarify the type of network in which RDMA can be used in combination with control and data paths. Incorporating Zhu’s teachings would provide isolation and portability on the control plane while also implementing quality of service and traffic metering when also using RDMA (see Zhu [0029]).
Regarding claim 2, Nikami-Zhu disclosed the communication system according to claim 1, wherein the control unit is configured to transmit the setting information including a communication start timing of RDMA communication (see Nikami Fig. 8, [0100]: “…In this case, the process #0 appends the instruction to operate the entire lanes for the Recv side to the “transfer destination buffer inquiry” message (see, e.g., step S12). Further, the process #0 issues the instruction to operate the entire lanes for the Send side (see, e.g., step S13). Accordingly, the operation state of the entire lanes 2 of the Send side switches from a state where a portion of the lanes are being operated to a state where the entire lanes are being operated, through a state where the lanes are in a startup state during the lane startup time “To” (see, e.g., FIG. 1).”; Fig. 1, [0030]: “In the embodiment, it is assumed that a time (a startup time) “To” taken for the startup of a lane is acquired in advance as a specification of the lane. As illustrated in FIG. 1, it is assumed that an instruction to start up a portion of lanes is issued concurrently with a transmission start of the lightweight message (see, e.g., “Request” of arrow A1). In this case, in the transmission side (a transmission node), a time taken from the transmission start of the lightweight message before a response (see, e.g., “Response” of arrow A2) to the lightweight message is received becomes the “RTT plus transfer time.” Here, the RTT is an abbreviation of Round Trip Time and corresponds to a round-trip delay time between a transmission side (a transmission node) and a reception side (a reception node). Further, the transfer time is obtained by dividing a transfer time for the lightweight message, that is, a data amount of the lightweight message, by a communication speed.”).
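The timing relation in the Nikami [0030] passage quoted above (issuing the lane-startup instruction concurrently with the lightweight message hides the startup time "To" inside the round trip) reduces to simple arithmetic. The sketch below is illustrative only, with hypothetical numbers; Nikami does not give these values.

```python
# Illustrative arithmetic for Nikami [0030]: the lightweight-message round
# trip takes RTT plus the message transfer time (data amount / link speed).
# Lane startup issued concurrently is hidden whenever To fits in that window.
# All numeric values below are hypothetical assumptions.
def startup_hidden(rtt_s: float, msg_bytes: int, speed_bps: float,
                   startup_to_s: float) -> bool:
    transfer_time = (msg_bytes * 8) / speed_bps  # message transfer time
    return startup_to_s <= rtt_s + transfer_time

# Hypothetical example: 10 us RTT, 64-byte query, 10 Gb/s link, 5 us startup.
hidden = startup_hidden(10e-6, 64, 10e9, 5e-6)
```

With these assumed numbers the startup time is fully overlapped by the round trip; with a 1 us RTT and the same 5 us startup it would not be.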
Regarding claim 3, the claim contains the limitations, substantially as claimed, as described in claim 1 above. Nikami disclosed, as recited in claim 3: A computing machine connected to a computing machine of an access destination (see Nikami Fig. 2: transmission node #10A in communication with reception node #10B) via an optical network (see Zhu combination below),
wherein the optical (see Zhu combination below) network comprises a main signal communication path (see Nikami Fig. 8: RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating; #S14->#A13: sending data via all lanes; examiner notes that “all lanes operating” is the communication path for sending data and is interpreted as being functionally equivalent to the claimed “main signal communication path”) and a control communication path (see Nikami Fig. 8 #S12->A11-#S22->#S23: instructions to operate all lanes are sent via a portion of lanes that are operating; examiner notes that “portion of lanes operating” is the communication path for sending instructions and is interpreted as being functionally equivalent to the claimed “control communication path”),
wherein the computing machine comprises (see Nikami Fig. 8, [0099]: CPU 141 of send Buffer #10A executes process #0 by using a SEND side memory area #14A; a CPU 141 of the Recv Buffer 10B executes process #1 by using a RECV side memory area 14B | Fig. 2: transmission node #10A in communication with reception node #10B)
a communication unit (see Nikami Fig. 8, [0099]: CPU 141 of send Buffer #10A executes process #0 by using a SEND side memory area #14A | Fig. 3 #141 CPU in transmission node #10A and reception node #10B) configured to perform transmission of setting information (see Nikami Fig. 8: #S12 an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side | [0100]: “…the process #0 of the SEND side (a transmission side) calls a transmitting function of MPI_Send (see, e.g., step S11). The process #0 transmits a “transfer destination buffer inquiry (Query)” message from the SEND side to the RECV side (reception side) before the data body is transferred (see, e.g., step S12, arrow A11). In this case, the process #0 appends the instruction to operate the entire lanes for the RECV side to the “transfer destination buffer inquiry” message (see, e.g., step S12). Further, the process #0 issues the instruction to operate the entire lanes for the SEND side (see, e.g., step S13). Accordingly, the operation state of the entire lanes 2 of the SEND side switches from a state where a portion of the lanes are being operated, through a state where the lanes are in a startup state during the lane startup time “To” (see, e.g., Fig. 1).”; [0101]: “In the meantime, process #1 at the RECV side (reception side) calls a function for receiving MPI_Recv (see, e.g., step S21)…Accordingly, the operation state of all the lanes 2 of the RECV side switches from a state where a portion of the lanes are being operated to a state where the entire lanes are being operated through the lane startup state.”) for setting the main signal communication path (see Nikami Fig. 8: #S12 an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side; #S14->#A13: sending data via all lanes; #S12-#A11-#S22->#S23: an instruction to operate all lanes of RECV side) to the computing machine of the access destination (see Nikami [0099], Fig. 
8 #10A SEND buffer; #14A SEND side memory area; #10B RECV buffer; #14B RECV side memory area) via the control communication path (see Nikami Fig. 8 #S12->#A11->#S22->#S23: instructions to operate all lanes are sent via a portion of lanes that are operating; examiner notes that “portion of lanes operating” is the communication path for sending instructions and is interpreted as being functionally equivalent to the claimed “control communication path”); and
a main signal transmitting/receiving unit (see Nikami Fig. 8, [0099]: CPU 141 of send Buffer #10A executes process #0 by using a SEND side memory area #14A | Fig. 3 #142 interconnect hardware and #3 high speed serial link in transmission node #10A and reception node #10B) configured to establish the main signal communication path (see Nikami Fig. 8: RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating; #S14->#A13: sending data via all lanes; examiner notes that “all lanes operating” is the communication path for sending data and is interpreted as being functionally equivalent to the claimed “main signal communication path”) based on the setting information (see Nikami Fig. 8: #S12 an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side; RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating; #S14->#A13: sending data via all lanes)
wherein the communication unit is configured to give an instruction for setting of a remote direct memory access (RDMA) communication path (see Nikami Fig. 8: RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating | Fig. 8 One-to-One Communication Protocol (Rendezvous Send/Recv) process including #S211 “MPI_Send” and #S21 “MPI_Recv”; Fig. 9 illustrates One-to-One Communication Protocol (RDMA Put) process including #S31 “MPI_Put”; Fig. 10 illustrates One-to-One Communication Protocol (RDMA Get) process including #S51 “MPI_Get”; examiner notes that Figures 8, 9, and 10 each describe sending and receiving memory areas, sending instructions via “portion of lanes operating”, starting all lanes via “LANE START UP”, and sending data via “all lanes operating”, however Figures 9 and 10 specify the RDMA Put and RDMA Get; examiner notes that “all lanes operating” is the communication path for sending data via RDMA in Figures 9 and 10 and is interpreted as being functionally equivalent to the claimed “RDMA communication path”; examiner also notes that the claims do not specify whether or not the “main signal communication path” and “RDMA communication path” are the same or different paths) in parallel (see Nikami [0100]: “…the process #0 of the SEND side (a transmission side) calls a transmitting function of MPI_Send (see, e.g., step S11). The process #0 transmits a “transfer destination buffer inquiry (Query)” message from the SEND side to the RECV side (reception side) before the data body is transferred (see, e.g., step S12, arrow A11). In this case, the process #0 appends the instruction to operate the entire lanes for the RECV side to the “transfer destination buffer inquiry” message (see, e.g., step S12). Further, the process #0 issues the instruction to operate the entire lanes for the SEND side (see, e.g., step S13). 
Accordingly, the operation state of the entire lanes 2 of the SEND side switches from a state where a portion of the lanes are being operated, through a state where the lanes are in a startup state during the lane startup time “To” (see, e.g., Fig. 1).”; [0101]: “In the meantime, process #1 at the RECV side (reception side) calls a function for receiving MPI_Recv (see, e.g., step S21)…Accordingly, the operation state of all the lanes 2 of the RECV side switches from a state where a portion of the lanes are being operated to a state where the entire lanes are being operated through the lane startup state.”) with the transmission of the setting information (see Nikami Fig. 8: #S12 an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side; RECV side and SEND side start up all lanes; #S14->#A13: sending data via all lanes; examiner notes that #S12 and #S13 are performed in parallel as well as performing “LANE START UP” at the SEND and RECV sides to establish that all lanes are operating in order for the SEND side to receive the response from the RECV side that all lanes are operating and for the SEND side to then be able to send data via all lanes operating), and
wherein the computing machine further comprises a control unit (see Nikami Fig. 8, [0099]: CPU 141 of send Buffer #10A executes process #0 by using a SEND side memory area #14A | Fig. 3 #141 CPU in transmission node #10A and reception node #10B) configured to secure a memory area (see Nikami [0099], Fig. 8 #10A SEND buffer; #14A SEND side memory area; #10B RECV buffer; /#14B RECV side memory area | Fig. 8 #S12->#A11, [0100]: “…The process #0 transmits a “transfer destination buffer inquiry (QUERY)” message from the SEND side to the RECV side (reception side) before the data body is transferred…”; [0101]: “…Upon receiving the “transfer destination buffer inquiry (QUERY)” message to which the instruction to operate the entire lanes for the RECV side is appended, the process #1 replies a “notification of transfer destination buffer (RESPONSE)” message in response to the “transfer destination buffer inquiry (QUERY)” (see step S22, arrow A12). In the meantime, address information of the RECV buffer in the RECV side memory area 14B in which data transferred from the SEND side is to be recorded is included in the “notification of transfer destination buffer”…”) for RDMA communication (see Nikami Fig. 8 One-to-One Communication Protocol (Rendezvous Send/Recv) process including #S211 “MPI_Send” and #S21 “MPI_Recv”; Fig. 9 illustrates One-to-One Communication Protocol (RDMA Put) process including #S31 “MPI_Put”; Fig. 10 illustrates One-to-One Communication Protocol (RDMA Get) process including #S51 “MPI_Get”; examiner notes that Figures 8, 9, and 10 each describe sending and receiving memory areas, sending instructions via “portion of lanes operating”, starting all lanes via “LANE START UP”, and sending data via “all lanes operating”, however Figures 9 and 10 specify the RDMA Put and RDMA Get, i.e. “RDMA communication”) based on the instruction (see Nikami Fig. 
8: #S12 an instruction to inquire the destination buffer and an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side) and transmit setting information (see Nikami Fig. 8: #S12 an instruction to inquire the destination buffer and an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side) including the memory area (see Nikami Fig. 8: #S12 “Inquire Transfer Destination Buffer”, #A11 Query, #S22 “Notify Transfer Destination Buffer”; [0100]: “…The process #0 transmits a “transfer destination buffer inquiry (QUERY)” message from the SEND side to the RECV side (reception side) before the data body is transferred…”; [0101]: “…Upon receiving the “transfer destination buffer inquiry (QUERY)” message to which the instruction to operate the entire lanes for the RECV side is appended, the process #1 replies a “notification of transfer destination buffer (RESPONSE)” message in response to the “transfer destination buffer inquiry (QUERY)” (see step S22, arrow A12). 
In the meantime, address information of the RECV buffer in the RECV side memory area 14B in which data transferred from the SEND side is to be recorded is included in the “notification of transfer destination buffer”…” | [0121]: “A direction of data transfer by the RDMA Get is opposite to that by the RDMA Put…Accordingly, an address of the Caller Window is needed for the process #1 of the Target side and the information (which includes address) of the Target Window is needed for the process #0 of the Caller side in order to issue “Get Request”; [0122]: “In the RDMA Get, the process #0 of the Caller side…transmits an “inquiry of Target Window (Query)” message from the Caller side to the Target side”; [0124]: “…the process #1 of the Target side replies a “notification of Target Window (Response)” message…the address information of the Target Window in the Target side memory area 14B in which data transferred from the Caller side is to be recorded is included in the “notification of Target Window”…”; [0125]: “…the process #0 sends the “data transfer request” message (“Get Response”) to the Target side (step S54, arrow A33). In this case, the address information of the Caller Window on the Caller side memory area 14A in which the data to be transferred is recorded is appended to the data transfer request message.”) to the access destination computing machine (see Nikami [0099], Fig. 8 #10A SEND buffer; #14A SEND side memory area; #10B RECV buffer; #14B RECV side memory area) via the control communication path (see Nikami Fig. 8 #S12->#A11->#S22->#S23: instructions to operate all lanes are sent via a portion of lanes that are operating; examiner notes that “portion of lanes operating” is the communication path for sending instructions and is interpreted as being functionally equivalent to the claimed “control communication path”) to set the RDMA communication path (see Nikami Fig. 
8: RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating | Fig. 8 One-to-One Communication Protocol (Rendezvous Send/Recv) process including #S211 “MPI_Send” and #S21 “MPI_Recv”; Fig. 9 illustrates One-to-One Communication Protocol (RDMA Put) process including #S31 “MPI_Put”; Fig. 10 illustrates One-to-One Communication Protocol (RDMA Get) process including #S51 “MPI_Get”; examiner notes that Figures 8, 9, and 10 each describe sending and receiving memory areas, sending instructions via “portion of lanes operating”, starting all lanes via “LANE START UP”, and sending data via “all lanes operating”, however Figures 9 and 10 specify the RDMA Put and RDMA Get; examiner notes that “all lanes operating” is the communication path for sending data via RDMA in Figures 9 and 10 and is interpreted as being functionally equivalent to the claimed “RDMA communication path”).
Nikami did not explicitly disclose “an optical network” that connects the transmitting and receiving nodes and that includes a control path and a data path. However, in a related art, Zhu disclosed using a fiber optic network in which RDMA-enabled NICs communicate between hosts (see Zhu [0045]). Zhu’s RDMA virtualization framework also has full access to the control path and data path of network communications (see Zhu [0008]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nikami and Zhu to further clarify the type of network in which RDMA can be used in combination with control and data paths. Incorporating Zhu’s teachings would provide isolation and portability on the control plane while also implementing quality of service and traffic metering when using RDMA (see Zhu [0029]).
Regarding claim 4, the claim recites limitations substantially similar to those addressed in claim 1 above. Nikami disclosed, as recited in claim 4: A communication method (see Nikami Fig. 8 Rendezvous Send/Recv method; Fig. 9 RDMA Put method; Fig. 10 RDMA Get method) performed by a communication system comprising a plurality of computing machines (see Nikami Fig. 2: transmission node #10A in communication with reception node #10B) connected to an optical network (see Zhu combination below),
wherein the optical (see Zhu combination below) network comprises a main signal communication path (see Nikami Fig. 8: RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating; #S14->#A13: sending data via all lanes; examiner notes that “all lanes operating” is the communication path for sending data and is interpreted as being functionally equivalent to the claimed “main signal communication path”) and a control communication path (see Nikami Fig. 8 #S12->#A11->#S22->#S23: instructions to operate all lanes are sent via a portion of lanes that are operating; examiner notes that “portion of lanes operating” is the communication path for sending instructions and is interpreted as being functionally equivalent to the claimed “control communication path”), and
wherein each computing machine is configured to perform (see Nikami Fig. 8, [0099]: CPU 141 of send Buffer #10A executes process #0 by using a SEND side memory area #14A; a CPU 141 of the Recv Buffer 10B executes process #1 by using a RECV side memory area 14B | Fig. 2: transmission node #10A in communication with reception node #10B)
a step of transmitting setting information (see Nikami Fig. 8, [0099]: CPU 141 of send Buffer #10A executes process #0 by using a SEND side memory area #14A | Fig. 3 #141 CPU in transmission node #10A and reception node #10B | Fig. 8: #S12 an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side | [0100]: “…the process #0 of the SEND side (a transmission side) calls a transmitting function of MPI_Send (see, e.g., step S11). The process #0 transmits a “transfer destination buffer inquiry (Query)” message from the SEND side to the RECV side (reception side) before the data body is transferred (see, e.g., step S12, arrow A11). In this case, the process #0 appends the instruction to operate the entire lanes for the RECV side to the “transfer destination buffer inquiry” message (see, e.g., step S12). Further, the process #0 issues the instruction to operate the entire lanes for the SEND side (see, e.g., step S13). Accordingly, the operation state of the entire lanes 2 of the SEND side switches from a state where a portion of the lanes are being operated, through a state where the lanes are in a startup state during the lane startup time “To” (see, e.g., Fig. 1).”; [0101]: “In the meantime, process #1 at the RECV side (reception side) calls a function for receiving MPI_Recv (see, e.g., step S21)…Accordingly, the operation state of all the lanes 2 of the RECV side switches from a state where a portion of the lanes are being operated to a state where the entire lanes are being operated through the lane startup state.”) for setting the main signal communication path (see Nikami Fig. 8: #S12 an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side; #S14->#A13: sending data via all lanes; #S12-#A11-#S22->#S23: an instruction to operate all lanes of RECV side) to a computing machine of an access destination (see Nikami [0099], Fig. 
8 #10A SEND buffer; #14A SEND side memory area; #10B RECV buffer; #14B RECV side memory area) via the control communication path (see Nikami Fig. 8 #S12->#A11->#S22->#S23: instructions to operate all lanes are sent via a portion of lanes that are operating; examiner notes that “portion of lanes operating” is the communication path for sending instructions and is interpreted as being functionally equivalent to the claimed “control communication path”),
a step of establishing the main signal communication path (see Nikami Fig. 8, [0099]: CPU 141 of send Buffer #10A executes process #0 by using a SEND side memory area #14A | Fig. 3 #142 interconnect hardware and #3 high speed serial link in transmission node #10A and reception node #10B | Fig. 8: RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating; #S14->#A13: sending data via all lanes; examiner notes that “all lanes operating” is the communication path for sending data and is interpreted as being functionally equivalent to the claimed “main signal communication path”) based on the setting information (see Nikami Fig. 8: #S12 an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side; RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating; #S14->#A13: sending data via all lanes), and
a step of securing a memory area (see Nikami Fig. 8, [0099]: CPU 141 of send Buffer #10A executes process #0 by using a SEND side memory area #14A | Fig. 3 #141 CPU in transmission node #10A and reception node #10B | [0099], Fig. 8 #10A SEND buffer; #14A SEND side memory area; #10B RECV buffer; /#14B RECV side memory area | Fig. 8 #S12->#A11, [0100]: “…The process #0 transmits a “transfer destination buffer inquiry (QUERY)” message from the SEND side to the RECV side (reception side) before the data body is transferred…”; [0101]: “…Upon receiving the “transfer destination buffer inquiry (QUERY)” message to which the instruction to operate the entire lanes for the RECV side is appended, the process #1 replies a “notification of transfer destination buffer (RESPONSE)” message in response to the “transfer destination buffer inquiry (QUERY)” (see step S22, arrow A12). In the meantime, address information of the RECV buffer in the RECV side memory area 14B in which data transferred from the SEND side is to be recorded is included in the “notification of transfer destination buffer”…”) for remote direct memory access (RDMA) communication (see Nikami Fig. 8 One-to-One Communication Protocol (Rendezvous Send/Recv) process including #S211 “MPI_Send” and #S21 “MPI_Recv”; Fig. 9 illustrates One-to-One Communication Protocol (RDMA Put) process including #S31 “MPI_Put”; Fig. 10 illustrates One-to-One Communication Protocol (RDMA Get) process including #S51 “MPI_Get”; examiner notes that Figures 8, 9, and 10 each describe sending and receiving memory areas, sending instructions via “portion of lanes operating”, starting all lanes via “LANE START UP”, and sending data via “all lanes operating”, however Figures 9 and 10 specify the RDMA Put and RDMA Get, i.e. “RDMA communication”) and transmitting setting information (see Nikami Fig. 
8: #S12 an instruction to inquire the destination buffer and an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side) including the memory area (see Nikami Fig. 8: #S12 “Inquire Transfer Destination Buffer”, #A11 Query, #S22 “Notify Transfer Destination Buffer”; [0100]: “…The process #0 transmits a “transfer destination buffer inquiry (QUERY)” message from the SEND side to the RECV side (reception side) before the data body is transferred…”; [0101]: “…Upon receiving the “transfer destination buffer inquiry (QUERY)” message to which the instruction to operate the entire lanes for the RECV side is appended, the process #1 replies a “notification of transfer destination buffer (RESPONSE)” message in response to the “transfer destination buffer inquiry (QUERY)” (see step S22, arrow A12). In the meantime, address information of the RECV buffer in the RECV side memory area 14B in which data transferred from the SEND side is to be recorded is included in the “notification of transfer destination buffer”…” | [0121]: “A direction of data transfer by the RDMA Get is opposite to that by the RDMA Put…Accordingly, an address of the Caller Window is needed for the process #1 of the Target side and the information (which includes address) of the Target Window is needed for the process #0 of the Caller side in order to issue “Get Request”; [0122]: “In the RDMA Get, the process #0 of the Caller side…transmits an “inquiry of Target Window (Query)” message from the Caller side to the Target side”; [0124]: “…the process #1 of the Target side replies a “notification of Target Window (Response)” message…the address information of the Target Window in the Target side memory area 14B in which data transferred from the Caller side is to be recorded is included in the “notification of Target Window”…”; [0125]: “…the process #0 sends the “data transfer request” message (“Get Response”) to the Target side (step S54, arrow A33). 
In this case, the address information of the Caller Window on the Caller side memory area 14A in which the data to be transferred is recorded is appended to the data transfer request message.”) to the access destination computing machine (see Nikami [0099], Fig. 8 #10A SEND buffer; #14A SEND side memory area; #10B RECV buffer; #14B RECV side memory area) via the control communication path (see Nikami Fig. 8 #S12->#A11->#S22->#S23: instructions to operate all lanes are sent via a portion of lanes that are operating; examiner notes that “portion of lanes operating” is the communication path for sending instructions and is interpreted as being functionally equivalent to the claimed “control communication path”) to set the RDMA communication path (see Nikami Fig. 8: RECV side and SEND side each perform “LANE START UP” to establish that all lanes are operating | Fig. 8 One-to-One Communication Protocol (Rendezvous Send/Recv) process including #S211 “MPI_Send” and #S21 “MPI_Recv”; Fig. 9 illustrates One-to-One Communication Protocol (RDMA Put) process including #S31 “MPI_Put”; Fig. 
10 illustrates One-to-One Communication Protocol (RDMA Get) process including #S51 “MPI_Get”; examiner notes that Figures 8, 9, and 10 each describe sending and receiving memory areas, sending instructions via “portion of lanes operating”, starting all lanes via “LANE START UP”, and sending data via “all lanes operating”, however Figures 9 and 10 specify the RDMA Put and RDMA Get; examiner notes that “all lanes operating” is the communication path for sending data via RDMA in Figures 9 and 10 and is interpreted as being functionally equivalent to the claimed “RDMA communication path”; examiner also notes that the claims do not specify whether or not the “main signal communication path” and “RDMA communication path” are the same or different paths), in parallel (see Nikami [0100]: “…the process #0 of the SEND side (a transmission side) calls a transmitting function of MPI_Send (see, e.g., step S11). The process #0 transmits a “transfer destination buffer inquiry (Query)” message from the SEND side to the RECV side (reception side) before the data body is transferred (see, e.g., step S12, arrow A11). In this case, the process #0 appends the instruction to operate the entire lanes for the RECV side to the “transfer destination buffer inquiry” message (see, e.g., step S12). Further, the process #0 issues the instruction to operate the entire lanes for the SEND side (see, e.g., step S13). Accordingly, the operation state of the entire lanes 2 of the SEND side switches from a state where a portion of the lanes are being operated, through a state where the lanes are in a startup state during the lane startup time “To” (see, e.g., Fig. 
1).”; [0101]: “In the meantime, process #1 at the RECV side (reception side) calls a function for receiving MPI_Recv (see, e.g., step S21)…Accordingly, the operation state of all the lanes 2 of the RECV side switches from a state where a portion of the lanes are being operated to a state where the entire lanes are being operated through the lane startup state.”) with the step of transmitting the setting information (see Nikami Fig. 8: #S12 an instruction to operate all lanes of RECV side; #S13 an instruction to operate all lanes of SEND side; RECV side and SEND side start up all lanes; #S14->#A13: sending data via all lanes; examiner notes that #S12 and #S13 are performed in parallel as well as performing “LANE START UP” at the SEND and RECV sides to establish that all lanes are operating in order for the SEND side to receive the response from the RECV side that all lanes are operating and for the SEND side to then be able to send data via all lanes operating).
Nikami did not explicitly disclose “an optical network” that connects the transmitting and receiving nodes and that includes a control path and a data path. However, in a related art, Zhu disclosed using a fiber optic network in which RDMA-enabled NICs communicate between hosts (see Zhu [0045]). Zhu’s RDMA virtualization framework also has full access to the control path and data path of network communications (see Zhu [0008]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nikami and Zhu to further clarify the type of network in which RDMA can be used in combination with control and data paths. Incorporating Zhu’s teachings would provide isolation and portability on the control plane while also implementing quality of service and traffic metering when using RDMA (see Zhu [0029]).
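For context on the "in parallel" limitation mapped above, Nikami's rationale (Fig. 1, [0100]-[0101]) is that issuing the lane start-up instructions together with the setting-information transfer lets the lane startup time "To" overlap the control-path round trip rather than follow it. The following Python sketch is a hypothetical timing model for clarity only; the numeric values and function names are assumptions, not data from Nikami or the instant application.

```python
# Illustrative timing model of why parallel lane startup matters.
# The durations below are hypothetical and chosen only for illustration.

CONTROL_RTT = 4.0   # query/response exchange over the portion of lanes operating
LANE_STARTUP = 3.0  # "To": time for all lanes to reach the operating state

def sequential_total() -> float:
    # Negotiate buffers first, then start the lanes, then begin the data transfer:
    # the two delays are paid back to back.
    return CONTROL_RTT + LANE_STARTUP

def parallel_total() -> float:
    # Lane startup runs concurrently with the control exchange (steps S12/S13),
    # so the data transfer can begin once the slower of the two completes.
    return max(CONTROL_RTT, LANE_STARTUP)

print(sequential_total(), parallel_total())
```

Under these assumed numbers the parallel ordering hides the entire lane startup time behind the control exchange, which is the benefit the cited passages attribute to appending the all-lanes instruction to the buffer inquiry.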
Regarding claim 5, Nikami-Zhu disclosed a non-transitory computer-readable storage medium storing a program (see Nikami [0049]: program stored on computer readable medium) for causing a computer to function (see Nikami Fig. 3, [0047]: control unit may be implemented by executing a predetermined program in CPU) as the computing machine according to claim 3 (Examiner notes that the citations and explanations provided in claim 3 above are incorporated in their entirety into the rejection of this claim).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Angela Widhalm de Rodriguez whose telephone number is (571)272-1035. The examiner can normally be reached M-F: 6am-2:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nicholas Taylor can be reached on (571)272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANGELA WIDHALM DE RODRIGUEZ/Examiner, Art Unit 2443