DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, filed 11/20/2025, with respect to the rejection(s) of the claim(s) under the combination of prior art references have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Men (CN 115858152 B).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 4-8, 11-15, and 18-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US PG Pub. No. 20180285151) in view of Wang2 (CN 117294629 A), and further in view of Men (CN 115858152 B).
Regarding claim 1, Wang discloses a method for network traffic management in a system comprising a network interface card (NIC) (fig. 2, NIC 216) operatively coupled to a processor with multiple cores (fig. 2, cores 1-n of a multi-core processor), the NIC configured to execute receiver side scaling (RSS) ([0004-0006] discloses [0004] NICs can steer data flows, e.g., data packets, to any of a number of receive queues by way of Receive Side Scaling (RSS). Servers generally take advantage of such capabilities to distribute connections, e.g., transmission control protocol (TCP) connections, to different CPU cores for processing. [0005] The use of RSS typically includes application of a filter that applies a hash function over the packet headers of received data packets. An indirection table can then be used to map each data packet to a certain receive queue, e.g., based on the corresponding hash value. The CPU cores can then be assigned to work on one or more specific queues in order to enable distributed processing. [0006] RSS usually involves the mapping of many data flows into a limited number of receive queues targeting a limited number of CPU cores), the method comprising:
generating, by the NIC, a hash table for tracking communications flows that have been assigned to selected cores of the multiple cores ([0032-0034] discloses with RSS, data packet headers having a certain hash value can be mapped to a certain CPU core based on the corresponding indirection table (hash table). If a certain CPU core is handling a few large TCP flows, or temporarily gets too many flows mapped to it (tracking communication flow using the table), that CPU core becomes overloaded. In such situations, new data flows may be re-assigned to CPU cores that have a lighter load);
in response to receiving, at the NIC, a packet associated with a new communication flow, accessing, by the NIC, a flag indicating that a first core of the multiple cores exceeds a threshold for CPU utilization ([0032-0034] discloses with RSS, data packet headers having a certain hash value can be mapped to a certain CPU core based on the corresponding indirection table (hash table). If a certain CPU core is handling a few large TCP flows, or temporarily gets too many flows mapped to it, that CPU core becomes overloaded. In such situations, new data flows may be re-assigned to CPU cores that have a lighter load. [0033] In situations involving KVS-type workloads…requests having the same key may be sent to the same CPU core for processing, thus significantly reducing cross-core communication overhead and improving performance, often significantly. [0034] In order to perform dynamic load balancing, overloading of a CPU core must be detected. This may be accomplished by enabling the CPU cores to communicate with the NIC, e.g., using out-of-band messaging, about their utilization…If the NIC determines that a certain receive queue length exceeds a particular threshold (flag), it may determine that overloading is occurring and subsequently steer data traffic to the CPU core elsewhere);
in response to determining that the flag indicates that the first core of the multiple cores exceeds the threshold for CPU utilization, excluding queues associated with the first core of the multiple cores from an RSS function for load balancing the multiple cores and using a subset of queues for the multiple cores that exclude the queues associated with first core for load balancing the multiple cores ([0021-0022] Each of the receive queues of the NIC 216 may be mapped to one or more CPU cores. In the example, data packets sent to the first receive queue 217 and the nth receive queue 218 are mapped to a first CPU core 230. [0022] Responsive to a determination that the first CPU core 230 is overloaded, e.g., the lengths of either or both of the first and nth receive queues 217 and 218 exceed a certain threshold, the data packets from either or both of the first and nth receive queues 217 and 218 may be redirected, e.g., re-mapped to, another CPU core such as the nth CPU core 231. The CPU core to which the data packets are redirected may be selected based on a determination that the CPU core is less busy than the first CPU core 230; [0032-0034] discloses with RSS, data packet headers having a certain hash value can be mapped to a certain CPU core based on the corresponding indirection table (hash table). If a certain CPU core is handling a few large TCP flows, or temporarily gets too many flows mapped to it, that CPU core becomes overloaded. In such situations, new data flows may be re-assigned to CPU cores that have a lighter load. [0033] In situations involving KVS-type workloads…requests having the same key may be sent to the same CPU core for processing, thus significantly reducing cross-core communication overhead and improving performance, often significantly. [0034] In order to perform dynamic load balancing, overloading of a CPU core must be detected. This may be accomplished by enabling the CPU cores to communicate with the NIC, e.g., using out-of-band messaging, about their utilization…If the NIC determines that a certain receive queue length exceeds a particular threshold, it may determine that overloading is occurring and subsequently steer data traffic to the CPU core elsewhere);
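The RSS mechanism Wang describes in [0004]-[0006] and [0032]-[0034] can be illustrated with a short sketch. All names, the table size, the queue depths, and the queue-length threshold below are illustrative assumptions, not values taken from the reference:

```python
# Illustrative constants and state; not values from Wang.
QUEUE_LEN_THRESHOLD = 64            # assumed per-queue overload threshold
indirection_table = [0, 1, 2, 3]    # hash bucket -> CPU core ("hash table")
queue_lengths = {0: 10, 1: 80, 2: 5, 3: 12}  # current receive-queue depths

def rss_select_core(flow_hash: int) -> int:
    """Map a packet's hash value to a core via the indirection table ([0005])."""
    return indirection_table[flow_hash % len(indirection_table)]

def overloaded(core: int) -> bool:
    """The 'flag': a core whose receive queue exceeds the threshold ([0034])."""
    return queue_lengths[core] > QUEUE_LEN_THRESHOLD

core = rss_select_core(0xBEEF)      # 0xBEEF % 4 == 3 -> core 3
if overloaded(core):
    # re-assign the new flow to the lightest-loaded core ([0032])
    core = min(queue_lengths, key=queue_lengths.get)
```

In this toy state, core 3's queue depth (12) is under the assumed threshold, so the flow stays on the core selected by the indirection table.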
But, Wang does not explicitly disclose:
executing the RSS function, using the subset, for load balancing the multiple cores to select a second core for processing the packet associated with the new communication flow;
assigning the new communication flow to a queue associated with the second core for processing the new communication flow;
updating the hash table to include the new communication flow and indicating that the new communication flow has been assigned to the queue associated with the second core; and
sending the packet associated with the new communication flow to the queue associated with second core.
However, in the same field of endeavor, Wang2 discloses, consistent with Wang, that in response to determining that the flag indicates that the first core of the multiple cores exceeds the threshold for CPU utilization, excluding queues associated with the first core of the multiple cores from an RSS function for load balancing the multiple cores and using a subset of queues for the multiple cores that exclude the queues associated with first core for load balancing the multiple cores (page 3, lines 24-27 and 45-50 when processing the new connection, the CPU load balancing degree needs to be calculated; if the load balance degree of the CPU exceeds the threshold value, the CPU core with the heaviest load (corresponds to the flag that is determined and indicating that the first core of the multiple cores exceeds the threshold for CPU utilization) is removed (the queue associated with the removed core corresponds to the recited queue), and the Hash calculation is performed again. After finishing the calculation, distributing the data packet to the corresponding recombination core according to the hash value (corresponds to excluding queues associated with the first core of the multiple cores from an RSS function for load balancing), and creating the record of the data stream in the stream table of the recombination core…. Among them, pi(t) represents the utilization efficiency of the i-th CPU core at time t, and n is the number of cores of the CPU multi-core processor; when the value of the load balancing degree is higher than 0.2, adjusting the processor load can achieve better results. The good effect is that when the load balancing value exceeds 0.2, the CPU core with the largest load will be eliminated (excluded), the hash calculation (RSS function) will be re-calculated and allocated to the corresponding CPU core, thereby achieving data processing load balancing among CPU multi-core processors);
Wang2 further discloses executing the RSS function, using the subset, for load balancing the multiple cores to select a second core for processing the packet associated with the new communication flow (page 3, lines 24-27 and 45-50 when processing the new connection, the CPU load balancing degree needs to be calculated; if the load balance degree of the CPU exceeds the threshold value, the CPU core with the heaviest load is removed, and the Hash calculation is performed again. after finishing the calculation, distributing the data packet to the corresponding recombination core according to the hash value, and creating the record of the data stream in the stream table of the recombination core…. Among them, pi(t) represents the utilization efficiency of the i-th CPU core at time t, and n is the number of cores of the CPU multi-core processor; when the value of the load balancing degree is higher than 0.2, adjusting the processor load can achieve better results. The good effect is that when the load balancing value exceeds 0.2, the CPU core with the largest load will be eliminated, the hash calculation will be re-calculated (executing the RSS function, using the subset excluding the removed core and its queue) and allocated to the corresponding CPU core, thereby achieving data processing load balancing among CPU multi-core processors);
assigning the new communication flow to a queue associated with the second core for processing the new communication flow (page 3, lines 24-27 and 45-50 when processing the new connection, the CPU load balancing degree needs to be calculated; if the load balance degree of the CPU exceeds the threshold value, the CPU core with the heaviest load is removed, and the Hash calculation is performed again. after finishing the calculation, distributing the data packet to the corresponding recombination core according to the hash value, and creating the record of the data stream in the stream table of the recombination core…. Among them, pi(t) represents the utilization efficiency of the i-th CPU core at time t, and n is the number of cores of the CPU multi-core processor; when the value of the load balancing degree is higher than 0.2, adjusting the processor load can achieve better results. The good effect is that when the load balancing value exceeds 0.2, the CPU core with the largest load will be eliminated, the hash calculation will be re-calculated and allocated to the corresponding CPU core (having corresponding queue), thereby achieving data processing load balancing among CPU multi-core processors);
updating, by the NIC, the hash table to include the new communication flow and indicating that the new communication flow has been assigned to the queue associated with the second core (page 3, lines 24-27 and 45-50 when processing the new connection, the CPU load balancing degree needs to be calculated; if the load balance degree of the CPU exceeds the threshold value, the CPU core with the heaviest load is removed, and the Hash calculation is performed again (updating the hash mapping). After finishing the calculation, distributing the data packet to the corresponding recombination core (having a corresponding queue) according to the hash value, and creating the record of the data stream in the stream table (updating the hash table) of the recombination core…. when the value of the load balancing degree is higher than 0.2, adjusting the processor load can achieve better results. The good effect is that when the load balancing value exceeds 0.2, the CPU core with the largest load will be eliminated, the hash calculation will be re-calculated and allocated to the corresponding CPU core, thereby achieving data processing load balancing among CPU multi-core processors);
sending the packet associated with the new communication flow to the queue associated with second core (page 3, lines 24-27 and 45-50 when processing the new connection, the CPU load balancing degree needs to be calculated; if the load balance degree of the CPU exceeds the threshold value, the CPU core with the heaviest load is removed, and the Hash calculation is performed again. after finishing the calculation, distributing the data packet to the corresponding recombination core according to the hash value, and creating the record of the data stream in the stream table of the recombination core).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wang with those of Wang2. The modification would allow dynamic load balancing that considers only cores that are not overloaded, and would allow dynamically updating load-balancing information so that load balancing adapts to the current load states of the cores for efficient load distribution.
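Wang2's mechanism, as cited above, can be sketched as follows. The reference gives the 0.2 threshold and the per-core utilizations pi(t), but the exact load-balancing-degree formula is not reproduced in the citation, so the population standard deviation of the utilizations is used here purely as a stand-in assumption:

```python
import statistics

BALANCE_THRESHOLD = 0.2  # threshold stated by Wang2

def select_core(flow_hash: int, utilization: dict[int, float]) -> int:
    """Re-run the hash over a core subset when the balance degree exceeds 0.2."""
    cores = sorted(utilization)
    # Stand-in for Wang2's load-balancing degree over p_i(t): population
    # standard deviation of the per-core utilizations (assumption).
    degree = statistics.pstdev(utilization.values())
    if degree > BALANCE_THRESHOLD:
        # eliminate the CPU core with the largest load, then rehash
        heaviest = max(cores, key=lambda c: utilization[c])
        cores = [c for c in cores if c != heaviest]
    return cores[flow_hash % len(cores)]

# With core 1 at 95% utilization, new flows are hashed over cores 0, 2, 3 only.
util = {0: 0.30, 1: 0.95, 2: 0.25, 3: 0.20}
new_core = select_core(7, util)     # 7 % 3 == 1 -> cores[1] == core 2
```

When the utilizations are balanced, the degree stays under the threshold and all cores remain in the hash domain, matching the "converse" behavior relied on for claim 4.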
But, the combination of Wang and Wang2 does not explicitly disclose: wherein the hash table is updated by re-executing a hash function using only the subset of queues for the multiple cores that exclude the queues associated with first core; and sending the packet associated with the new communication flow to the queue associated with second core.
However, in the same field of endeavor, Men discloses wherein the hash table is updated by re-executing a hash function using only the subset of queues for the multiple cores that exclude the queues associated with first core; and sending the packet associated with the new communication flow to the queue associated with second core (page 3, lines 16-21 discloses the overall load balance of the CPU needs to be calculated. If the CPU load balance exceeds the threshold, the overloaded cores in the CPU core are removed (excluding the overloaded core/queue) and the Toeplitz hash value is recalculated (re-executing the hash function). After the calculation is completed, the data flow and the mapping relationship of the CPU core obtained according to the hash result are added to the key-core hash table (updating the hash table), and the traffic data is sent to the rte_ring of the corresponding CPU core, and the data flow information is appended to the key-stream hash table in the corresponding core).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the combination with those of Men. The modification would allow active core-load monitoring and dynamic RSS load balancing based on core load state that excludes overloaded cores from consideration during load balancing, for effective load distribution to only functioning cores.
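Men's recalculated Toeplitz hash can be illustrated with a minimal implementation. The 40-byte key below is the widely published Microsoft RSS verification key, and the five-tuple packing order (source address, destination address, source port, destination port) follows the common NDIS convention; both are assumptions with respect to Men, whose key and packing are not given in the citation:

```python
import socket
import struct

# Well-known 40-byte Microsoft RSS verification key (published with the NDIS
# RSS documentation); Men's actual key is not disclosed in the citation.
RSS_KEY = bytes.fromhex(
    "6d5a56da255b0ec24167253d43a38fb0"
    "d0ca2bcbae7b30b477cb2da38030f20c"
    "6a42b73bbeac01fa"
)

def toeplitz_hash(data: bytes, key: bytes = RSS_KEY) -> int:
    """XOR the 32-bit key window at each set input bit (Toeplitz hash)."""
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    result = 0
    for i, byte in enumerate(data):
        for bit in range(8):
            if byte & (0x80 >> bit):
                # 32-bit window of the key starting at bit position i*8 + bit
                result ^= (key_int >> (key_bits - 32 - (i * 8 + bit))) & 0xFFFFFFFF
    return result

def hash_five_tuple(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Hash an IPv4/TCP tuple in the assumed src-addr/dst-addr/ports order."""
    data = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
            + struct.pack(">HH", src_port, dst_port))
    return toeplitz_hash(data)
```

With the published NDIS verification vector (source 66.9.149.187:2794, destination 161.142.100.80:1766) this should reproduce the documented value 0x51ccc178; the resulting hash would then index an indirection table rebuilt over only the non-overloaded cores, as in Men.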
Regarding claim 4, the combination discloses the method of claim 1.
Wang2 further discloses wherein the hash table is not populated when no flags indicate that any of the multiple cores exceed the threshold (page 3, lines 24-27 and 45-50 when processing the new connection, the CPU load balancing degree needs to be calculated; if the load balance degree of the CPU exceeds the threshold value, the CPU core with the heaviest load is removed, and the Hash calculation is performed again. After finishing the calculation, distributing the data packet to the corresponding recombination core according to the hash value, and creating the record of the data stream in the stream table of the recombination core. Conversely, when the overload threshold is not exceeded, the hash is not recalculated and the table is not updated).
Regarding claim 5, the combination discloses the method of claim 1.
Wang discloses wherein the hash table is only populated for TCP flows ([0004] Generally, NICs can steer data flows, e.g., data packets, to any of a number of receive queues by way of Receive Side Scaling (RSS) or implementation of a flow director. Servers generally take advantage of such capabilities to distribute connections, e.g., transmission control protocol (TCP) connections, to different CPU cores for processing. [0005] The use of RSS typically includes application of a filter that applies a hash function over the packet headers of received data packets. An indirection table (hash table) can then be used to map each data packet to a certain receive queue, e.g., based on the corresponding hash value. The CPU cores can then be assigned to work on one or more specific queues in order to enable distributed processing. The system can be programmed to be used only for TCP flows).
Regarding claim 6, the combination discloses the method of claim 1.
Wang discloses wherein the hash table is only populated for TCP flows and UDP flows that are QUIC flows ([0004] Generally, NICs can steer data flows, e.g., data packets, to any of a number of receive queues by way of Receive Side Scaling (RSS) or implementation of a flow director. Servers generally take advantage of such capabilities to distribute connections, e.g., transmission control protocol (TCP) connections, to different CPU cores for processing. [0005] The use of RSS typically includes application of a filter that applies a hash function over the packet headers of received data packets. An indirection table can then be used to map each data packet to a certain receive queue, e.g., based on the corresponding hash value. The CPU cores can then be assigned to work on one or more specific queues in order to enable distributed processing. The system is capable of being programmed and used for any data flow encapsulated in an IP header with five-tuple header information to be hashed to generate a hash value, including protocols such as UDP and TCP).
Regarding claim 7, the combination discloses the method of claim 1.
CJ discloses wherein the hash table comprises an index to each entry, a five tuple for each flow in the hash table, and a queue number associated with one of the multiple cores ([0246] Each PE may reside and/or execute on one of a plurality of cores and/or clusters in the intermediary device. In some embodiments, the RSS hash may take tuple information (e.g., a five tuple, comprising source and destination IP addresses, source and destination ports, and transport protocol, determined from an incoming packet) as input. In various embodiments, tuple information may include one or more of source and/or destination IP addresses, source and/or destination ports, and transport protocol, and may include other information such as a MAC address and/or a virtual IP address. A corresponding RSS hash output may be used to direct an incoming packet to a specific PE (core) from a plurality of PEs. The index of the hash value information, mapped to a specific core/PE and its corresponding queue and stored in the hash table used to fetch and forward data, corresponds to the recited indexing).
Regarding claim 8, in the combination, Wang discloses a system for network traffic management in a system comprising a network interface card (NIC) (fig. 2, NIC 216) operatively coupled to a processing system with multiple cores (fig. 2, cores 1-n), the NIC configured to execute receiver side scaling (RSS), the system comprising: the NIC; and the processing system ([0004-0006] discloses [0004] NICs can steer data flows, e.g., data packets, to any of a number of receive queues by way of Receive Side Scaling (RSS). Servers generally take advantage of such capabilities to distribute connections, e.g., transmission control protocol (TCP) connections, to different CPU cores for processing. [0005] The use of RSS typically includes application of a filter that applies a hash function over the packet headers of received data packets. An indirection table can then be used to map each data packet to a certain receive queue, e.g., based on the corresponding hash value. The CPU cores can then be assigned to work on one or more specific queues in order to enable distributed processing. [0006] RSS usually involves the mapping of many data flows into a limited number of receive queues targeting a limited number of CPU cores); and
a computer-readable medium having encoded thereon computer-readable instructions that when executed by the processing system, cause the system to perform operations ([0002-0003] discloses the computing device 110 includes a central processing unit (CPU) 112 for executing instructions as well as a memory 114 for storing such instructions. The CPU 112 has n CPU cores. As used herein, the term core generally refers to a basic computation unit of the CPU. The memory 114 may include random access memory (RAM), flash memory, hard disks, solid state disks, optical disks, or any suitable combination thereof. [0003] The computing device 110 also includes a network interface card (NIC) 116 for enabling the computing device 110 to communicate with at least one other computing device 120, such as an external or otherwise remote device, by way of a communication medium such as a wired or wireless packet network, for example. The computing device 110 may thus transmit data to and/or receive data from the other computing device(s) by way of its NIC 116. For example, the NIC 116 has n receive queues for receiving data, e.g., ingress packets, from the other computing device(s)), comprising:
All other limitations of claim 8 are similar to the limitations of claim 1 above. Claim 8 is rejected based on the analysis of claim 1 above.
Regarding claim 11, the combination discloses the system of claim 8.
All other limitations of claim 11 are similar to the limitations of claim 4 above. Claim 11 is rejected based on the analysis of claim 4 above.
Regarding claim 12, the combination discloses the system of claim 8.
All other limitations of claim 12 are similar to the limitations of claim 5 and are rejected on a similar basis.
Regarding claim 13, the combination discloses the system of claim 8.
All other limitations of claim 13 are similar to the limitations of claim 6 and are rejected on a similar basis.
Regarding claim 14, the combination discloses the system of claim 8.
All other limitations of claim 14 are similar to the limitations of claim 7 and are rejected on a similar basis.
Regarding claim 15, the combination discloses a non-transitory computer-readable storage medium having encoded thereon computer-readable instructions that, when executed by a system, cause the system to perform operations comprising:
All other limitations of claim 15 are similar to the limitations of claim 1 and are rejected on a similar basis.
Regarding claim 18, the combination discloses the non-transitory computer-readable storage medium of claim 15.
All other limitations of claim 18 are similar to the limitations of claim 4 and are rejected on a similar basis.
Regarding claim 19, the combination discloses the non-transitory computer-readable storage medium of claim 15.
All other limitations of claim 19 are similar to the limitations of claim 5 and are rejected on a similar basis.
Regarding claim 20, the combination discloses the non-transitory computer-readable storage medium of claim 15.
All other limitations of claim 20 are similar to the limitations of claim 6 and are rejected on a similar basis.
Claim(s) 2, 9, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Wang (US PG Pub. No. 20180285151), Wang2 (CN 117294629 A), and Men (CN 115858152 B), further in view of CJ (US PG Pub. No. 20150124828).
Regarding claim 2, the combination discloses the method of claim 1.
But, the combination does not explicitly disclose: wherein the hash table is indexed based on a five tuple of communications flow packets.
However, in the same field of endeavor, CJ discloses wherein the hash table is indexed based on a five tuple of communications flow packets ([0246] Each PE may reside and/or execute on one of a plurality of cores and/or clusters in the intermediary device. In some embodiments, the RSS hash may take tuple information (e.g., a five tuple, comprising source and destination IP addresses, source and destination ports, and transport protocol, determined from an incoming packet) as input. In various embodiments, tuple information may include one or more of source and/or destination IP addresses, source and/or destination ports, and transport protocol, and may include other information such as a MAC address and/or a virtual IP address. A corresponding RSS hash output may be used to direct an incoming packet to a specific PE from a plurality of PEs in the intermediary device).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of the combination with CJ. The modification would allow an effective hashing system for efficient identification of the mapping relationships between components.
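CJ's five-tuple indexing, as cited in [0246], can be sketched as a flow table keyed by the five-tuple. All field names and values below are illustrative assumptions:

```python
from typing import NamedTuple

class FiveTuple(NamedTuple):
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str

# flow table: five-tuple key -> assigned queue number (one queue per core)
flow_table: dict[FiveTuple, int] = {}

flow = FiveTuple("10.0.0.1", "10.0.0.2", 49152, 443, "TCP")
flow_table[flow] = 2   # new flow assigned to queue 2

# later packets of the same flow index the table by their five-tuple
queue = flow_table[FiveTuple("10.0.0.1", "10.0.0.2", 49152, 443, "TCP")]
```

Because the five-tuple is the dictionary key, every packet of an established flow resolves to the same queue, which is the flow-affinity property the rejection relies on.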
Regarding claim 9, the combination discloses the system of claim 8.
All other limitations of claim 9 are similar to the limitations of claim 2 and are rejected on a similar basis.
Regarding claim 16, the combination discloses the non-transitory computer-readable storage medium of claim 15.
All other limitations of claim 16 are similar to the limitations of claim 2 and are rejected on a similar basis.
Claim(s) 3, 10, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Wang (US PG Pub. No. 20180285151), Wang2 (CN 117294629 A), and Men (CN 115858152 B), further in view of Tumuluru (US PG Pub. No. 20180069924).
Regarding claim 3, the combination discloses the method of claim 1.
But, the combination does not explicitly disclose: wherein the RSS function comprises a modulo function based on a total number of available cores of the multiple cores or a total number of available queues.
However, in the same field of endeavor, Tumuluru discloses wherein the RSS function comprises a modulo function based on a total number of available cores of the multiple cores or a total number of available queues ([0052] On the cloud computing system 150 side, L2 concentrator 185 also uses RSS and RPS to process received packets and distributes those packets across queues 450.sub.i. However, as the received packets are encapsulated using FOU, the packets may all have the same source and destination IP address in their outer headers and be placed based on RSS hashing into the same queue, which can create an undesirable bottleneck. One embodiment provides an enhancement to RPS that looks deeper in received packets at internal IP addresses rather than just IP addresses in the outer header. In one embodiment, L2 concentrator 185 determines whether a received packet is a FOU packet and, if such is the case, L2 concentrator 185 looks deeper at an IPsec outer IP address, which is used to hash and place the FOU packet in a receive queue associated with a CPU that removes the FOU header, decrypts and decapsulates the IPsec packet, and removes the GRE header. Doing so distributes FOU packets that would otherwise hash to the same queue across different queues, thereby providing performance parallelism. In a particular embodiment, the hash on the IPsec outer IP address may include computing the outer IP address modulo a number of available cores. After CPUs or cores 460.sub.i process packets, the packets are sent to respective transmit queues 470.sub.i for transmission over a cloud-side network to which L2 concentrator 185 is connected).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of the combination with Tumuluru. The modification would allow an effective hashing system for efficient identification of the mapping relationships between components.
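Tumuluru's modulo selection in [0052] reduces to hash mod (number of available cores); the sketch below applies that idea to a core list that may shrink as cores are excluded. Names and values are illustrative assumptions:

```python
def modulo_rss(flow_hash: int, available_cores: list[int]) -> int:
    """Select a core as hash modulo the total number of available cores."""
    return available_cores[flow_hash % len(available_cores)]

# all four cores available: 10 % 4 == 2 -> core 2
core_a = modulo_rss(10, [0, 1, 2, 3])
# core 1 excluded as overloaded: 10 % 3 == 1 -> second remaining core (core 2)
core_b = modulo_rss(10, [0, 2, 3])
```

Note that the divisor is the current count of available cores, so excluding a core automatically re-spreads flows over the remaining ones.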
Regarding claim 10, the combination discloses the system of claim 8.
All other limitations of claim 10 are similar to the limitations of claim 3 above. Claim 10 is rejected based on the analysis of claim 3 above.
Regarding claim 17, the combination discloses the non-transitory computer-readable storage medium of claim 15.
All other limitations of claim 17 are similar to the limitations of claim 3 and are rejected on a similar basis.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MESSERET F. GEBRE whose telephone number is (571) 272-8272. The examiner can normally be reached 9:00 AM-5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Oscar Louie, can be reached at 571-270-1684. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MESSERET F GEBRE/Primary Examiner, Art Unit 2445