DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the amendment filed 04 December 2025.
Claims 1, 12, and 20 were amended.
Claims 1-20 are pending and are addressed in this Office Action.
Response to Amendment
Applicants’ amendments and arguments with respect to claims 1-20, filed on 04 December 2025, have been fully considered but are moot in view of the new grounds of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-15, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bhooma (U.S. 8,699,339) in view of Haider et al. (U.S. 2023/0180052).
With respect to claim 1, Bhooma teaches a computer-implemented method comprising: detecting (Bhooma, col. 5, lines 23-31), by a computing device of a server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46), a set of in-flight requests (Bhooma, col. 5, lines 9-22) for an application (Bhooma, col. 4, lines 8-13); determining, by the computing device, that the set of in-flight requests (Bhooma, col. 5, lines 9-22) exceeds a predetermined threshold for (Bhooma, col. 5, lines 23-31) the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46); identifying, by the computing device, a first type of request (Bhooma, Fig. 3, element 304; col. 5, lines 16-19) and a second type of request (Bhooma, Fig. 3, element 302; col. 5, lines 9-11) in the set of in-flight requests (Bhooma, col. 5, lines 9-22); prioritizing, by performing a load-shedding process (Bhooma, col. 5, lines 23-31) for the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46), the first type of request over the second type of request (Bhooma, col. 5, lines 23-31); and executing a remaining set of requests of (Bhooma, col. 5, lines 34-40) the set of in-flight requests (Bhooma, col. 5, lines 9-22) for the application (Bhooma, col. 4, lines 8-13).
Bhooma does not explicitly teach wherein the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request.
However, Haider teaches wherein the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request (Haider, page 9, paragraphs 84-86).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhooma in view of Haider such that the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request. One would be motivated to do so in order to reduce medium contention, reduce energy and power consumption, and improve the power efficiency of the devices (Haider, page 2, paragraph 29); to conserve computational resources and/or achieve bandwidth efficiency (Haider, page 2, paragraph 32); and to prioritize and/or reserve time periods for latency-sensitive traffic (Haider, page 3, paragraph 33).
With respect to claim 2, the combination of Bhooma and Haider teaches the invention described in claim 1, including the method wherein detecting the set of in-flight requests comprises: receiving network traffic from at least one client device; and detecting at least one request for the application from the client device (Bhooma, Fig. 1, elements 110, 112; col. 3, lines 29-40).
The combination of references is made under the same rationale as claim 1 above.
With respect to claim 3, the combination of Bhooma and Haider teaches the invention described in claim 1, including the method wherein determining that the set of in-flight requests exceeds the predetermined threshold comprises at least one of: determining a total number of requests in the set of in-flight requests exceeds a threshold number of requests for the server group; or determining a system latency exceeds a threshold latency for the server group (Bhooma, col. 5, lines 23-31).
The combination of references is made under the same rationale as claim 1 above.
With respect to claim 4, the combination of Bhooma and Haider teaches the invention described in claim 3, including the method wherein the system latency comprises at least one of: a latency of at least one request in the set of in-flight requests (Bhooma, col. 5, lines 23-31); or a latency in a downstream service of the server group.
The combination of references is made under the same rationale as claim 1 above.
With respect to claim 5, the combination of Bhooma and Haider teaches the invention described in claim 1, including the method wherein the first type of request (Bhooma, Fig. 3, element 304; col. 5, lines 16-19) comprises a user-initiated request categorized by an application programming interface of the application (Bhooma, Fig. 1, elements 110, 112; col. 3, lines 29-40).
The combination of references is made under the same rationale as claim 1 above.
With respect to claim 7, the combination of Bhooma and Haider teaches the invention described in claim 1, including the method wherein prioritizing the first type of request over the second type of request comprises: executing all requests of the first type of request prior to executing any request of the second type of request (Bhooma, col. 6, lines 10-19); and dropping a request of the second type of request based on a timing of the request (Bhooma, col. 5, lines 16-22).
The combination of references is made under the same rationale as claim 1 above.
With respect to claim 8, the combination of Bhooma and Haider teaches the invention described in claim 1, including the method wherein prioritizing the first type of request over the second type of request comprises dynamically repurposing a reserved capacity of the server group for the first type of request (Bhooma, col. 5, lines 9-26).
The combination of references is made under the same rationale as claim 1 above.
With respect to claim 9, the combination of Bhooma and Haider teaches the invention described in claim 1, including the method further comprising isolating a request of the set of in-flight requests based on a type of the request (Bhooma, col. 6, lines 24-28).
The combination of references is made under the same rationale as claim 1 above.
With respect to claim 10, the combination of Bhooma and Haider teaches the invention described in claim 1, including the method further comprising: updating the set of in-flight requests for the application; determining that the updated set of in-flight requests does not exceed the predetermined threshold for the server group; and executing the updated set of in-flight requests (Bhooma, Fig. 3; col. 6, line 34 – col. 7, line 11).
The combination of references is made under the same rationale as claim 1 above.
With respect to claim 11, the combination of Bhooma and Haider teaches the invention described in claim 10, including the method wherein executing the updated set of in-flight requests comprises suspending the load-shedding process for the server group (Bhooma, Fig. 3; col. 6, line 34 – col. 7, line 30).
The combination of references is made under the same rationale as claim 1 above.
With respect to claim 12, Bhooma teaches a system comprising: a detection module, stored in memory, that detects (Bhooma, col. 5, lines 23-31), by a computing device of a server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46), a set of in-flight requests (Bhooma, col. 5, lines 9-22) for an application (Bhooma, col. 4, lines 8-13); a determination module, stored in memory, that determines, by the computing device, that the set of in-flight requests (Bhooma, col. 5, lines 9-22) exceeds a predetermined threshold for (Bhooma, col. 5, lines 23-31) the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46); an identification module, stored in memory, that identifies, by the computing device, a first type of request (Bhooma, Fig. 3, element 304; col. 5, lines 16-19) and a second type of request (Bhooma, Fig. 3, element 302; col. 5, lines 9-11) in the set of in-flight requests (Bhooma, col. 5, lines 9-22); a prioritization module, stored in memory, that prioritizes, by performing a load-shedding process (Bhooma, col. 5, lines 23-31) for the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46), the first type of request over the second type of request (Bhooma, col. 5, lines 23-31); an execution module, stored in memory, that executes a remaining set of requests of (Bhooma, col. 5, lines 34-40) the set of in-flight requests (Bhooma, col. 5, lines 9-22) for the application (Bhooma, col. 4, lines 8-13); and at least one processor that executes the detection module, the determination module, the identification module, the prioritization module, and the execution module (Bhooma, col. 2, line 66 – col. 3, line 8).
Bhooma does not explicitly teach wherein the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request.
However, Haider teaches wherein the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request (Haider, page 9, paragraphs 84-86).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhooma in view of Haider such that the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request. One would be motivated to do so in order to reduce medium contention, reduce energy and power consumption, and improve the power efficiency of the devices (Haider, page 2, paragraph 29); to conserve computational resources and/or achieve bandwidth efficiency (Haider, page 2, paragraph 32); and to prioritize and/or reserve time periods for latency-sensitive traffic (Haider, page 3, paragraph 33).
With respect to claim 13, the combination of Bhooma and Haider teaches the invention described in claim 12, including the system wherein the server group comprises a distributed system with a set of servers (Bhooma, Fig. 1, element 102; col. 3, lines 45-46) that services application requests (Bhooma, col. 5, lines 9-22) for a set of client devices (Bhooma, Fig. 1, elements 110, 112; col. 3, lines 29-40).
The combination of references is made under the same rationale as claim 12 above.
With respect to claim 14, the combination of Bhooma and Haider teaches the invention described in claim 13, including the system wherein the determination module determines that the set of in-flight requests exceeds the predetermined threshold for the server group by: detecting a total current capacity of the set of servers; and determining that an expected capacity to execute the set of in-flight requests exceeds the total current capacity of the set of servers (Bhooma, col. 5, lines 23-31).
The combination of references is made under the same rationale as claim 12 above.
With respect to claim 15, the combination of Bhooma and Haider teaches the invention described in claim 13, including the system wherein the detection module detects the set of in-flight requests for the application by receiving, at an application programming interface of the server group, at least one application request from an application programming interface of a client device in the set of client devices (Bhooma, Fig. 1, elements 110, 112; col. 3, lines 29-40).
The combination of references is made under the same rationale as claim 12 above.
With respect to claim 18, the combination of Bhooma and Haider teaches the invention described in claim 12, including the system wherein the load-shedding process comprises a process to: select at least one request of the second type of request; and drop the request (Bhooma, col. 5, lines 16-22).
The combination of references is made under the same rationale as claim 12 above.
With respect to claim 20, Bhooma teaches a computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to: detect (Bhooma, col. 5, lines 23-31), by the computing device of a server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46), a set of in-flight requests (Bhooma, col. 5, lines 9-22) for an application (Bhooma, col. 4, lines 8-13); determine, by the computing device, that the set of in-flight requests (Bhooma, col. 5, lines 9-22) exceeds a predetermined threshold for (Bhooma, col. 5, lines 23-31) the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46); identify, by the computing device, a first type of request (Bhooma, Fig. 3, element 304; col. 5, lines 16-19) and a second type of request (Bhooma, Fig. 3, element 302; col. 5, lines 9-11) in the set of in-flight requests (Bhooma, col. 5, lines 9-22); prioritize, by performing a load-shedding process (Bhooma, col. 5, lines 23-31) for the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46), the first type of request over the second type of request (Bhooma, col. 5, lines 23-31); and execute a remaining set of requests of (Bhooma, col. 5, lines 34-40) the set of in-flight requests (Bhooma, col. 5, lines 9-22) for the application (Bhooma, col. 4, lines 8-13).
Bhooma does not explicitly teach wherein the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request.
However, Haider teaches wherein the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request (Haider, page 9, paragraphs 84-86).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhooma in view of Haider such that the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request. One would be motivated to do so in order to reduce medium contention, reduce energy and power consumption, and improve the power efficiency of the devices (Haider, page 2, paragraph 29); to conserve computational resources and/or achieve bandwidth efficiency (Haider, page 2, paragraph 32); and to prioritize and/or reserve time periods for latency-sensitive traffic (Haider, page 3, paragraph 33).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Bhooma in view of Haider and further in view of Rai et al. (U.S. 10,810,784).
With respect to claim 6, Bhooma teaches the invention described in claim 1, including a computer-implemented method comprising: detecting (Bhooma, col. 5, lines 23-31), by a computing device of a server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46), a set of in-flight requests (Bhooma, col. 5, lines 9-22) for an application (Bhooma, col. 4, lines 8-13); determining, by the computing device, that the set of in-flight requests (Bhooma, col. 5, lines 9-22) exceeds a predetermined threshold for (Bhooma, col. 5, lines 23-31) the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46); identifying, by the computing device, a first type of request (Bhooma, Fig. 3, element 304; col. 5, lines 16-19) and a second type of request (Bhooma, Fig. 3, element 302; col. 5, lines 9-11) in the set of in-flight requests (Bhooma, col. 5, lines 9-22); prioritizing, by performing a load-shedding process (Bhooma, col. 5, lines 23-31) for the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46), the first type of request over the second type of request (Bhooma, col. 5, lines 23-31); and executing a remaining set of requests of (Bhooma, col. 5, lines 34-40) the set of in-flight requests (Bhooma, col. 5, lines 9-22) for the application (Bhooma, col. 4, lines 8-13); wherein the second type of request (Bhooma, Fig. 3, element 302; col. 5, lines 9-11) is initiated by a client device for the application (Bhooma, Fig. 1, elements 110, 112; col. 3, lines 29-40).
Bhooma does not explicitly teach wherein the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request.
However, Haider teaches wherein the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request (Haider, page 9, paragraphs 84-86).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhooma in view of Haider such that the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request. One would be motivated to do so in order to reduce medium contention, reduce energy and power consumption, and improve the power efficiency of the devices (Haider, page 2, paragraph 29); to conserve computational resources and/or achieve bandwidth efficiency (Haider, page 2, paragraph 32); and to prioritize and/or reserve time periods for latency-sensitive traffic (Haider, page 3, paragraph 33).
The combination of Bhooma and Haider does not explicitly teach wherein the second type of request comprises a prefetch request.
However, Rai teaches wherein the second type of request comprises a prefetch request (Rai, col. 15, lines 1-10).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Bhooma and Haider in view of Rai such that the second type of request comprises a prefetch request. One would be motivated to do so in order not to burden the memory system further with speculative prefetch requests (Rai, col. 15, lines 4-6).
Claims 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Bhooma in view of Haider and further in view of Chen et al. (U.S. 10,439,870).
With respect to claim 16, Bhooma teaches the invention described in claim 15, including a system comprising: a detection module, stored in memory, that detects (Bhooma, col. 5, lines 23-31), by a computing device of a server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46), a set of in-flight requests (Bhooma, col. 5, lines 9-22) for an application (Bhooma, col. 4, lines 8-13); a determination module, stored in memory, that determines, by the computing device, that the set of in-flight requests (Bhooma, col. 5, lines 9-22) exceeds a predetermined threshold for (Bhooma, col. 5, lines 23-31) the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46); an identification module, stored in memory, that identifies, by the computing device, a first type of request (Bhooma, Fig. 3, element 304; col. 5, lines 16-19) and a second type of request (Bhooma, Fig. 3, element 302; col. 5, lines 9-11) in the set of in-flight requests (Bhooma, col. 5, lines 9-22); a prioritization module, stored in memory, that prioritizes, by performing a load-shedding process (Bhooma, col. 5, lines 23-31) for the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46), the first type of request over the second type of request (Bhooma, col. 5, lines 23-31); an execution module, stored in memory, that executes a remaining set of requests of (Bhooma, col. 5, lines 34-40) the set of in-flight requests (Bhooma, col. 5, lines 9-22) for the application (Bhooma, col. 4, lines 8-13); and at least one processor that executes the detection module, the determination module, the identification module, the prioritization module, and the execution module (Bhooma, col. 2, line 66 – col. 3, line 8); and the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46).
Bhooma does not explicitly teach wherein the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request.
However, Haider teaches wherein the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request (Haider, page 9, paragraphs 84-86).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhooma in view of Haider such that the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request. One would be motivated to do so in order to reduce medium contention, reduce energy and power consumption, and improve the power efficiency of the devices (Haider, page 2, paragraph 29); to conserve computational resources and/or achieve bandwidth efficiency (Haider, page 2, paragraph 32); and to prioritize and/or reserve time periods for latency-sensitive traffic (Haider, page 3, paragraph 33).
The combination of Bhooma and Haider does not explicitly teach the system wherein the prioritization module comprises a concurrency limiter that determines a concurrency limit for executing application requests by the application programming interface of the server group.
However, Chen teaches the system wherein the prioritization module comprises a concurrency limiter that determines a concurrency limit for executing application requests by the application programming interface of the server group (Chen, col. 12, line 44 – col. 13, line 8).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Bhooma and Haider in view of Chen such that the prioritization module comprises a concurrency limiter that determines a concurrency limit for executing application requests by the application programming interface of the server group. One would be motivated to do so in order to adjust an allocation of the computing resource to the multi-tiered application based on the actual usage and the modeled setting (Chen, col. 1, lines 62-64).
With respect to claim 17, the combination of Bhooma, Haider, and Chen teaches the invention described in claim 16, including the system wherein the prioritization module prioritizes the first type of request over the second type of request in response to (Bhooma, col. 6, lines 10-19) the application programming interface of the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46) reaching the concurrency limit (Chen, col. 12, line 44 – col. 13, line 8).
The combination of references is made under the same rationale as claim 16 above.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Bhooma in view of Haider and further in view of Sharma et al. (U.S. 2025/0088838).
With respect to claim 19, Bhooma teaches the invention described in claim 12, including a system comprising: a detection module, stored in memory, that detects (Bhooma, col. 5, lines 23-31), by a computing device of a server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46), a set of in-flight requests (Bhooma, col. 5, lines 9-22) for an application (Bhooma, col. 4, lines 8-13); a determination module, stored in memory, that determines, by the computing device, that the set of in-flight requests (Bhooma, col. 5, lines 9-22) exceeds a predetermined threshold for (Bhooma, col. 5, lines 23-31) the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46); an identification module, stored in memory, that identifies, by the computing device, a first type of request (Bhooma, Fig. 3, element 304; col. 5, lines 16-19) and a second type of request (Bhooma, Fig. 3, element 302; col. 5, lines 9-11) in the set of in-flight requests (Bhooma, col. 5, lines 9-22); a prioritization module, stored in memory, that prioritizes, by performing a load-shedding process (Bhooma, col. 5, lines 23-31) for the server group (Bhooma, Fig. 1, element 102; col. 3, lines 45-46), the first type of request over the second type of request (Bhooma, col. 5, lines 23-31); an execution module, stored in memory, that executes a remaining set of requests of (Bhooma, col. 5, lines 34-40) the set of in-flight requests (Bhooma, col. 5, lines 9-22) for the application (Bhooma, col. 4, lines 8-13); and at least one processor that executes the detection module, the determination module, the identification module, the prioritization module, and the execution module (Bhooma, col. 2, line 66 – col. 3, line 8).
Bhooma does not explicitly teach wherein the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request.
However, Haider teaches wherein the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request (Haider, page 9, paragraphs 84-86).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhooma in view of Haider such that the load-shedding process comprises intentionally dropping at least one request of lower priority from network traffic based on a type of request. One would be motivated to do so in order to reduce medium contention, reduce energy and power consumption, and improve the power efficiency of the devices (Haider, page 2, paragraph 29); to conserve computational resources and/or achieve bandwidth efficiency (Haider, page 2, paragraph 32); and to prioritize and/or reserve time periods for latency-sensitive traffic (Haider, page 3, paragraph 33).
The combination of Bhooma and Haider does not explicitly teach the system wherein: the identification module further identifies a third type of request and a fourth type of request in the set of in-flight requests; and the prioritization module further prioritizes, by performing the load-shedding process for the server group, the third type of request over the fourth type of request.
However, Sharma teaches the system wherein: the identification module further identifies a third type of request (Sharma, page 2, paragraph 16, lines 9-26) and a fourth type of request (Sharma, page 2, paragraph 17, lines 1-5) in the set of in-flight requests; and the prioritization module further prioritizes, by performing the load-shedding process for the server group, the third type of request over the fourth type of request (Sharma, page 2, paragraph 16).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Bhooma and Haider in view of Sharma such that the identification module further identifies a third type of request and a fourth type of request in the set of in-flight requests, and the prioritization module further prioritizes, by performing the load-shedding process for the server group, the third type of request over the fourth type of request. One would be motivated to do so in order to prioritize active users over largely dormant devices (Sharma, pages 2-3, paragraph 21).
Conclusion
Applicants' amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. Applicants are reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alicia Baturay whose telephone number is (571) 272-3981. The examiner can normally be reached at 7am – 4pm, Mondays – Thursdays, Eastern Time.
Examiner interviews are available via telephone, in person, or video conferencing using a USPTO-supplied, web-based collaboration tool. To schedule an interview, Applicants are encouraged to use the USPTO Automated Interview Request (AIR) form at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamal Divecha can be reached at (571) 272-5863. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in .docx format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/Alicia Baturay/
Primary Examiner, Art Unit 2441
February 24, 2026