Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on July 05, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-7, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Cupersmith et al., U.S. Patent Application Publication No. US-20210304559-A1 (hereinafter Cupersmith), in view of Chen, International Patent Application Publication No. WO-2024059179-A1 (hereinafter Chen).
Regarding claim 1, Cupersmith discloses a robotic system comprising: a robot operable in an environment, the environment comprising a plurality of objects, the plurality of objects including a first object and a second object (Cupersmith in [0038] discloses, “a robot management system server is configured to manage a fleet of service robots that are deployed within a casino property ... dynamic data captured by on-board sensors for position determination and pathing, as well as on-board sensors for object detection and avoidance”. Furthermore, Cupersmith in [0114] discloses detecting a plurality of objects: “robot 300 to detect collisions between the robot 300 and other stationary or moving objects as the robot 300 moves through the operations venue or to detect other impacts to the robot 300”); an object recognition subsystem communicatively coupled to the robot (Cupersmith in [0038] discloses, “dynamic data captured by on-board sensors for position determination and pathing, as well as on-board sensors for object detection and avoidance”, wherein the ‘on-board’ sensors of the robot imply coupling to the robot); wherein the object recognition subsystem comprises at least one processor and at least one non-transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one non-transitory processor-readable storage medium storing processor-executable instructions and/or data that, when executed by the at least one processor, cause the robotic system to perform a method for recognizing objects in the environment (Cupersmith in [0040] discloses, “The service robots are configured with various hardware components that enable their intended operations, also referred to herein as the “service role” of the robot. For example, the robots may include hardware and sensors that enable movement of the robot and navigation throughout the venue (e.g., position determination, obstacle avoidance) ... The service robots also include central processing components that include one or more central processing units (“CPUs”), volatile and non-volatile memory, and a rechargeable power supply system configured to provide power for components of the robot”. Furthermore, Cupersmith in [0038] discloses the service robots detecting objects).
Cupersmith does not disclose the following limitations as further recited in the claim.
Chen discloses an interface to a large language model (LLM) communicatively coupled to the object recognition system (Chen in [0021] discloses, “a large language model (LLM) can be utilized for the planning, and the determined object descriptor(s) can be processed, using the LLM and along with the FF NL instruction”), wherein the method includes: assigning a first label to the first object (Chen in [0033] discloses, “Further, FIG. 1B2 illustrates that the candidate object descriptor "pear" has been determined to be relevant to the region of interest 184A”); sending, by the interface, a query to the LLM, the query comprising the first label; receiving, by the interface, a response from the LLM, the response in reply to the query, the response comprising a second label; and assigning the second label to the second object (Chen in [0032] discloses, “for instance, "sink" and "brush" can be generated based on prompting a large language model (LLM) using the FF NL instruction 105. Notably, even though the FF NL instruction 105 doesn't mention "sink", "brush", or any synonyms, prompting the LLM and analyzing resulting LLM output can still result in those object descriptors being determined to be potentially relevant to the task of the FF NL instruction 105”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Chen into the system of Cupersmith because it would allow the system to label a new object meaningfully even when there is no perfect match for the object in the database.
Summary of Citations (Cupersmith)
Paragraph [0038]; “a robot management system server is configured to manage a fleet of service robots that are deployed within a casino property ... dynamic data captured by on-board sensors for position determination and pathing, as well as on-board sensors for object detection and avoidance”.
Paragraph [0040]; “The service robots are configured with various hardware components that enable their intended operations, also referred to herein as the “service role” of the robot. For example, the robots may include hardware and sensors that enable movement of the robot and navigation throughout the venue (e.g., position determination, obstacle avoidance) ... The service robots also include central processing components that include one or more central processing units (“CPUs”), volatile and non-volatile memory, and a rechargeable power supply system configured to provide power for components of the robot”.
Paragraph [0114]; “the robot 300 includes one or more collision detection (or “impact”) sensors 368 . Impact sensors 368 are used by the robot 300 to detect collisions between the robot 300 and other stationary or moving objects as the robot 300 moves through the operations venue or to detect other impacts to the robot 300”.
Summary of Citations (Chen)
Paragraph [0021]; “a large language model (LLM) can be utilized for the planning, and the determined object descriptor(s) can be processed, using the LLM and along with the FF NL instruction”.
Paragraph [0032]; “for instance, "sink" and "brush" can be generated based on prompting a large language model (LLM) using the FF NL instruction 105. Notably, even though the FF NL instruction 105 doesn't mention "sink", "brush", or any synonyms, prompting the LLM and analyzing resulting LLM output can still result in those object descriptors being determined to be potentially relevant to the task of the FF NL instruction 105”.
Paragraph [0033]; “Further, FIG. 1B2 illustrates that the candidate object descriptor "pear" has been determined to be relevant to the region of interest 184A”.
Regarding claim 2, Chen in the combination discloses the robotic system of claim 1, the object recognition subsystem comprising a plurality of sensors and a sensor data processor, the method further comprising: scanning the environment, by the plurality of sensors, to generate sensor data (Chen in [0025] discloses, “Robot 110 also includes a vision component 111 ... The vision component 111 can be, for example, a monocular camera, a stereographic camera (active or passive), and/or a 3D laser scanner. A 3D laser scanner can include one or more lasers that emit light and one or more sensors that collect data related to reflections of the emitted light”); detecting, by the sensor data processor, the presence of the first object and the second object, wherein the detecting, by the sensor data processor, the presence of the first object and the second object is based at least in part on the sensor data (Chen in [0028] discloses, “The vision data instance 180 captures a pear and keys that are both present on the round table represented by feature 194”).
Summary of Citations (Chen)
Paragraph [0025]; “Robot 110 also includes a vision component 111 ... The vision component 111 can be, for example, a monocular camera, a stereographic camera (active or passive), and/or a 3D laser scanner. A 3D laser scanner can include one or more lasers that emit light and one or more sensors that collect data related to reflections of the emitted light”.
Paragraph [0028]; “The vision data instance 180 captures a pear and keys that are both present on the round table represented by feature 194”.
Regarding claim 3, Chen in the combination discloses the robotic system of claim 2, wherein the assigning, by the object recognition subsystem, a first label to the first object includes: identifying the first object based at least in part on the sensor data; and assigning a natural language label to the first object (Chen in [0032] discloses, “FIG. 1B2 illustrates candidate object descriptors 107 that each describe a corresponding object that is potentially relevant to the task of the FF NL instruction 105 of FIG. 1A ... For instance, "apple", "pear", and "banana" of the object descriptors 107 can be generated based on being determined to be members of a "fruit" class, and "fruit" being included in the FF NL instruction 105. Also, for instance, "sink" and "brush" can be generated based on prompting a large language model (LLM) using the FF NL instruction 105”).
Summary of Citations (Chen)
Paragraph [0032]; “FIG. 1B2 illustrates candidate object descriptors 107 that each describe a corresponding object that is potentially relevant to the task of the FF NL instruction 105 of FIG. 1A ... For instance, "apple", "pear", and "banana" of the object descriptors 107 can be generated based on being determined to be members of a "fruit" class, and "fruit" being included in the FF NL instruction 105. Also, for instance, "sink" and "brush" can be generated based on prompting a large language model (LLM) using the FF NL instruction 105”.
Regarding claim 5, Chen in the combination discloses the robotic system of claim 3, the method further comprising determining a degree of confidence in the identifying of the first object exceeds a determined confidence threshold, wherein the determining a degree of confidence in the identifying of the first object exceeds a determined confidence threshold includes determining a probability (Chen in [0106] discloses, “based on the LLM output and the skill description that is the natural language description of the robotic skill, to implement the robotic skill, includes: determining that the probability distribution, of the LLM output, indicates the skill description with a probability that satisfies a threshold degree of probability and that the probability is greater than other probabilities determined for other candidate skill descriptions of other candidate robotic skills performable by the robot ... selecting only the robotic skill and the other candidate robotic skills is based on comparing the skill descriptor and the other skill descriptors to the subset of object descriptors and/or to the region embeddings for the regions of interest”).
Summary of Citations (Chen)
Paragraph [0106]; “In some implementations, determining, based on the LLM output and the skill description that is the natural language description of the robotic skill, to implement the robotic skill, includes: determining that the probability distribution, of the LLM output, indicates the skill description with a probability that satisfies a threshold degree of probability and that the probability is greater than other probabilities determined for other candidate skill descriptions of other candidate robotic skills performable by the robot ... selecting only the robotic skill and the other candidate robotic skills is based on comparing the skill descriptor and the other skill descriptors to the subset of object descriptors and/or to the region embeddings for the regions of interest”.
Regarding claim 6, Chen in the combination discloses the robotic system of claim 2, wherein the scanning the environment, by the plurality of sensors, to generate sensor data includes generating at least one of image data, video data, audio data, or haptic data (Chen in [0025] discloses, “the stereographic camera generates, based on characteristics sensed by the two sensors, images that each includes a plurality of data points defining depth values and color values and/or grayscale values. For example, the stereographic camera can generate images that include a depth channel and red, blue, and/or green channels”).
Summary of Citations (Chen)
Paragraph [0025]; “A stereographic camera can include two or more sensors, each at a different vantage point. In some of those implementations, the stereographic camera generates, based on characteristics sensed by the two sensors, images that each includes a plurality of data points defining depth values and color values and/or grayscale values. For example, the stereographic camera can generate images that include a depth channel and red, blue, and/or green channels”.
Regarding claim 7, Cupersmith in the combination discloses the robotic system of claim 2, wherein the detecting, by the sensor data processor, the presence of the first object and the second object includes detecting, by the sensor data processor, the presence of the first object and the second object in real time (Cupersmith in [0099] discloses, “Robot navigation may be supported by an array of proximity sensors 380 (e.g., range sensors, camera devices, thermal cameras) which the robot uses to detect and avoid nearby obstacles while moving (e.g., walls, gaming devices, patrons, or the like)”, wherein detecting objects while moving implies detecting the presence of the first and second objects in real time).
Summary of Citations (Cupersmith)
Paragraph [0099]; “Robot navigation may be supported by an array of proximity sensors 380 (e.g., range sensors, camera devices, thermal cameras) which the robot uses to detect and avoid nearby obstacles while moving (e.g., walls, gaming devices, patrons, or the like)”.
Regarding claim 14, Chen in the combination discloses the robotic system of claim 1, wherein the assigning, by the object recognition subsystem, a first label to the first object includes: identifying the first object; and assigning a natural language label to the first object (Chen in [0032] discloses, “For instance, "apple", "pear", and "banana" of the object descriptors 107 can be generated based on being determined to be members of a "fruit" class, and "fruit" being included in the FF NL instruction 105. .... even though the FF NL instruction 105 doesn't mention "sink", "brush", or any synonyms, prompting the LLM and analyzing resulting LLM output can still result in those object descriptors being determined to be potentially relevant to the task of the FF NL instruction 105”).
Summary of Citations (Chen)
Paragraph [0032]; “For instance, "apple", "pear", and "banana" of the object descriptors 107 can be generated based on being determined to be members of a "fruit" class, and "fruit" being included in the FF NL instruction 105. .... even though the FF NL instruction 105 doesn't mention "sink", "brush", or any synonyms, prompting the LLM and analyzing resulting LLM output can still result in those object descriptors being determined to be potentially relevant to the task of the FF NL instruction 105”.
Regarding claim 15, the claim is essentially claim 5 with a different dependency. Therefore, the rejection analysis and motivation to combine set forth for claim 5 apply to claim 15.
Claims 4, 8-13, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cupersmith in view of Chen, and further in view of Fairfield, U.S. Patent No. US-10761542-B1 (hereinafter Fairfield).
Regarding claim 4, Chen in the combination discloses the robotic system of claim 3, wherein the sending, by the interface, a query to the LLM includes formulating a natural language statement (Chen in [0064] discloses, “At sub-block 454B, the system prompts an LLM, based on the FF NL instruction, to generate object descriptor(s)”).
Cupersmith and Chen in the combination do not disclose the following limitation as further recited in the claim.
Fairfield discloses the natural language statement comprising the natural language label assigned to the first object (Fairfield in [Column 27, Lines 32-36] discloses, “the computing system or other computing entity may generate a natural-language question, statement, and/or any other alertness data information based on a result of the vehicle's object detection of the object”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Fairfield into the system of Cupersmith in view of Chen because it would improve the relevance and accuracy of the LLM response. Without the label, the response would be ambiguous.
Summary of Citations (Chen)
Paragraph [0064]; “At sub-block 454B, the system prompts an LLM, based on the FF NL instruction, to generate object descriptor(s)”.
Summary of Citations (Fairfield)
[Column 27, Lines 32-36]; “the computing system or other computing entity may generate a natural-language question, statement, and/or any other alertness data information based on a result of the vehicle's object detection of the object”.
Regarding claim 8, Chen in the combination discloses the robotic system of claim 2.
Cupersmith and Chen in the combination do not disclose the following limitation as further recited in the claim.
Fairfield discloses the method further comprising assigning, by the object recognition subsystem, a third label to the second object, wherein the assigning, by the object recognition subsystem, a third label to the second object includes: identifying the second object based at least in part on the sensor data; and determining a degree of confidence in the identifying of the second object fails to exceed a determined confidence threshold (Fairfield in [Column 16, Line 66 through Column 17, Line 8] discloses, “the natural-language question may be based on the preliminary identification of the object, so as to ask the human operator to confirm whether the preliminary identification is correct. In either case, the natural-language question may not include the correct identity of the object in some scenarios. For instance, if the vehicle has threshold low confidence that the object is a traffic signal with a green light, even though the object in reality is a traffic signal with a red light, the natural-language question may read “Is the light in this traffic signal green?””. Furthermore, Fairfield discloses identifying objects based on sensor data in [Column 14, Lines 31-33]: “A processor in the vehicle may be configured to detect various objects of the environment based on environment data from various sensors”).
Summary of Citations (Fairfield)
[Column 16, Line 66 through Column 17, Line 8]; “the natural-language question may be based on the preliminary identification of the object, so as to ask the human operator to confirm whether the preliminary identification is correct. In either case, the natural-language question may not include the correct identity of the object in some scenarios. For instance, if the vehicle has threshold low confidence that the object is a traffic signal with a green light, even though the object in reality is a traffic signal with a red light, the natural-language question may read “Is the light in this traffic signal green?””
[Column 14, Lines 31-33]; “A processor in the vehicle may be configured to detect various objects of the environment based on environment data from various sensors”.
Regarding claim 9, Fairfield in the combination discloses the robotic system of claim 8, wherein the assigning, by the object recognition subsystem, the second label to the second object includes updating the degree of confidence in the identifying of the second object (Fairfield in [Column 16, Line 66 through Column 17, Line 8] discloses asking for human intervention if the confidence score is below a threshold. Additionally, [Column 21, Lines 33-36] discloses confirming the identification, which implies updating the degree of confidence in the identifying of the object).
Summary of Citations (Fairfield)
[Column 16, Line 66 through Column 17, Line 8]; “the natural-language question may be based on the preliminary identification of the object, so as to ask the human operator to confirm whether the preliminary identification is correct. In either case, the natural-language question may not include the correct identity of the object in some scenarios. For instance, if the vehicle has threshold low confidence that the object is a traffic signal with a green light, even though the object in reality is a traffic signal with a red light, the natural-language question may read “Is the light in this traffic signal green?””
[Column 21, Lines 33-36]; “In the example depicted in FIG. 4E, the human operator may indicate a natural-language question 420 to identify the object identified as the temporary stop sign 404. Additionally, when an identification is confirmed”.
Regarding claim 10, Chen in the combination discloses the robotic system of claim 1, wherein the sending, by the interface, a query to the LLM includes formulating a natural language statement (Chen in [0064] discloses, “At sub-block 454B, the system prompts an LLM, based on the FF NL instruction, to generate object descriptor(s)”).
Cupersmith and Chen in the combination do not disclose the following limitation as further recited in the claim.
Fairfield discloses the natural language statement comprising the first label (Fairfield in [Column 27, Lines 32-36] discloses, “the computing system or other computing entity may generate a natural-language question, statement, and/or any other alertness data information based on a result of the vehicle's object detection of the object”).
Summary of Citations (Chen)
Paragraph [0064]; “At sub-block 454B, the system prompts an LLM, based on the FF NL instruction, to generate object descriptor(s)”.
Summary of Citations (Fairfield)
[Column 27, Lines 32-36]; “the computing system or other computing entity may generate a natural-language question, statement, and/or any other alertness data information based on a result of the vehicle's object detection of the object”.
Regarding claim 11, Chen in the combination discloses the robotic system of claim 10, wherein the formulating a natural language statement includes structuring the natural language statement to cause the response from the LLM to follow a defined structure (Chen in [0064] discloses, “processing NL input using the LLM model can generate LLM output that includes a probability distribution, over candidate word compositions, where the probability distribution can be utilized to select word composition(s) and, due to training of the LLM, the selected word composition(s) will be relevant to the NL input”).
Summary of Citations (Chen)
Paragraph [0064]; “processing NL input using the LLM model can generate LLM output that includes a probability distribution, over candidate word compositions, where the probability distribution can be utilized to select word composition(s) and, due to training of the LLM, the selected word composition(s) will be relevant to the NL input”.
Regarding claim 12, Chen in the combination discloses the robotic system of claim 11, wherein the receiving, by the interface, a response from the LLM includes: receiving a natural language statement, the natural language statement comprising a natural language label; and parsing the natural language statement to extract the natural language label (In [0063], Chen discloses that the system can extract (parse) an object label: “the system can extract noun(s) and/or adjective(s) from the FF NL instruction directly. For instance, if the NL instruction is "give me some first-aid items", "first-aid items" can be extracted”. Furthermore, in [0064], Chen discloses extracting the object label "switch" from the output of an LLM: “if the FF NL instruction is "light up the room", the system can prompt the LLM, based on the FF NL instruction, to generate LLM output that indicates object descriptor(s) that include "switch". It is noted that the LLM output can indicate "switch" despite the FF NL instruction not including that term or any synonyms of that term”).
Summary of Citations (Chen)
Paragraph [0063]; “the system can extract noun(s) and/or adjective(s) from the FF NL instruction directly. For instance, if the NL instruction is "give me some first-aid items", "first-aid items" can be extracted”.
Paragraph [0064]; “if the FF NL instruction is "light up the room", the system can prompt the LLM, based on the FF NL instruction, to generate LLM output that indicates object descriptor(s) that include "switch". It is noted that the LLM output can indicate "switch" despite the FF NL instruction not including that term or any synonyms of that term”.
Regarding claim 13, Chen in the combination discloses the robotic system of claim 12, wherein the assigning, by the object recognition subsystem, a second label to the second object includes assigning the natural language label to the second object (Chen in [0032] discloses, “For instance, "apple", "pear", and "banana" of the object descriptors 107 can be generated based on being determined to be members of a "fruit" class, and "fruit" being included in the FF NL instruction 105. .... even though the FF NL instruction 105 doesn't mention "sink", "brush", or any synonyms, prompting the LLM and analyzing resulting LLM output can still result in those object descriptors being determined to be potentially relevant to the task of the FF NL instruction 105”).
Summary of Citations (Chen)
Paragraph [0032]; “For instance, "apple", "pear", and "banana" of the object descriptors 107 can be generated based on being determined to be members of a "fruit" class, and "fruit" being included in the FF NL instruction 105. .... even though the FF NL instruction 105 doesn't mention "sink", "brush", or any synonyms, prompting the LLM and analyzing resulting LLM output can still result in those object descriptors being determined to be potentially relevant to the task of the FF NL instruction 105”.
Regarding claim 16, Chen in the combination discloses the robotic system of claim 14, wherein the sending, by the interface, a query to the LLM includes formulating a natural language statement (Chen in [0064] discloses, “At sub-block 454B, the system prompts an LLM, based on the FF NL instruction, to generate object descriptor(s)”).
Cupersmith and Chen in the combination do not disclose the following limitation as further recited in the claim.
Fairfield discloses the natural language statement comprising the natural language label (Fairfield in [Column 27, Lines 32-36] discloses, “the computing system or other computing entity may generate a natural-language question, statement, and/or any other alertness data information based on a result of the vehicle's object detection of the object”).
Summary of Citations (Chen)
Paragraph [0064]; “At sub-block 454B, the system prompts an LLM, based on the FF NL instruction, to generate object descriptor(s)”.
Summary of Citations (Fairfield)
[Column 27, Lines 32-36]; “the computing system or other computing entity may generate a natural-language question, statement, and/or any other alertness data information based on a result of the vehicle's object detection of the object”.
Regarding claim 17, the claim is essentially claim 11 with a different dependency. Therefore, the rejection analysis and motivation to combine set forth for claim 11 apply to claim 17.
Regarding claim 18, the claim is essentially claim 12 with a different dependency. Therefore, the rejection analysis and motivation to combine set forth for claim 12 apply to claim 18.
Regarding claim 19, Chen in the combination discloses the robotic system of claim 1.
Cupersmith and Chen in the combination do not disclose the following limitation as further recited in the claim.
Fairfield discloses the method further comprising assigning, by the object recognition subsystem, a third label to the second object, wherein the assigning, by the object recognition subsystem, a second label to the second object includes comparing the second label with the third label (Fairfield in [Column 16, Line 66 through Column 17, Line 8] discloses, “the natural-language question may be based on the preliminary identification of the object, so as to ask the human operator to confirm whether the preliminary identification is correct. In either case, the natural-language question may not include the correct identity of the object in some scenarios. For instance, if the vehicle has threshold low confidence that the object is a traffic signal with a green light, even though the object in reality is a traffic signal with a red light, the natural-language question may read “Is the light in this traffic signal green?””).
Summary of Citations (Fairfield)
[Column 16, Line 66 through Column 17, Line 8]; “the natural-language question may be based on the preliminary identification of the object, so as to ask the human operator to confirm whether the preliminary identification is correct. In either case, the natural-language question may not include the correct identity of the object in some scenarios. For instance, if the vehicle has threshold low confidence that the object is a traffic signal with a green light, even though the object in reality is a traffic signal with a red light, the natural-language question may read “Is the light in this traffic signal green?””
Regarding claim 20, the claim is essentially claim 9 with a different dependency. Therefore, the rejection analysis and motivation to combine set forth for claim 9 apply to claim 20.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAID MUHAMMAD SALEH whose telephone number is (703) 756-1684. The examiner can normally be reached M-F, 8 am - 5 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZAID MUHAMMAD SALEH/
Examiner, Art Unit 2668
12/13/2025
/VU LE/Supervisory Patent Examiner, Art Unit 2668