Prosecution Insights
Last updated: April 19, 2026
Application No. 18/902,759

Self-Moving Device, Server, And Automatic Working System Thereof

Non-Final OA: §103, §DP

Filed: Sep 30, 2024
Examiner: NGUYEN, BAO LONG T
Art Unit: 3656
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Positec Power Tools (Suzhou) Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 0m
Grant Probability with Interview: 90%

Examiner Intelligence

Grants 83% of resolved cases, above average.

Career Allow Rate: 83% (447 granted / 540 resolved), +30.8% vs TC avg
Interview Lift: +7.0% (moderate), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 0m avg prosecution; 26 applications currently pending
Career History: 566 total applications across all art units
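As a sanity check on the headline numbers above: the 83% career allow rate follows directly from the 447/540 resolved-case record, and the +7.0% interview lift matches the gap between the 90% with-interview figure and the 83% baseline grant probability:

```python
# Verify the dashboard's headline examiner statistics.
granted, resolved = 447, 540
allow_rate = granted / resolved * 100            # career allowance rate, in percent
print(f"Career allow rate: {allow_rate:.1f}%")   # 82.8%, displayed rounded as 83%

baseline, with_interview = 83, 90                # grant probabilities, in percent
interview_lift = with_interview - baseline
print(f"Interview lift: +{interview_lift:.1f}%") # +7.0%
```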

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 38.9% (-1.1% vs TC avg)
§102: 18.9% (-21.1% vs TC avg)
§112: 30.2% (-9.8% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 540 resolved cases.
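One consistency check on the table above: subtracting each reported delta from its per-statute rate should recover the Tech Center average estimate. Doing so yields 40.0% for all four statutes, which suggests (an inference from the numbers, not something the report states) that every statute is compared against a single TC-wide estimate:

```python
# Recover the implied Tech Center average from each (rate, delta) pair.
stats = {
    "101": (5.3, -34.7),
    "103": (38.9, -1.1),
    "102": (18.9, -21.1),
    "112": (30.2, -9.8),
}
for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)  # rate = TC avg + delta, so TC avg = rate - delta
    print(f"section {statute}: implied TC average = {tc_avg}%")  # 40.0% in every case
```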

Office Action

§103 §DP
DETAILED ACTION

This is a non-final office action on the merits. Claims 1-20 are pending and addressed below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 9/30/2024 and 12/31/2024 are being considered by the examiner. Some documents listed but not submitted were found in the files of parent application 17048566. Listed documents that cannot be found are lined through and not considered. Non-English documents have been considered insofar as the drawings and translated portions provided therein allow (see MPEP 609).

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "self-moving device," "image detection module," "first recognition module," "first communication module," "control module," "second recognition module," and "second communication module" in claim 1; "software update module" and "communication module" in claim 10; and "image detection module," "first recognition module," "first communication module," and "control module" in claim 11.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq.
for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 11, 14-15, 17, and 19-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 and 5-8 of U.S. Patent No. 12135562. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-3 and 5-8 of U.S. Patent No. 12135562 contain all limitations of claims 11, 14-15, 17, and 19-20.

Claims 1, 5, 6, and 8 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 and 5-7 of U.S. Patent No.
12135562 in view of YAN (CN 104575489 a reference listed in IDS 10/16/2020, and translation has been provided and cited). Regarding claim 1, claims 1-3, 5-7 of U.S. Patent No. 12135562 teach all limitations except: the server includes: a second recognition module configured to recognize the specific object in the image based on the environmental image to generate a second recognition signal; and a second communication module, communicatively connected to the first communication module, configured to receive the environmental image and/or the first recognition signal, and send the second recognition signal; However, YAN teaches: Also a server configured to communicate with the self-moving device; the server includes: a second recognition module configured to recognize the specific object in the image based on the environmental image to generate a second recognition signal; and a second communication module, communicatively connected to the first communication module, configured to receive the environmental image and/or the first recognition signal, and send the second recognition signal; (at least figs. 1-3, pages 1-3 discuss robot 101, data center 102, page 2 discuss “The image signal is processed from the processor 2. When the robot walks on the road, road surface information, including red and green signal lights of the crossroad, pedestrians and vehicles on the road, and information such as vehicles on the road are obtained through the camera, the information is quickly processed and analyzed from the processor 2, and the objects on the road are quickly classified and recognized from the processor 2, so that the robot can efficiently and accurately perform patrol and other tasks. 
In addition, when executing the security task, the robot can obtain the face information of the person by means of the camera, and then quickly process the face information from the processor 2 for face recognition, so as to quickly distinguish whether there is a prisoner”; discuss “The processor 4 can also quickly process and analyze the obstacle information collected by the robot through the camera and the ultrasonic module, and after the obstacle is analyzed, the robot is helped to perform obstacle avoidance or obstacle cleaning; page 3 discuss “For the identification of the target image signal, the signal processing and analysis module in the robot performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule”; page 3 discuss “When the target signal cannot be identified by the robot itself due to blurring or obstacle interference or coverage or no corresponding target library of the robot itself, the signal acquired by the robot will be transmitted to the data center for processing and analysis”; discuss “After receiving the unrecognizable target signal sent by the robot, the data center analyzes the signal”; discuss “For the recognition of the target image signal, the data center performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule. The data center returns the identified result related to the target information to the robot for the next operation.”; page 1-2 discuss “The robot 101 includes at least a signal acquisition module, a signal analysis module, and a first wireless communication module. The data center 102 includes a second wireless communication module and a data analysis module”, discuss “Specifically, the signal collection module is configured to collect at least one form of signal corresponding to the target. 
The signal analysis module is configured to analyze the signal acquired by the signal acquisition module to obtain an identification result. The first wireless communication module is configured to forward the signal acquired by the signal acquisition module to the data center after the signal analysis module cannot obtain the recognition result. The data analysis module is configured to receive the signal through the second wireless communication module, analyze the signal to obtain an identification result, and return the identification result to the robot”) for recognition (pages 1-3).

It would have been obvious to one of ordinary skill in the art at the time of filing and at the time of the invention to modify the system and method of claims 1-3, 5-7 of U.S. Patent No. 12135562 with a server configured to communicate with the self-moving device, the server including: a second recognition module configured to recognize the specific object in the image based on the environmental image to generate a second recognition signal; and a second communication module, communicatively connected to the first communication module, configured to receive the environmental image and/or the first recognition signal, and send the second recognition signal, as taught by YAN, for recognition.

Regarding claims 5, 6, and 8, claims 1-3, 5-7 of U.S. Patent No. 12135562 teach these claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5-6, 10, 11, 14-15, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over YAN (CN 104575489, a reference listed in the IDS of 10/16/2020; a translation has been provided and cited) in view of Gibbon et al. (US 20160163029).

Regarding claims 1 and 11, YAN teaches: An automatic working system comprising: a self-moving device; and a server configured to communicate with the self-moving device, wherein: the self-moving device includes: an image detection module configured to detect an environment around the self-moving device to generate an environmental image; a first recognition module configured to recognize a specific object in an image based on the environmental image to generate a first recognition signal; a first communication module communicatively connected to the server; and a control module configured to selectively control the first communication module to send the environmental image and/or the first recognition signal to the server; the server includes: a second recognition module configured to recognize the specific object in the image based on the environmental image to generate a second recognition signal; and a second communication module, communicatively connected to the first communication module, configured to receive the environmental image and/or the first recognition signal, and send the second recognition signal; the control
module controls an action of the self-moving device based on the first recognition signal and/or the second recognition signal; and when the control module determines that the first recognition signal does not meet a preset condition, the control module controls the first communication module to send the environmental image to the server and receive the second recognition signal, (at least figs. 1-3, pages 1-3 discuss robot 101, data center 102, page 2 discuss “The image signal is processed from the processor 2. When the robot walks on the road, road surface information, including red and green signal lights of the crossroad, pedestrians and vehicles on the road, and information such as vehicles on the road are obtained through the camera, the information is quickly processed and analyzed from the processor 2, and the objects on the road are quickly classified and recognized from the processor 2, so that the robot can efficiently and accurately perform patrol and other tasks. In addition, when executing the security task, the robot can obtain the face information of the person by means of the camera, and then quickly process the face information from the processor 2 for face recognition, so as to quickly distinguish whether there is a prisoner”; discuss “The processor 4 can also quickly process and analyze the obstacle information collected by the robot through the camera and the ultrasonic module, and after the obstacle is analyzed, the robot is helped to perform obstacle avoidance or obstacle cleaning; page 3 discuss “For the identification of the target image signal, the signal processing and analysis module in the robot performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule”; page 3 discuss “When the target signal cannot be identified by the robot itself due to blurring or obstacle interference or coverage or no corresponding target library of the 
robot itself, the signal acquired by the robot will be transmitted to the data center for processing and analysis”; discuss “After receiving the unrecognizable target signal sent by the robot, the data center analyzes the signal”; discuss “For the recognition of the target image signal, the data center performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule. The data center returns the identified result related to the target information to the robot for the next operation.”; page 1-2 discuss “The robot 101 includes at least a signal acquisition module, a signal analysis module, and a first wireless communication module. The data center 102 includes a second wireless communication module and a data analysis module”, discuss “Specifically, the signal collection module is configured to collect at least one form of signal corresponding to the target. The signal analysis module is configured to analyze the signal acquired by the signal acquisition module to obtain an identification result. The first wireless communication module is configured to forward the signal acquired by the signal acquisition module to the data center after the signal analysis module cannot obtain the recognition result. The data analysis module is configured to receive the signal through the second wireless communication module, analyze the signal to obtain an identification result, and return the identification result to the robot); YAN does not explicitly teach:. the preset condition including that the first recognition signal is generated within a first preset time; However, Gibbon et al. teaches: the preset condition including that the first recognition signal is generated within a first preset time (at least fig. 
1 [0012]-[0042] discussed image recognition/facial recognition/object recognition, confidence threshold 130, temporal component/time, discussed “As another example, the confidence threshold 130 and/or an associated determination of whether the confidence threshold 130 is satisfied may change over time (e.g., a “decaying” confidence level). To illustrate, a confidence level may decrease over time following an event (e.g., a “last user action”). If a threshold time period has elapsed following the event, the confidence level may no longer satisfy the confidence threshold 130”; in particular [0025]-[0026]) for recognition ([0012]-[0042] [0025]-[0026]); It would have been obvious to one of ordinary skill in the art at the time of filing and at the time of the invention to modify the system and method of YAN with the preset condition including that the first recognition signal is generated within a first preset time; as taught by Gibbon et al. for recognition. Regarding claim 5, YAN teaches: wherein when the first communication module is connected to the second communication module, the control module sends the environmental image and/or the first recognition signal to the server, and receives the second recognition signal; (at least figs. 1-3, pages 1-3 discuss robot 101, data center 102, page 2 discuss “The image signal is processed from the processor 2. When the robot walks on the road, road surface information, including red and green signal lights of the crossroad, pedestrians and vehicles on the road, and information such as vehicles on the road are obtained through the camera, the information is quickly processed and analyzed from the processor 2, and the objects on the road are quickly classified and recognized from the processor 2, so that the robot can efficiently and accurately perform patrol and other tasks. 
In addition, when executing the security task, the robot can obtain the face information of the person by means of the camera, and then quickly process the face information from the processor 2 for face recognition, so as to quickly distinguish whether there is a prisoner”; discuss “The processor 4 can also quickly process and analyze the obstacle information collected by the robot through the camera and the ultrasonic module, and after the obstacle is analyzed, the robot is helped to perform obstacle avoidance or obstacle cleaning; page 3 discuss “For the identification of the target image signal, the signal processing and analysis module in the robot performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule”; page 3 discuss “When the target signal cannot be identified by the robot itself due to blurring or obstacle interference or coverage or no corresponding target library of the robot itself, the signal acquired by the robot will be transmitted to the data center for processing and analysis”; discuss “After receiving the unrecognizable target signal sent by the robot, the data center analyzes the signal”; discuss “For the recognition of the target image signal, the data center performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule. The data center returns the identified result related to the target information to the robot for the next operation.”; page 1-2 discuss “The robot 101 includes at least a signal acquisition module, a signal analysis module, and a first wireless communication module. The data center 102 includes a second wireless communication module and a data analysis module”, discuss “Specifically, the signal collection module is configured to collect at least one form of signal corresponding to the target. 
The signal analysis module is configured to analyze the signal acquired by the signal acquisition module to obtain an identification result. The first wireless communication module is configured to forward the signal acquired by the signal acquisition module to the data center after the signal analysis module cannot obtain the recognition result. The data analysis module is configured to receive the signal through the second wireless communication module, analyze the signal to obtain an identification result, and return the identification result to the robot); Regarding claim 6, YAN teaches: wherein if the second recognition signal is received within a second preset time, the control module controls a movement pattern of the self-moving device based on the second recognition signal; (at least figs. 1-3, pages 1-3 discuss robot 101, data center 102, page 2 discuss “The image signal is processed from the processor 2. When the robot walks on the road, road surface information, including red and green signal lights of the crossroad, pedestrians and vehicles on the road, and information such as vehicles on the road are obtained through the camera, the information is quickly processed and analyzed from the processor 2, and the objects on the road are quickly classified and recognized from the processor 2, so that the robot can efficiently and accurately perform patrol and other tasks. 
In addition, when executing the security task, the robot can obtain the face information of the person by means of the camera, and then quickly process the face information from the processor 2 for face recognition, so as to quickly distinguish whether there is a prisoner”; discuss “The processor 4 can also quickly process and analyze the obstacle information collected by the robot through the camera and the ultrasonic module, and after the obstacle is analyzed, the robot is helped to perform obstacle avoidance or obstacle cleaning; page 3 discuss “For the identification of the target image signal, the signal processing and analysis module in the robot performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule”; page 3 discuss “When the target signal cannot be identified by the robot itself due to blurring or obstacle interference or coverage or no corresponding target library of the robot itself, the signal acquired by the robot will be transmitted to the data center for processing and analysis”; discuss “After receiving the unrecognizable target signal sent by the robot, the data center analyzes the signal”; discuss “For the recognition of the target image signal, the data center performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule. The data center returns the identified result related to the target information to the robot for the next operation.”; page 1-2 discuss “The robot 101 includes at least a signal acquisition module, a signal analysis module, and a first wireless communication module. The data center 102 includes a second wireless communication module and a data analysis module”, discuss “Specifically, the signal collection module is configured to collect at least one form of signal corresponding to the target. 
The signal analysis module is configured to analyze the signal acquired by the signal acquisition module to obtain an identification result. The first wireless communication module is configured to forward the signal acquired by the signal acquisition module to the data center after the signal analysis module cannot obtain the recognition result. The data analysis module is configured to receive the signal through the second wireless communication module, analyze the signal to obtain an identification result, and return the identification result to the robot; the identification result/second recognition signal would take a certain amount of time to be generated and transmitted back to the robot, and this certain amount of time would be more than instantaneous and less than infinite);

Regarding claim 10, YAN teaches: wherein the server further comprises an update module configured to generate an update data packet based on the environmental image and/or the first recognition signal, and a communication module configured to send the update data packet to the self-moving device (at least figs. 1-3, pages 1-3 discuss robot 101, data center 102, page 2 discuss “The image signal is processed from the processor 2. When the robot walks on the road, road surface information, including red and green signal lights of the crossroad, pedestrians and vehicles on the road, and information such as vehicles on the road are obtained through the camera, the information is quickly processed and analyzed from the processor 2, and the objects on the road are quickly classified and recognized from the processor 2, so that the robot can efficiently and accurately perform patrol and other tasks.
In addition, when executing the security task, the robot can obtain the face information of the person by means of the camera, and then quickly process the face information from the processor 2 for face recognition, so as to quickly distinguish whether there is a prisoner”; discuss “The processor 4 can also quickly process and analyze the obstacle information collected by the robot through the camera and the ultrasonic module, and after the obstacle is analyzed, the robot is helped to perform obstacle avoidance or obstacle cleaning; page 3 discuss “For the identification of the target image signal, the signal processing and analysis module in the robot performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule”; page 3 discuss “When the target signal cannot be identified by the robot itself due to blurring or obstacle interference or coverage or no corresponding target library of the robot itself, the signal acquired by the robot will be transmitted to the data center for processing and analysis”; discuss “After receiving the unrecognizable target signal sent by the robot, the data center analyzes the signal”; discuss “For the recognition of the target image signal, the data center performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule. The data center returns the identified result related to the target information to the robot for the next operation.”; page 1-2 discuss “The robot 101 includes at least a signal acquisition module, a signal analysis module, and a first wireless communication module. The data center 102 includes a second wireless communication module and a data analysis module”, discuss “Specifically, the signal collection module is configured to collect at least one form of signal corresponding to the target. 
The signal analysis module is configured to analyze the signal acquired by the signal acquisition module to obtain an identification result. The first wireless communication module is configured to forward the signal acquired by the signal acquisition module to the data center after the signal analysis module cannot obtain the recognition result. The data analysis module is configured to receive the signal through the second wireless communication module, analyze the signal to obtain an identification result, and return the identification result to the robot); YAN does not explicitly teach: update module includes software update module, and the control module is configured to update the first recognition module based on the update data packet; However, Gibbon et al. teaches: update module includes software update module, and the control module is configured to update the first recognition module based on the update data packet; (fig. 1 [0012]-[0041] fig. 2 [0042]-[0051] fig. 3 [0052]-[0060] fig. 4 [0061]-[0067] fig. 5 [0068]-[0074]; fig. 6 [0075]-[0083] fig. 7 [0084]-[0088] fig. 8 [0089]-[0096] discuss server/model update module sending model update information to electronic device, and electronic device receives the model update information from the server and may store the model update information; in particular [0033] discuss “the facial recognition model update module 142 of the server 134 may determine the image recognition model update information 170 to be provided to the electronic device 102”, [0035] discuss “The electronic device 102 may receive the image recognition model update information 170 from the server 134 (via the network interface 110) and may store the image recognition model update information 170 in the memory 106 (e.g., as the second data 172). In this example, the second data 172 may include a second set of faces to be used when performing facial recognition operations. 
As memory resources at the electronic device 102 may be limited, the image data 120 stored in the memory 106 may be updated based on the image recognition model update information 170”; [0047]-[0049]; [0056]-[0058]; [0064]-[0066]; [0071]-[0072]) to be dynamic ([0012]-[0096]); It would have been obvious to one of ordinary skill in the art at the time of filing and at the time of the invention to modify the system and method of YAN with update module includes software update module, and the control module is configured to update the first recognition module based on the update data packet as taught by Gibbon et al. to be dynamic. Regarding claim 14, YAN teaches: wherein when the first communication module works normally, the control module sends the environmental image and/or the first recognition signal to the server, and receives the second recognition signal; (at least figs. 1-3, pages 1-3 discuss robot 101, data center 102, page 2 discuss “The image signal is processed from the processor 2. When the robot walks on the road, road surface information, including red and green signal lights of the crossroad, pedestrians and vehicles on the road, and information such as vehicles on the road are obtained through the camera, the information is quickly processed and analyzed from the processor 2, and the objects on the road are quickly classified and recognized from the processor 2, so that the robot can efficiently and accurately perform patrol and other tasks. 
In addition, when executing the security task, the robot can obtain the face information of the person by means of the camera, and then quickly process the face information from the processor 2 for face recognition, so as to quickly distinguish whether there is a prisoner”; discuss “The processor 4 can also quickly process and analyze the obstacle information collected by the robot through the camera and the ultrasonic module, and after the obstacle is analyzed, the robot is helped to perform obstacle avoidance or obstacle cleaning; page 3 discuss “For the identification of the target image signal, the signal processing and analysis module in the robot performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule”; page 3 discuss “When the target signal cannot be identified by the robot itself due to blurring or obstacle interference or coverage or no corresponding target library of the robot itself, the signal acquired by the robot will be transmitted to the data center for processing and analysis”; discuss “After receiving the unrecognizable target signal sent by the robot, the data center analyzes the signal”; discuss “For the recognition of the target image signal, the data center performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule. The data center returns the identified result related to the target information to the robot for the next operation.”; page 1-2 discuss “The robot 101 includes at least a signal acquisition module, a signal analysis module, and a first wireless communication module. The data center 102 includes a second wireless communication module and a data analysis module”, discuss “Specifically, the signal collection module is configured to collect at least one form of signal corresponding to the target. 
The signal analysis module is configured to analyze the signal acquired by the signal acquisition module to obtain an identification result. The first wireless communication module is configured to forward the signal acquired by the signal acquisition module to the data center after the signal analysis module cannot obtain the recognition result. The data analysis module is configured to receive the signal through the second wireless communication module, analyze the signal to obtain an identification result, and return the identification result to the robot); Regarding claim 15, the cited portions and rationale of rejection to claim 6 read on this claim. Regarding claim 19, YAN teaches: wherein the environmental image comprises an original image or a processed image (at least figs. 1-3, pages 1-3 discuss robot 101, data center 102, page 2 discuss “The image signal is processed from the processor 2. When the robot walks on the road, road surface information, including red and green signal lights of the crossroad, pedestrians and vehicles on the road, and information such as vehicles on the road are obtained through the camera, the information is quickly processed and analyzed from the processor 2, and the objects on the road are quickly classified and recognized from the processor 2, so that the robot can efficiently and accurately perform patrol and other tasks. 
In addition, when executing the security task, the robot can obtain the face information of the person by means of the camera, and then quickly process the face information from the processor 2 for face recognition, so as to quickly distinguish whether there is a prisoner”; discuss “The processor 4 can also quickly process and analyze the obstacle information collected by the robot through the camera and the ultrasonic module, and after the obstacle is analyzed, the robot is helped to perform obstacle avoidance or obstacle cleaning; page 3 discuss “For the identification of the target image signal, the signal processing and analysis module in the robot performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule”; page 3 discuss “When the target signal cannot be identified by the robot itself due to blurring or obstacle interference or coverage or no corresponding target library of the robot itself, the signal acquired by the robot will be transmitted to the data center for processing and analysis”; discuss “After receiving the unrecognizable target signal sent by the robot, the data center analyzes the signal”; discuss “For the recognition of the target image signal, the data center performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule. The data center returns the identified result related to the target information to the robot for the next operation.”; page 1-2 discuss “The robot 101 includes at least a signal acquisition module, a signal analysis module, and a first wireless communication module. The data center 102 includes a second wireless communication module and a data analysis module”, discuss “Specifically, the signal collection module is configured to collect at least one form of signal corresponding to the target. 
The signal analysis module is configured to analyze the signal acquired by the signal acquisition module to obtain an identification result. The first wireless communication module is configured to forward the signal acquired by the signal acquisition module to the data center after the signal analysis module cannot obtain the recognition result. The data analysis module is configured to receive the signal through the second wireless communication module, analyze the signal to obtain an identification result, and return the identification result to the robot); Regarding claim 20, YAN teaches: A method comprising: detecting an environment around a self-moving device to generate an environmental image; recognizing a specific object in an image based on the environmental image to generate a first recognition signal; and sending the environmental image to a server in response to determining that the first recognition signal is not generated, (at least figs. 1-3, pages 1-3 discuss robot 101, data center 102, page 2 discuss “The image signal is processed from the processor 2. When the robot walks on the road, road surface information, including red and green signal lights of the crossroad, pedestrians and vehicles on the road, and information such as vehicles on the road are obtained through the camera, the information is quickly processed and analyzed from the processor 2, and the objects on the road are quickly classified and recognized from the processor 2, so that the robot can efficiently and accurately perform patrol and other tasks. 
In addition, when executing the security task, the robot can obtain the face information of the person by means of the camera, and then quickly process the face information from the processor 2 for face recognition, so as to quickly distinguish whether there is a prisoner”; discuss “The processor 4 can also quickly process and analyze the obstacle information collected by the robot through the camera and the ultrasonic module, and after the obstacle is analyzed, the robot is helped to perform obstacle avoidance or obstacle cleaning; page 3 discuss “For the identification of the target image signal, the signal processing and analysis module in the robot performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule”; page 3 discuss “When the target signal cannot be identified by the robot itself due to blurring or obstacle interference or coverage or no corresponding target library of the robot itself, the signal acquired by the robot will be transmitted to the data center for processing and analysis”; discuss “After receiving the unrecognizable target signal sent by the robot, the data center analyzes the signal”; discuss “For the recognition of the target image signal, the data center performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule. The data center returns the identified result related to the target information to the robot for the next operation.”; page 1-2 discuss “The robot 101 includes at least a signal acquisition module, a signal analysis module, and a first wireless communication module. The data center 102 includes a second wireless communication module and a data analysis module”, discuss “Specifically, the signal collection module is configured to collect at least one form of signal corresponding to the target. 
The signal analysis module is configured to analyze the signal acquired by the signal acquisition module to obtain an identification result. The first wireless communication module is configured to forward the signal acquired by the signal acquisition module to the data center after the signal analysis module cannot obtain the recognition result. The data analysis module is configured to receive the signal through the second wireless communication module, analyze the signal to obtain an identification result, and return the identification result to the robot); YAN does not explicitly teach: within a first preset time; However, Gibbon et al. teaches: within a first preset time (at least fig. 1 [0012]-[0042] discussed image recognition/facial recognition/object recognition, confidence threshold 130, temporal component/time, discussed “As another example, the confidence threshold 130 and/or an associated determination of whether the confidence threshold 130 is satisfied may change over time (e.g., a “decaying” confidence level). To illustrate, a confidence level may decrease over time following an event (e.g., a “last user action”). If a threshold time period has elapsed following the event, the confidence level may no longer satisfy the confidence threshold 130”; in particular [0025]-[0026]) for recognition ([0012]-[0042] [0025]-[0026]); It would have been obvious to one of ordinary skill in the art at the time of filing and at the time of the invention to modify the system and method of YAN with “within a first preset time” as taught by Gibbon et al. for recognition. Claim(s) 8, 17, 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over YAN (CN 104575489, a reference listed in IDS 10/16/2020, and translation has been provided and cited) in view of Gibbon et al. (US 20160163029) as applied to claims 5, 11, 14 above, and further in view of Chew (US 20180181868). 
Regarding claim 8, YAN teaches: wherein, based on the server sending the second recognition signal, the control module controls a movement pattern of the self-moving device based on the second recognition signal; (at least figs. 1-3, pages 1-3 discuss robot 101, data center 102, page 2 discuss “The image signal is processed from the processor 2. When the robot walks on the road, road surface information, including red and green signal lights of the crossroad, pedestrians and vehicles on the road, and information such as vehicles on the road are obtained through the camera, the information is quickly processed and analyzed from the processor 2, and the objects on the road are quickly classified and recognized from the processor 2, so that the robot can efficiently and accurately perform patrol and other tasks. In addition, when executing the security task, the robot can obtain the face information of the person by means of the camera, and then quickly process the face information from the processor 2 for face recognition, so as to quickly distinguish whether there is a prisoner”; discuss “The processor 4 can also quickly process and analyze the obstacle information collected by the robot through the camera and the ultrasonic module, and after the obstacle is analyzed, the robot is helped to perform obstacle avoidance or obstacle cleaning”; page 3 discuss “For the identification of the target image signal, the signal processing and analysis module in the robot performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule”; page 3 discuss “When the target signal cannot be identified by the robot itself due to blurring or obstacle interference or coverage or no corresponding target library of the robot itself, the signal acquired by the robot will be transmitted to the data center for processing and analysis”; discuss “After receiving the unrecognizable target signal sent 
by the robot, the data center analyzes the signal”; discuss “For the recognition of the target image signal, the data center performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule. The data center returns the identified result related to the target information to the robot for the next operation.”; page 1-2 discuss “The robot 101 includes at least a signal acquisition module, a signal analysis module, and a first wireless communication module. The data center 102 includes a second wireless communication module and a data analysis module”, discuss “Specifically, the signal collection module is configured to collect at least one form of signal corresponding to the target. The signal analysis module is configured to analyze the signal acquired by the signal acquisition module to obtain an identification result. The first wireless communication module is configured to forward the signal acquired by the signal acquisition module to the data center after the signal analysis module cannot obtain the recognition result. The data analysis module is configured to receive the signal through the second wireless communication module, analyze the signal to obtain an identification result, and return the identification result to the robot); YAN does not explicitly teach: server sends the second recognition signal includes if a confidence level of the second recognition signal is greater than a second preset value; However, Chew teaches: server sends the second recognition signal includes if a confidence level of the second recognition signal is greater than a second preset value; (at least fig. 
1 [0019]-[0024] figs.4- 5 [0033]-[0041] discuss sending to cloud-hosted analytics determiners based on confidence level, discuss cloud-hosted analytics determiner sends analytics result 128 to client computing device based on confidence level, in particular at least [0020] discuss “the other hand, if the overall confidence level 122 is below the user-defined threshold 124, the client-hosted analytics determiner 102 may send the sensor data 110 associated with the below confidence level analytics result 114 to a cloud-hosted analytics determiner 106”; [0021] discuss “if the overall confidence level 132 associated with analytics result 128 is above the user-defined threshold 134, the analytics result 128 may be accepted by the client computing device 104 and the client computing device 104 may initiate an action based on the analytics results 128”; [0038] discuss “If the confidence level of the re-dispositioned analytics results is above the threshold, the client computing device may initiate an action at block 422 in response to the analytics results of the cloud-hosted analytics determiner”; [0041] discuss “The cloud-hosted analytics determiner re-processes the data and may determine that there is a 90% confidence level that the voice data may be translated to a command to play video XYZ. 
If the user-defined threshold for the cloud-hosted analytics determiner is 80%, then the cloud-hosted analytics determiner may invoke the client-hosted analytics determiner to play video XYZ, Alternatively, the cloud-hosted analytics determiner may send the updated analytics result to the client-hosted analytics determiner”) to improve accuracy ([0019]-[0024] [0033]-[0041]); It would have been obvious to one of ordinary skill in the art at the time of filing and at the time of the invention to modify the system and method of YAN with server sends the second recognition signal includes if a confidence level of the second recognition signal is greater than a second preset value as taught by Chew to improve accuracy. Regarding claim 17, the cited portions and rationale of rejection to claim 8 read on this claim. Regarding claim 18, YAN teaches: the first communication module includes wireless technology; (at least figs. 1-3, pages 1-3 discuss robot 101, data center 102, page 2 discuss “The image signal is processed from the processor 2. When the robot walks on the road, road surface information, including red and green signal lights of the crossroad, pedestrians and vehicles on the road, and information such as vehicles on the road are obtained through the camera, the information is quickly processed and analyzed from the processor 2, and the objects on the road are quickly classified and recognized from the processor 2, so that the robot can efficiently and accurately perform patrol and other tasks. 
In addition, when executing the security task, the robot can obtain the face information of the person by means of the camera, and then quickly process the face information from the processor 2 for face recognition, so as to quickly distinguish whether there is a prisoner”; discuss “The processor 4 can also quickly process and analyze the obstacle information collected by the robot through the camera and the ultrasonic module, and after the obstacle is analyzed, the robot is helped to perform obstacle avoidance or obstacle cleaning; page 3 discuss “For the identification of the target image signal, the signal processing and analysis module in the robot performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule”; page 3 discuss “When the target signal cannot be identified by the robot itself due to blurring or obstacle interference or coverage or no corresponding target library of the robot itself, the signal acquired by the robot will be transmitted to the data center for processing and analysis”; discuss “After receiving the unrecognizable target signal sent by the robot, the data center analyzes the signal”; discuss “For the recognition of the target image signal, the data center performs image feature extraction and selection on the target image signal, and then performs classification and recognition on the image according to the decision rule. The data center returns the identified result related to the target information to the robot for the next operation.”; page 1-2 discuss “The robot 101 includes at least a signal acquisition module, a signal analysis module, and a first wireless communication module. The data center 102 includes a second wireless communication module and a data analysis module”, discuss “Specifically, the signal collection module is configured to collect at least one form of signal corresponding to the target. 
The signal analysis module is configured to analyze the signal acquired by the signal acquisition module to obtain an identification result. The first wireless communication module is configured to forward the signal acquired by the signal acquisition module to the data center after the signal analysis module cannot obtain the recognition result. The data analysis module is configured to receive the signal through the second wireless communication module, analyze the signal to obtain an identification result, and return the identification result to the robot; page 3 discuss “The robot transmits the collected signal to the data center in a wireless manner, and the wireless transmission includes WiFi, Bluetooth, ZigBee, 2.4 G, etc.); YAN does not explicitly teach: wherein the first communication module/wireless technology comprises a 5th generation mobile communication module or a mobile communication module with a maximum transmission speed greater than 1 Gbps; However, Chew teaches: wherein the first communication module/wireless technology comprises a 5th generation mobile communication module or a mobile communication module with a maximum transmission speed greater than 1 Gbps; (at least fig. 
1 [0019]-[0024] figs.4- 5 [0033]-[0041] discuss sending to cloud-hosted analytics determiners based on confidence level, discuss cloud-hosted analytics determiner sends analytics result 128 to client computing device based on confidence level, in particular at least [0020] discuss “the other hand, if the overall confidence level 122 is below the user-defined threshold 124, the client-hosted analytics determiner 102 may send the sensor data 110 associated with the below confidence level analytics result 114 to a cloud-hosted analytics determiner 106”; [0021] discuss “if the overall confidence level 132 associated with analytics result 128 is above the user-defined threshold 134, the analytics result 128 may be accepted by the client computing device 104 and the client computing device 104 may initiate an action based on the analytics results 128”; [0038] discuss “If the confidence level of the re-dispositioned analytics results is above the threshold, the client computing device may initiate an action at block 422 in response to the analytics results of the cloud-hosted analytics determiner”; [0041] discuss “The cloud-hosted analytics determiner re-processes the data and may determine that there is a 90% confidence level that the voice data may be translated to a command to play video XYZ. If the user-defined threshold for the cloud-hosted analytics determiner is 80%, then the cloud-hosted analytics determiner may invoke the client-hosted analytics determiner to play video XYZ, Alternatively, the cloud-hosted analytics determiner may send the updated analytics result to the client-hosted analytics determiner”; [0018] discuss “analytics determiners on cloud computing devices may be trained using deep-learning neural networks with large sets of training data. 
Indeed, a cloud server may be more likely to employ machine learning”; [0033] discuss “Predictor functions of client-hosted analytics determiner 102 may be pre-trained using a supervised machine learning procedure”; [0038] discuss “In embodiments, the confidence level calculated by the cloud-hosted analytics determiner may be higher or substantially higher than the confidence level previously calculated by the client-hosted analytics determiner”; fig. 9 [0067]-[0071], in particular [0070] discuss 5G) to communicate ([0067]-[0071]); It would have been obvious to one of ordinary skill in the art at the time of filing and at the time of the invention to modify the system and method of YAN with wherein the first communication module/wireless technology comprises a 5th generation mobile communication module or a mobile communication module with a maximum transmission speed greater than 1 Gbps as taught by Chew to communicate. Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over YAN (CN 104575489 a reference listed in IDS 10/16/2020, and translation has been provided and cited) in view of Gibbon et al. (US 20160163029) as applied to claim 1, above, and further in view of Chew (US 20180181868) and GONG et al. (US 20180204562). 
Regarding claim 9, YAN does not explicitly teach: the first recognition module configured to invoke a preset first machine learning model, the second recognition module configured to invoke a preset second machine learning model, a quantity of model parameters of the preset second machine learning model being greater than that of the preset first machine learning model; machine learning includes deep learning; However, Chew teaches: the first recognition module configured to invoke a preset first machine learning model, the second recognition module configured to invoke a preset second machine learning model, a quantity of model parameters of the preset second machine learning model being greater than that of the preset first machine learning model; (at least fig. 1 [0019]-[0024] figs.4- 5 [0033]-[0041] discuss sending to cloud-hosted analytics determiners based on confidence level, discuss cloud-hosted analytics determiner sends analytics result 128 to client computing device based on confidence level, in particular at least [0020] discuss “the other hand, if the overall confidence level 122 is below the user-defined threshold 124, the client-hosted analytics determiner 102 may send the sensor data 110 associated with the below confidence level analytics result 114 to a cloud-hosted analytics determiner 106”; [0021] discuss “if the overall confidence level 132 associated with analytics result 128 is above the user-defined threshold 134, the analytics result 128 may be accepted by the client computing device 104 and the client computing device 104 may initiate an action based on the analytics results 128”; [0038] discuss “If the confidence level of the re-dispositioned analytics results is above the threshold, the client computing device may initiate an action at block 422 in response to the analytics results of the cloud-hosted analytics determiner”; [0041] discuss “The cloud-hosted analytics determiner re-processes the data and may determine that there is a 90% 
confidence level that the voice data may be translated to a command to play video XYZ. If the user-defined threshold for the cloud-hosted analytics determiner is 80%, then the cloud-hosted analytics determiner may invoke the client-hosted analytics determiner to play video XYZ, Alternatively, the cloud-hosted analytics determiner may send the updated analytics result to the client-hosted analytics determiner”; [0018] discuss “analytics determiners on cloud computing devices may be trained using deep-learning neural networks with large sets of training data. Indeed, a cloud server may be more likely to employ machine learning”; [0033] discuss “Predictor functions of client-hosted analytics determiner 102 may be pre-trained using a supervised machine learning procedure”; [0038] discuss “In embodiments, the confidence level calculated by the cloud-hosted analytics determiner may be higher or substantially higher than the confidence level previously calculated by the client-hosted analytics determiner”) to improve accuracy ([0019]-[0024] [0033]-[0041]); machine learning includes deep learning; (at least fig. 
1 [0019]-[0024] figs.4- 5 [0033]-[0041] discuss sending to cloud-hosted analytics determiners based on confidence level, discuss cloud-hosted analytics determiner sends analytics result 128 to client computing device based on confidence level, in particular at least [0020] discuss “the other hand, if the overall confidence level 122 is below the user-defined threshold 124, the client-hosted analytics determiner 102 may send the sensor data 110 associated with the below confidence level analytics result 114 to a cloud-hosted analytics determiner 106”; [0021] discuss “if the overall confidence level 132 associated with analytics result 128 is above the user-defined threshold 134, the analytics result 128 may be accepted by the client computing device 104 and the client computing device 104 may initiate an action based on the analytics results 128”; [0038] discuss “If the confidence level of the re-dispositioned analytics results is above the threshold, the client computing device may initiate an action at block 422 in response to the analytics results of the cloud-hosted analytics determiner”; [0041] discuss “The cloud-hosted analytics determiner re-processes the data and may determine that there is a 90% confidence level that the voice data may be translated to a command to play video XYZ. If the user-defined threshold for the cloud-hosted analytics determiner is 80%, then the cloud-hosted analytics determiner may invoke the client-hosted analytics determiner to play video XYZ, Alternatively, the cloud-hosted analytics determiner may send the updated analytics result to the client-hosted analytics determiner”; [0018] discuss “analytics determiners on cloud computing devices may be trained using deep-learning neural networks with large sets of training data. 
Indeed, a cloud server may be more likely to employ machine learning”; [0033] discuss “Predictor functions of client-hosted analytics determiner 102 may be pre-trained using a supervised machine learning procedure”; [0038] discuss “In embodiments, the confidence level calculated by the cloud-hosted analytics determiner may be higher or substantially higher than the confidence level previously calculated by the client-hosted analytics determiner”) to improve accuracy ([0019]-[0024] [0033]-[0041]); It would have been obvious to one of ordinary skill in the art at the time of filing and at the time of the invention to modify the system and method of YAN with the first recognition module configured to invoke a preset first machine learning model, the second recognition module configured to invoke a preset second machine learning model, a quantity of model parameters of the preset second machine learning model being greater than that of the preset first machine learning model, and machine learning includes deep learning, as taught by Chew to improve accuracy. In addition and in the alternative, GONG et al. teaches: machine learning includes deep learning; (at least [0043] discuss “An optional implementation for recognizing the to-be-recognized object is a machine learning pattern. The machine learning pattern may include but is not limited to an auto encoder, sparse coding and deep belief networks. The machine learning pattern also may be referred to as deep learning”) for recognizing an object ([0043]); It would have been obvious to one of ordinary skill in the art at the time of filing and at the time of the invention to modify the system and method of YAN with machine learning includes deep learning as taught by GONG et al. for recognizing an object. 
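Taken together, the cited references describe a tiered recognition pipeline: the device attempts recognition locally, then offloads the image to the server when the local result is absent, is not produced within a preset time, or falls below a confidence threshold, and the server returns its result for the next operation. A minimal sketch of that control flow follows; the function names, the dict-based result shape, and the threshold and timeout values are hypothetical illustrations, not the claimed implementation.

```python
import time

# Hypothetical values: the claims recite a "second preset value" (confidence)
# and a "first preset time" without fixing specific numbers.
CONFIDENCE_THRESHOLD = 0.8
FIRST_PRESET_TIME = 2.0  # seconds

def recognize(image, local, cloud, deadline=FIRST_PRESET_TIME):
    """Attempt on-device recognition first; fall back to the server when the
    local result is missing, low-confidence, or not produced within the preset time."""
    start = time.monotonic()
    result = local(image)              # first recognition signal (may be None)
    elapsed = time.monotonic() - start
    if (result is not None
            and result["confidence"] >= CONFIDENCE_THRESHOLD
            and elapsed <= deadline):
        return result                  # accept the local (first) recognition signal
    return cloud(image)                # second recognition signal from the data center
```

Injecting the local and cloud recognizers as parameters keeps the fallback policy visible and testable independently of any particular recognition model.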
Allowable Subject Matter

Claims 2-4, 7, 12-13, and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BAO LONG T NGUYEN, whose telephone number is (571) 270-7768. The examiner can normally be reached M-F 8:30-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Khoi Tran, can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

BAO LONG T. NGUYEN
Examiner, Art Unit 3664
/BAO LONG T NGUYEN/
Primary Examiner, Art Unit 3656

Prosecution Timeline

Sep 30, 2024
Application Filed
Feb 14, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600042
CONTROL DEVICE AND ROBOT SYSTEM
2y 5m to grant; granted Apr 14, 2026

Patent 12589950
OBJECT RECOGNITION SYSTEM FOR PICKING UP ITEMS
2y 5m to grant; granted Mar 31, 2026

Patent 12588960
MEDICAL ROBOT FOR PLACEMENT OF MEDICAL INSTRUMENTS UNDER ULTRASOUND GUIDANCE
2y 5m to grant; granted Mar 31, 2026

Patent 12585277
OFF-ROAD MACHINE-LEARNED OBSTACLE NAVIGATION IN AN AUTONOMOUS VEHICLE ENVIRONMENT
2y 5m to grant; granted Mar 24, 2026

Patent 12575473
Route Generation Method, Route Generation System, And Route Generation Program
2y 5m to grant; granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
90%
With Interview (+7.0%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 540 resolved cases by this examiner. Grant probability derived from career allow rate.
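The headline figures above can be reproduced from the examiner's career counts, assuming the grant probability is the career allow rate rounded to the nearest percent and the interview lift is simply additive — both assumptions about how the dashboard derives its numbers:

```python
# Reconstructing the dashboard's headline numbers from the career counts
# shown above (447 granted of 540 resolved). The additive interview lift
# is an assumption about how the tool combines these figures.
granted, resolved = 447, 540
allow_rate = granted / resolved                  # ~0.828
grant_probability = round(100 * allow_rate)      # 83 (%)
interview_lift = 7.0                             # percentage points
with_interview = grant_probability + interview_lift
print(grant_probability, with_interview)  # 83 90.0
```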
