Prosecution Insights
Last updated: April 17, 2026
Application No. 18/432,901

SYSTEM AND METHOD FOR CONFINING ROBOTIC DEVICES

Final Rejection (§103, §112, §DP)
Filed: Feb 05, 2024
Examiner: BROSH, BENJAMIN J
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Unknown
OA Round: 2 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73%, above average (56 granted / 77 resolved; +20.7% vs TC avg)
Interview Lift: +29.5% across resolved cases with an interview (a strong lift)
Typical Timeline: 2y 7m average prosecution; 40 applications currently pending
Career History: 117 total applications across all art units
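
The headline figures above are simple ratios and can be reproduced from the counts shown. Here is a minimal Python sketch; note that the without-interview rate is not shown on the page and is inferred from the reported 99% with-interview figure and the +29.5% lift:

```python
# Reproducing the dashboard's examiner metrics from the counts above.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(56, 77)      # -> 72.7%, displayed as 73%
tc_avg = career - 20.7           # implied Tech Center average (~52%)

# Interview lift = with-interview rate minus without-interview rate.
# 99.0 is the page's with-interview figure; 69.5 is the implied
# without-interview rate (99.0 - 29.5), not a number shown on the page.
with_interview, without_interview = 99.0, 69.5
lift = with_interview - without_interview   # -> +29.5 points

print(f"Career allow rate: {career:.1f}%")
print(f"Interview lift:    +{lift:.1f} points")
```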

Statute-Specific Performance

§101: 13.6% (-26.4% vs TC avg)
§103: 39.6% (-0.4% vs TC avg)
§102: 20.9% (-19.1% vs TC avg)
§112: 20.5% (-19.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 77 resolved cases.
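
The deltas above also pin down the baseline the original chart was drawn against. A small sketch (rates are from the table; the Tech Center averages are back-computed as rate minus delta, since only the deltas are shown):

```python
# Back-computing the Tech Center baseline from the table above.
examiner = {"§101": 13.6, "§103": 39.6, "§102": 20.9, "§112": 20.5}
delta    = {"§101": -26.4, "§103": -0.4, "§102": -19.1, "§112": -19.5}

for statute, rate in examiner.items():
    tc_avg = rate - delta[statute]   # implied TC average for this statute
    print(f"{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
```

Notably, every implied baseline comes out to 40.0%, consistent with a single Tech Center average estimate rather than a per-statute baseline.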

Office Action

§103 §112 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Priority

The examiner acknowledges that applicant claims the priority of application 18/120,775 filed 13 March 2023, which is a continuation of application 17/071,424 filed 15 October 2020, which is a continuation of application 15/674,310 filed 10 August 2017, which claims benefit of application 62/373,512 filed 11 August 2016, in accordance with paragraph [1] of the specification of the instant application. In determining the effective filing date of the claims, the examiner consulted the MPEP and the parent documents of note to verify support.

Response to Remarks/Amendment

The examiner received amendments to the claims in addition to Terminal Disclaimers dated 23 December 2025 in response to the non-final rejection office action dated 26 September 2025 (hereinafter the document of concern when referencing "outstanding rejections", "outstanding objections", "prior office action", and the like). No new matter was entered by way of amendment.

Regarding priority and outstanding rejections under 35 U.S.C. 112(a), the examiner notes that applicant has amended the claim set to comply with the written description requirement of 35 U.S.C. 112(a). Applicant has specified an image sensor, which has support from the specification. Therefore, the priority date reflects the note above and the claims correspond with the written description requirement of 35 U.S.C. 112(a); accordingly, all outstanding 35 U.S.C. 112(a) rejections are withdrawn.

Regarding outstanding claim objections, the examiner notes that claim 19 was mistakenly omitted from amendments. The objection regarding claim 19 is maintained. Further, a new claim objection (necessitated by amendment) for claim 4 is presented.

Regarding outstanding Non-Statutory Double Patenting rejections, the examiner notes that applicant has addressed these rejections by filing terminal disclaimers covering US Patents US 12,117,840, US 10,845,817, US 11,625,047, US 11,921,515, US 10,496,262, US 12,093,520, US 12,293,068, and US 11,989,021. Therefore, all outstanding Non-Statutory Double Patenting rejections are withdrawn. The examiner thanks the applicant for their efforts and will continue to monitor for other Double Patenting situations during the course of case prosecution.

Regarding outstanding 35 U.S.C. 112(b) rejections of claims 7 and 18, the examiner notes that applicant has amended "a particular cleaning task" to "a particular floor cleaning task"; however, the examiner is still unsure of the metes and bounds encompassed by the claim language. As a "task" is generic, merely avoiding the second object may be a "task", as may movement to a cleaning site, recharging, alerting, merely saving the location of the object, or switching a generic binary signal from 1 to 0, among an infinite number of possibilities.
Put plainly, if the examiner is unable to understand what may be encompassed by "task" without some form of requisite knowledge from the specification, or a well-understood meaning, then the examiner is unsure what may be included in the definition of "task". While the examiner appreciates the further attempt at clarification, the claim language remains indefinite and the claims remain rejected as described below.

Regarding outstanding prior art (35 U.S.C. 103) rejections using Park et al. (US 2007/0267570 A1; hereinafter Park) in view of Williams et al. (US 2016/0297072 A1; hereinafter Williams), the examiner notes that applicant argues that the prior art is silent regarding the steps of "generating…a virtual boundary adjacent to a location of the at least the first object" and "actuating the robot to avoid crossing locations within the environment corresponding with the virtual boundary." The examiner respectfully disagrees with the applicant's assertions; while the examiner agrees that Figures [11A-11B] show tracking paths of the robot (1110 and 1120, for example), this does not necessarily constitute the "virtual boundary" as understood by the examiner. Instead, per the examiner's explanation on page 18 (for example) of the prior office action, the object area is detected by the robot and the occupancy map is updated to reflect the object location. The area of the object (seen as piece 10) has borders that encompass the "virtual boundary" in this case. Per Merriam-Webster's definition of "adjacent":

[Image: Merriam-Webster dictionary entry for "adjacent"]

The applicant may intend to claim a boundary that has a particular offset from the detected object, but the examiner is not limiting the meaning of "adjacent" to this, as noted on page [3] of the prior office action, and as further justified by the first definition provided above (common border). Further, in the event that a boundary that is not limited to the outer edges/dimensions of the object is intended, the examiner notes that reference US 2009/0043440 A1 (Matsukawa et al., noted in the prior office action as a relevant reference) also discusses formation of virtual boundaries around objects (both stationary and dynamic) for object avoidance, with boundaries that do not necessarily directly coincide with edges of the detected objects. Regardless of intent, the examiner maintains that the combination of prior art presented continues to read upon the claim language, contrary to applicant's arguments against the applicability of Park on pages [12-13] of the remarks.

Further, regarding "It does not restrict the movements of the robot, nor is it described as a non-traversable boundary", the examiner notes that Park makes a number of references to "movable" areas, such as paragraphs [0013, 0034, 0049] (implying the existence of non-movable areas). Per the examiner's discussion on page [18], Park describes that some floor objects may be sucked into a brush of the cleaning robot and result in an abnormal stoppage, in addition to describing separate modes of cleaning that imply that the robot may not cross over onto the boundary area in certain situations. As this is implicitly taught, the examiner relied upon support from Williams, as Williams teaches avoidance of prohibited areas in an analogous art/field of endeavor, with the rationale to combine as noted on page [19]. Therefore, the examiner has fully considered applicant's arguments but respectfully finds them unpersuasive at this time.
The examiner recommends consideration of further amendments which delineate what applicant may mean by "adjacent", if this is the point of difference, and kindly notes that reference US 2009/0043440 A1 (Matsukawa et al.) should also be considered. The examiner notes that the basis for rejection has largely remained unchanged, but new grounds of rejection are presented, necessitated by amendment where applicable.

Status of Claims

The most recent revision of the claim set is dated 23 December 2025. Claims 1-20 are pending and are rejected as noted below. Claims 1 and 12 are independent claims.

Claim Objections

Claims 4 and 19 are objected to because of the following informalities:

Claim 4 states "the image sensor data", but "the image sensor data" lacks antecedent basis. The parent claim and all other dependent claims in the claim tree reference the data as "the sensor data", raising the question of whether additional data is being presented. The examiner recommends removal of the word "image" before "sensor data" in this case to utilize consistent terminology with the other claims.

Claim 19 states "…the processor is further configured to at least a second object…" and is missing the word "identify" before "a second object".

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 7 and 18 recite "a particular floor cleaning task". The examiner consulted the specification in order to understand what may be intended or encompassed by the language "particular floor cleaning task" (or "task" in general) but was unable to locate anywhere in the specification where this is discussed. The terminology "a particular cleaning task" contains an infinite number of possibilities, as it is not necessarily limited to actions such as wiping, spraying, disinfecting, etc., but may also include actions associated with cleaning such as movement to a cleaning site, recharging to support further cleaning operations, alerting users of a mess, etc. As the examiner is unsure of the metes and bounds of the claim language, the claims are rendered indefinite. Therefore, the examiner notes that this phrase is indefinite and fails to particularly point out and distinctly claim the invention of the instant application. Consistent with USPTO examination practices, for purposes of compact prosecution, the claim limitations will be treated as best understood by the Examiner, which, according to broadest reasonable interpretation (BRI), would mean that the examiner could follow any one or more of the interpretations discussed above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (US 2007/0267570 A1; hereinafter Park) in view of Williams et al. (US 2016/0297072 A1; filed 09 April 2015; hereinafter Williams).

Regarding independent claims 1 (method) and 12 (apparatus):

Park discloses "A method for operating a robot, comprising:" (per claim 1) (Paragraph [0013, 0075] and Figure [7], Park discloses a method) / "A robot, comprising: a chassis; a set of wheels coupled to the chassis; and a plurality of sensors; wherein:" (per claim 12) (Paragraph [0034, 0038-0041], Park discloses a robot such as a cleaning robot with a chassis, wheels, and sensors such as an imaging unit/camera detector (the "imaging unit") and position detectors (gyroscopes, etc., as the "detection unit"));

"capturing, with an image sensor disposed on the robot, sensor data of objects within an environment of the robot as the robot moves within the environment;" (per claim 1) / "an image sensor among the plurality of sensors is configured to capture sensor data of objects within an environment of the robot as the robot moves within the environment;" (per claim 12) (Paragraph [0013, 0041, 0043, 0076-0079] and Figures [2A-2B], Park discloses an imaging unit (containing sensors such as a CMOS or SMPD) contained within the robot to detect objects in the environment as the robot navigates the environment);

"identifying, with a processor, at least a first object among a plurality of objects within the environment based on the sensor data;" (per claim 1) / "a processor is configured to identify at least a first object based on the sensor data;" (per claim 12) (Paragraph [0015, 0076-0080, 0099], Figure [4A], and Claims [33-35], Park discloses detecting an object (in Figure [4A], one can see that the object 10 is one among a plurality, wherein the walls of the room shown may also be considered "objects", as also acknowledged in paragraph [0099]) by determining if there is a shift in the position of the structured light pattern and categorizing the type of floor object based on the size.
Further, a processor may execute the method (reference to accomplishment of a task via a "processor" going forward refers to Paragraph [0015] and Claims [33-35] of Park and will not be repeated for brevity));

"generating, with the processor, a virtual boundary adjacent to a location of the at least the first object; and" (per claim 1) / "the processor is further configured to generate a virtual boundary adjacent to a location of the at least the first object based on identifying the at least the first object among a plurality of objects within the environment; and" (per claim 12) (Paragraph [0008, 0012-0013, 0034, 0037, 0049, 0081] and Figure [7], Park discloses that the occupancy information of the map is updated to reflect the presence of the object in the detected area. The map is a digital product representing the arrangement of obstacles that obstruct or hinder the movement of the robot. Figures [11A-11B], for example, show a plurality of areas updated in the map to reflect the position of the object, the boundaries recorded constituting a "virtual boundary").

Park only implicitly discloses that the robot avoids crossing the boundary for a certain time/task (Paragraph [0008, 0037], Park discloses that some floor objects may be sucked into a brush of the cleaning robot and result in an abnormal stoppage. Additionally, Park discloses that a separate cleaning mode for the carpet may be chosen, implying that the robot does not cross over into the carpet boundary while conducting the first cleaning mode). However, Williams, in a similar field of endeavor of robot cleaner navigation, teaches "actuating the robot to avoid crossing locations within the environment corresponding with the virtual boundary." (per claim 1) / "the robot is configured to avoid crossing locations within the environment corresponding with the virtual boundary." (per claim 12) (Paragraph [0003-0004, 0008, 0030, 0032, 0045, 0055-0056, 0069-0070, 0077, 0094, 0097], Williams teaches virtual barriers that prevent the robot from entering regions for a variety of reasons, such as the area being hard to clean or containing fragile items (obstacles). As the robot travels, the map/occupancy grid is updated with traversable/non-traversable areas, including walls and other obstacles with a determination of "non-traversable". The robot is precluded from crossing the virtual boundary).

Park and Williams are in a similar field of endeavor of robot cleaner navigation. It would have been obvious to one having ordinary skill in the art at the time of effective filing, with a reasonable expectation of success, to have modified the disclosure of Park to explicitly state that the boundaries established prevent the robot from crossing the boundary (for at least a certain time period/task) as taught by Williams, for the benefit of preventing movement of the robot into non-traversable areas and thereby avoiding damage to fragile items, for instance (Williams, Paragraph [0003, 0015, 0077, 0094]). As noted above, Park implicitly discloses that the robot avoids crossing the boundary for a certain time/task (Paragraph [0008, 0037], Park discloses that some floor objects may be sucked into a brush of the cleaning robot and result in an abnormal stoppage. Additionally, Park discloses that a separate cleaning mode for the carpet may be chosen, implying that the robot does not cross over into the carpet boundary while conducting the first cleaning mode); merely explicitly stating that the robot does not cross the established boundary for a certain period of time/task is an obvious variant of Park that is explicitly taught by Williams.

Regarding claims 2 and 13: Park and Williams render parent claims 1/12 unpatentable. Park further discloses "further comprising: marking, with the processor, a location of the at least the first object in a map of the environment." (per claim 2) / "wherein: the processor is further configured to mark a location of the at least the first object in a map of the environment." (per claim 13) (Paragraph [0013, 0049, 0052, 0080-0081], Park discloses that the map is updated with the occupancy/presence of the object, including location).

Regarding claims 3 and 14: Park and Williams render parent claims 1/12 unpatentable. Park further discloses "wherein the processor generates a virtual boundary adjacent to a location of an object based on identifying the object as the at least the first object." (per claim 3) / "wherein the processor is configured to generate a virtual boundary adjacent to a location of an object based on identifying the object as the at least the first object." (per claim 14) (Paragraph [0076-0081], Park discloses identifying a boundary of an object based on identifying the object due to a difference in light pattern/height).

Regarding claims 4 and 15: Park and Williams render parent claims 1/12 unpatentable. Park further discloses "wherein identifying the at least the first object further comprises: comparing, with the processor, the image sensor data with at least one sensor data saved in a memory; and identifying, with the processor, a match between the sensor data and the at least one sensor data saved in the memory." (per claim 4) / "wherein identifying the at least the first object further comprises: comparing the sensor data with at least one sensor data saved in a memory; and identifying a match between the sensor data and the at least one sensor data saved in the memory." (per claim 15) (Paragraph [0044-0046, 0049, 0052, 0080, 0103, 0111], Park discloses comparing a plurality of types of data (size of the object detected, color information of the object, etc.) received by the sensor(s) against stored data to determine a match between saved and sensed data to determine a type of object, for instance. Alternatively or additionally, the reference position where the structured light is located in the image is stored in advance and compared against the sensed position).

Regarding claims 5 and 16: Park and Williams render parent claims 4/15 unpatentable. Park further discloses "wherein: the sensor data of the objects within the environment comprises images of the objects within the environment; and" (per claim 5) / "wherein: the sensor data of the objects within the environment comprises images of the objects within the environment; and" (per claim 16) (Paragraph [0041-0042, 0051], Park discloses that the imaging unit takes an image of the environment, including for objects detected); "the at least one sensor data saved in the memory comprises at least one image saved in the memory." (per claim 5) / "the at least one sensor data saved in the memory comprises at least one image saved in the memory."
(per claim 16) (Paragraph [0013, 0052, 0103], Park discloses that the reference image comprising the location of the structured light is stored in advance).

Alternatively or additionally, Williams, in a similar field of endeavor of robot cleaner navigation, teaches "the at least one sensor data saved in the memory comprises at least one image saved in the memory." (per claim 5) / "the at least one sensor data saved in the memory comprises at least one image saved in the memory." (per claim 16) (Paragraph [0096], Williams discloses comparing images to stored reference images in a reference library to confirm detection). Park and Williams are in a similar field of endeavor of robot cleaner navigation. It would have been obvious to one having ordinary skill in the art at the time of effective filing, with a reasonable expectation of success, to have modified the disclosure of Park to include further explicit use of a reference image as taught by Williams, for the benefit of confirming the detected objects (Williams, Paragraph [0096]).

Regarding claims 6 and 17: Park and Williams render parent claims 1/12 unpatentable. Park further discloses "further comprising: identifying, with the processor, at least a second object among the plurality of objects within the environment based on the sensor data; and actuating the robot to modify a movement path of the robot based on identifying an object as the at least the second object." (per claim 6) / "wherein: the processor is further configured to identify at least a second object among the plurality of objects within the environment based on the sensor data; and the robot is further configured to modify a movement path of the robot based on identifying an object as the at least the second object." (per claim 17) (Paragraph [0086, 0092-0095], Park discloses that if a second obstacle exists in front of the robot while moving along the boundary of the first obstacle/object, the robot performs a control action such as changing the moving direction of the robot).

Regarding claims 7 and 18: Park and Williams render parent claims 1/12 unpatentable. Park further discloses "further comprising: identifying, with the processor, at least a second object among the plurality of objects within the environment based on the sensor data; and actuating the robot to execute a particular floor cleaning task based on identifying an object as the at least the second object." (per claim 7) / "wherein: the processor is further configured to identify at least a second object among the plurality of objects within the environment based on the sensor data; and the robot is further configured to execute a particular floor cleaning task based on identifying an object as the at least the second object." (per claim 18) (Paragraph [0044, 0086, 0092-0095], Park discloses that if a second obstacle exists in front of the robot while moving along the boundary of the first obstacle/object, the robot performs a control action such as changing the moving direction of the robot. The examiner notes that "a particular floor cleaning task" is not particularly limiting, as any form of task may support cleaning, such as merely moving the robot to another position. Therefore, merely changing the direction of the cleaning robot reasonably includes "a particular floor cleaning task". Further/alternatively, Park discloses that the work type of the robot can be categorized depending on the type of the floor object detected).

Regarding claims 8 and 19: Park and Williams render parent claims 1/12 unpatentable. Park further discloses "further comprising: identifying, with the processor, at least a second object among the plurality of objects within the environment based on the sensor data; and actuating the robot to execute a first task in a first area of the environment and then a second task in a second area of the environment based on identifying an object as the at least the second object." (per claim 8) / "wherein: the processor is further configured to at least a second object among the plurality of objects within the environment based on the sensor data; and the robot is further configured to execute a first task in a first area of the environment and then a second task in a second area of the environment based on identifying an object as the at least the second object." (per claim 19) (Paragraph [0044, 0086, 0092-0095], Park discloses that if a second obstacle exists in front of the robot while moving along the boundary of the first obstacle/object, the robot performs a control action such as changing the moving direction of the robot. Thus, movement in the first direction constitutes "execute a first task in a first area" and the turn when coming upon the second object constitutes "a second task in a second area of the environment". Further/alternatively, Park discloses that the work type of the robot can be categorized depending on the type of the floor object detected).

Regarding claim 9: Park and Williams render parent claim 1 unpatentable. Park further discloses "further comprising: dividing, with the processor, the environment into two or more zones." (Paragraph [0037, 0049, 0080-0081] and Figures [9-11D], Park discloses a reference map having moveable areas for the robot and areas of presence of floor objects (after detection). Thus, the presence of just one floor object mapped into the reference map constitutes at least a second "zone").

Regarding claim 10: Park and Williams render parent claim 1 unpatentable. Park further discloses "further comprising: emitting, with a light emitter disposed on the robot, a light on surfaces of the objects within the environment, wherein a projection of the light on the surfaces of the objects falls within a field of view of the image sensor." (Paragraph [0035-0037, 0040-0043], Park discloses a projection unit that projects structured light in a certain pattern onto the environment (and objects) to be imaged by the imaging unit; the imaging unit takes photos of the structured light pattern (and is therefore in a FoV of the image sensor)).

Regarding claim 11: Park and Williams render parent claim 10 unpatentable. Park further discloses "wherein: the sensor data of objects within the environment comprises images of the objects within the environment;" (Paragraph [0035-0037, 0041-0044], Park discloses that the imaging unit collects images of the environment, including objects within the environment); "the images of the objects comprise the projection of the light on the surfaces of the objects; and" (Paragraph [0035-0037, 0041-0044], Park discloses that the imaging unit collects images of the environment including the projected pattern on the surfaces of the objects in the environment); "the method further comprises: determining, with the processor, a distance of the surfaces of the objects relative to the robot based on a position or size of the projected light on the surfaces of the objects in the images of the objects."
(Paragraph [0035, 0051], Park discloses determining the distance from the current position of the object relative to the robot based on the position of the structured light in the image frame).

Regarding claim 20: Park and Williams render parent claim 12 unpatentable. Park further discloses "wherein: the processor is further configured to divide the environment into two or more zones; and" (Paragraph [0037, 0049, 0080-0081] and Figures [9-11D], Park discloses a reference map having moveable areas for the robot and areas of presence of floor objects (after detection). Thus, the presence of just one floor object mapped into the reference map constitutes at least a second "zone"); "a light emitter disposed on the robot is configured to emit a light on surfaces of the objects within the environment, wherein a projection of the light on the surfaces of the objects falls within a field of view of the image sensor." (Paragraph [0035-0037, 0040-0043], Park discloses a projection unit that projects structured light in a certain pattern onto the environment (and objects) to be imaged by the imaging unit; the imaging unit takes photos of the structured light pattern (and is therefore in a FoV of the sensor)).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN J BROSH, whose telephone number is (571) 270-0105. The examiner can normally be reached M-F 0730-1700. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, THOMAS WORDEN, can be reached at (571) 272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.J.B./
Examiner, Art Unit 3658

/JASON HOLLOWAY/
Primary Examiner, Art Unit 3658
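
Two ideas in this action are easier to see in code than in prose. First, the dispute over a boundary "adjacent" to a detected object comes down to how an occupancy map is marked. The sketch below is purely illustrative and comes from neither Park nor Williams: an offset of 0 matches the examiner's reading, where the boundary shares the object's own border, while an offset of 1 or more gives the stand-off boundary the applicant appears to intend. Grid size, object placement, and the offset radius are all made up.

```python
# Hypothetical occupancy-grid illustration of the "adjacent" dispute.
FREE, OBJECT, BOUNDARY = 0, 1, 2

def mark_boundary(grid: list[list[int]], offset: int) -> list[list[int]]:
    """Mark free cells within `offset` cells of any object cell as
    non-traversable. With offset=0 no extra cells are marked: the
    object's own footprint is the boundary (the examiner's reading).
    With offset>=1 the boundary stands off from the object (the
    reading the applicant appears to intend)."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != OBJECT:
                continue
            for dr in range(-offset, offset + 1):
                for dc in range(-offset, offset + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and out[rr][cc] == FREE:
                        out[rr][cc] = BOUNDARY
    return out

grid = [[FREE] * 5 for _ in range(5)]
grid[2][2] = OBJECT                      # one detected object cell
for row in mark_boundary(grid, offset=1):
    print(row)                           # the 8 ring cells become BOUNDARY
```

Second, the structured-light ranging Park is cited for (a shift of the projected pattern mapping to distance) is ordinary triangulation. The focal length, baseline, and disparity below are illustrative numbers, not values from Park:

```python
def structured_light_depth(focal_px: float, baseline_m: float,
                           disparity_px: float) -> float:
    """Triangulation: depth is inversely proportional to the observed
    shift (disparity) of the projected pattern in the image."""
    return focal_px * baseline_m / disparity_px

# 600 px focal length, 5 cm emitter-to-camera baseline, 12 px shift
# from the stored reference position -> 2.5 m to the surface.
print(f"{structured_light_depth(600, 0.05, 12):.2f} m")
```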

Prosecution Timeline

Feb 05, 2024: Application Filed
Sep 22, 2025: Non-Final Rejection (§103, §112, §DP)
Dec 23, 2025: Response Filed
Feb 09, 2026: Final Rejection (§103, §112, §DP) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12576850: LANE CHANGE SYSTEM OF AUTONOMOUS VEHICLE
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12575997: EXOSUIT CONTROL USING MOVEMENT PRIMITIVES FROM EMBEDDINGS OF UNSTRUCTURED MOVEMENTS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12552017: OPTIMIZING ROBOTIC DEVICE PERFORMANCE
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12552533: INFORMATION PROCESSING SYSTEM, NOTIFICATION METHOD, AND UNMANNED AERIAL VEHICLE
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12536918: AIRSPACE TRAFFIC PREDICTION METHOD BASED ON ENSEMBLE LEARNING ALGORITHM
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 99% (+29.5%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate

Based on 77 resolved cases by this examiner. Grant probability is derived from the career allow rate.
