Prosecution Insights
Last updated: April 19, 2026
Application No. 17/807,258

DETECTION SYSTEM, PROCESSING APPARATUS, MOVEMENT OBJECT, DETECTION METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Status: Non-Final OA (§103)
Filed: Jun 16, 2022
Examiner: SHERALI, ISHRAT I
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: Kabushiki Kaisha Toshiba
OA Round: 1 (Non-Final)

Grant Probability: 93% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 93% — above average (710 granted / 761 resolved; +31.3% vs TC avg)
Interview Lift: +5.8% — moderate lift among resolved cases with interview
Typical Timeline: 2y 4m average prosecution; 11 applications currently pending
Career History: 772 total applications across all art units
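The headline numbers above are simple ratios over the examiner's resolved docket. A minimal sketch of how they could be derived; the with/without-interview split below is hypothetical, chosen only so the computed lift lands near the reported +5.8%:

```python
# Hedged sketch: how the examiner statistics above could be derived.
# granted/resolved come from the page; the with/without-interview split is
# hypothetical, chosen only so the lift lands near the reported +5.8%.

granted, resolved = 710, 761
career_allow_rate = granted / resolved                 # ≈ 93.3% (the 93% card above)

# Hypothetical split of the resolved docket by whether an interview was held.
interview_granted, interview_resolved = 200, 205       # assumed counts
rate_with = interview_granted / interview_resolved
rate_without = (granted - interview_granted) / (resolved - interview_resolved)
interview_lift = rate_with - rate_without              # ≈ +5.8% with these counts

print(f"allow rate: {career_allow_rate:.1%}, interview lift: {interview_lift:+.1%}")
```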

Statute-Specific Performance

§101: 20.6% (-19.4% vs TC avg)
§103: 30.1% (-9.9% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)
Deltas are relative to the estimated Tech Center average • Based on career data from 761 resolved cases
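Read as percentage-point differences (an assumption; the page may intend relative deltas), the figures above also imply the underlying Tech Center averages:

```python
# Hedged sketch: recover the implied Tech Center averages from the figures
# above, assuming each delta is a percentage-point difference (not a ratio).
examiner = {"§101": 20.6, "§103": 30.1, "§102": 12.4, "§112": 8.7}
delta_vs_tc = {"§101": -19.4, "§103": -9.9, "§102": -27.6, "§112": -31.3}

tc_average = {statute: round(rate - delta_vs_tc[statute], 1)
              for statute, rate in examiner.items()}
print(tc_average)  # each statute implies a TC average of about 40%
```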

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

CLAIM INTERPRETATION

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following claim limitations have been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because they use a generic placeholder "portion" coupled with functional language and without reciting sufficient structure to achieve the function; furthermore, the generic placeholder is not preceded by a structural modifier. Claims 1 and 8 (and dependent claims 2-7 and 9-17): an acquisition portion scanning light to acquire point-cloud information corresponding to a plurality of positions of a detection target object; an estimation portion using consistency with an outer shape model of the detection target object to estimate a location and an attitude of the detection target object based on the point-cloud information; and an output portion outputting information relating to a movement target location based on an estimation result, the estimation portion fits an outer shape model indicating an outer shape of the detection target object to a point cloud according to the point-cloud information, and uses point-cloud information existing outside the outer shape model to estimate the location and the attitude of the detection target object.

If Applicant asserts that the claim element "unit" is a limitation that does not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, 6th paragraph, or if applicant does not wish to have the claim limitation treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, 6th paragraph, applicant may: (a) amend the claim to add structure, material, or acts that are sufficient to perform the claimed function; or (b) present a sufficient showing that the claim limitation recites sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. For more information, see MPEP § 2181.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 8-12, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over AKINOBU et al. (JP 2020190814, IDS) in view of HIROSHI et al. (JP 2019196961, IDS).

Regarding claims 1, 8, and 19-20, AKINOBU discloses a detection system, processing apparatus, detection method, and non-transitory computer-readable medium storing a program (AKINOBU Abstract, paragraphs 0007-0008, 0017, and 0037-0039);

an acquisition portion scanning light to acquire point-cloud information corresponding to a plurality of positions of a detection target object (AKINOBU paragraphs 0016 and 0061: "The environment sensor 40 is installed on the vehicle body 20. In this embodiment, the environmental sensor 40 is a 2D-LiDAR. 2D-LiDAR is a range sensor that scans laser light in the horizontal direction. The environment sensor 40 measures the distance between the vehicle body 20 and an object in the set space set in front of the vehicle body 20. As a result, the distance data in front of the towing vehicle 10 is acquired. The height of the environment sensor 40 is set so that the side surface of the boarding/alighting equipment 15 can be measured. The distance data acquired by the environment sensor 40 is input to the measurement data acquisition unit 32 and the position/orientation estimation unit 35 of the arithmetic unit 30. The camera 41 takes an image of the front of the vehicle body 20. The image data captured by the camera 41 is input to the determination unit 31." This corresponds to an acquisition portion scanning light to acquire point-cloud information corresponding to a plurality of positions of a detection target object);

an estimation portion using consistency with an outer shape model of the detection target object to estimate a location and an attitude of the detection target object based on the point-cloud information (AKINOBU paragraphs 0037-0038: "the position/orientation estimation unit 35 performs matching processing between the external model data MD and the measurement point cloud by the environment sensor 40. For the matching process, for example, ICP (Iterative Closest Points) can be used, but other methods may be used. By the matching process, the position and orientation of the external model data MD shown in FIG. 8 are finely adjusted. As a result, the measurement data by the environment sensor 40 and the external model data MD can be superposed so as to be completely matched. In other words, the coordinate system of the external model data MD can be converted from the arbitrary coordinate system to the start point position coordinate system CS1. The position and orientation of the external model data MD on the start point position coordinate system CS1 after the completion of the matching process is the estimation result of the position and orientation of the boarding/alighting equipment 15." This corresponds to an estimation portion using consistency with an outer shape model of the detection target object to estimate a location and an attitude of the detection target object based on the point-cloud information);

and an output portion outputting information relating to a movement target location based on an estimation result (AKINOBU paragraphs 0038-0039: "the trajectory generation unit 36 generates the target trajectory TT. As shown in FIGS. 2 and 3, the target trajectory TT starts at the start point position P1 in the posture indicated by the start point position coordinate system CS1 and stops at the stop position P3 in the stop posture indicated by the stop position coordinate system CS3. As shown in FIGS. 2 and 3, the stopped posture is such that the traveling direction x3 of the towing vehicle 10 at the stopped position P3 coincides with the direction x0 parallel to the luggage loading/unloading portion 15a." and "the velocity pattern generation unit 37 designs the velocity pattern based on the target trajectory TT. In step S100, the control command generation unit 38 calculates a control command value for moving the towing vehicle 10 to the stop position P3 based on the target track TT and the speed pattern. The generated control command value is input to the steering control unit 21 and the drive control unit 22, and the towing vehicle 10 is moved toward the stop position P3." This corresponds to an output portion outputting information relating to a movement target location based on an estimation result).

As discussed above, AKINOBU discloses that the estimation portion fits an outer shape model indicating an outer shape of the detection target object to a point cloud according to the point-cloud information, and uses point-cloud information to estimate the location and the attitude of the detection target object (AKINOBU paragraphs 0037-0038, the ICP matching passage quoted above).

However, AKINOBU does not explicitly disclose using the point-cloud information existing outside the outer shape model to estimate the location and the attitude of the detection target object. In the same field of endeavor, HIROSHI discloses using the point-cloud information existing outside the outer shape model to estimate the location and the attitude of the detection target object (HIROSHI Abstract, Fig. 5, paragraphs 0005, 0031-0033, and 0037: an outer shape model [a sign template] showing an outer form of the object to be detected [the sign] is fitted to point-cloud information, and the position/posture of the object to be detected is estimated by using the point-cloud information outside the outer shape model [sign template]; the position evaluation score is calculated by subtracting the number of outer specific measurement points from the number of inner specific measurement points, as disclosed in paragraph 0033: "the position evaluation score is a value obtained based on the specific measurement points constituting the specific line segments belonging to the same line segment group, and increases as the number of specific measurement points inside the marker template increases. Specifically, a specific measurement point in the same line segment group is divided into specific measurement points inside the sign template (hereinafter referred to as 'internal specific measurement points' for convenience) and specific measurement points outside the sign template (hereinafter referred to as 'outer specific measurement points' for convenience), and the number of inner specific measurement points can be used as a position evaluation score as it is. Alternatively, a value obtained by subtracting the number of outer specific measurement points from the number of inner specific measurement points may be used as the position evaluation score. For example, in the case of FIG. 5, the position evaluation score in FIG. 5(a) is 8 points [or 8-2 = 6 points], the position evaluation score in FIG. 5(b) is 10 points [or 10-0 = 10 points], and the position evaluation score in FIG. 5(c) is 8 points [or 8-2 = 6 points]." This corresponds to using the point-cloud information existing outside the outer shape model to estimate the location and the attitude of the detection target object).

Therefore, it would have been obvious to one of ordinary skill in the art, before the claimed invention was filed, to use the point-cloud information existing outside the outer shape model to estimate the location and the attitude of the detection target object, as shown by HIROSHI, in the system of AKINOBU, because such a system provides an accurate/optimum estimate of the position and posture.
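The core dispute maps to a concrete geometric operation: fit an outer shape model to a scanned point cloud and judge a candidate location/attitude by the points that fall outside the model. A minimal sketch under stated assumptions — the rectangular model, the brute-force grid search, and the inner-minus-outer scoring below are illustrative choices, not the method of the application or of either reference:

```python
# Illustrative sketch only: fit a rectangular outer shape model to a 2D point
# cloud by searching over candidate poses and penalizing points that fall
# outside the model, in the spirit of the inner-minus-outer scoring above.
import numpy as np

def score_pose(points, center, heading, width, height):
    """Inner-point count minus outer-point count for one candidate pose."""
    c, s = np.cos(-heading), np.sin(-heading)
    local = (points - center) @ np.array([[c, -s], [s, c]]).T   # world -> model frame
    inside = (np.abs(local[:, 0]) <= width / 2) & (np.abs(local[:, 1]) <= height / 2)
    return inside.sum() - (~inside).sum()

def estimate_pose(points, width, height):
    """Brute-force search over a small grid of candidate locations/attitudes."""
    best = (-np.inf, None)
    for x in np.linspace(points[:, 0].min(), points[:, 0].max(), 15):
        for y in np.linspace(points[:, 1].min(), points[:, 1].max(), 15):
            for theta in np.linspace(0, np.pi, 18, endpoint=False):
                s = score_pose(points, np.array([x, y]), theta, width, height)
                if s > best[0]:
                    best = (s, (x, y, theta))
    return best  # (score, (x, y, heading))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.array([2.0, 1.0])                                 # toy target center
    cloud = truth + rng.uniform([-0.5, -0.25], [0.5, 0.25], size=(200, 2))
    print(estimate_pose(cloud, width=1.0, height=0.5))
```

In practice an ICP-style refinement (as AKINOBU names) would replace the grid search, but the scoring idea is the same: a pose is better when fewer measured points lie outside the fitted outer shape.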
Regarding claim 2, HIROSHI discloses that the estimation portion estimates the location and attitude of the outer shape model so as to make points determined to be outside the outer shape model, whose location and attitude are estimated based on the point-cloud information, become less (HIROSHI Fig. 5, paragraph 0033, the position-evaluation-score passage quoted above: the number of outer specific measurement points is subtracted from the number of inner specific measurement points, so the score is highest when fewer points fall outside the sign template).

Regarding claim 3, HIROSHI discloses that the estimation portion estimates the location and attitude of the detection target object by adjusting an arrangement of the outer shape model so as to make points determined to be at the outside of the outer shape model, whose location and attitude are estimated based on the point-cloud information, become less (HIROSHI Fig. 5, paragraph 0033, quoted above).

Regarding claim 4, HIROSHI discloses that the estimation portion evaluates the estimation result of the location and attitude of the outer shape model by using the point cloud existing outside the outer shape of the outer shape model whose location and attitude are estimated (HIROSHI Fig. 5, paragraph 0033, quoted above; e.g., the position evaluation score in FIG. 5(a) is 8 points [or 8-2 = 6 points], in FIG. 5(b) is 10 points [or 10-0 = 10 points], and in FIG. 5(c) is 8 points [or 8-2 = 6 points]).
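HIROSHI's position evaluation score is plain counting, as the Fig. 5 numbers quoted above show. A minimal sketch, assuming a point-in-template test already exists (the boolean flags below stand in for it):

```python
# Hedged sketch of HIROSHI's position evaluation score (¶0033): count the
# specific measurement points inside the sign template, optionally subtracting
# the ones that fall outside. The boolean flags are stand-ins for a real
# point-in-template test.

def position_evaluation_score(inside_flags, subtract_outside=True):
    inner = sum(inside_flags)                  # inner specific measurement points
    outer = len(inside_flags) - inner          # outer specific measurement points
    return inner - outer if subtract_outside else inner

# Reproducing the Fig. 5 examples quoted above:
fig5a = [True] * 8 + [False] * 2   # 8 inner, 2 outer
fig5b = [True] * 10                # 10 inner, 0 outer
fig5c = [True] * 8 + [False] * 2   # 8 inner, 2 outer

print(position_evaluation_score(fig5a, subtract_outside=False))  # 8
print(position_evaluation_score(fig5a))                          # 8 - 2 = 6
print(position_evaluation_score(fig5b))                          # 10 - 0 = 10
print(position_evaluation_score(fig5c))                          # 8 - 2 = 6
```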
Regarding claim 5, AKINOBU discloses that a lateral surface of the detection target object includes a member for reflecting the light, and a cross section of the member receiving the light for scanning is discretely arranged in a direction along the lateral surface of the detection target object (AKINOBU Fig. 1, paragraph 0016, the 2D-LiDAR environment sensor passage quoted above; the height of the environment sensor 40 is set so that the side surface of the boarding/alighting equipment 15 can be measured).

Regarding claim 6, AKINOBU discloses that the point cloud corresponding to the location of the member reflecting the light is discretely arranged in a direction along the outer shape of the outer shape model (AKINOBU Fig. 1, paragraph 0016, quoted above).

Regarding claim 9, AKINOBU discloses a movement object, a detection system according to claim 1, and a movement mechanism driving the movement object based on an estimation result of the location and attitude of the detection target object (AKINOBU Fig. 1, Abstract, claim 1, and paragraphs 0037 and 0039: the ICP matching passage quoted above, and "the velocity pattern generation unit 37 designs the velocity pattern based on the target trajectory TT. In step S100, the control command generation unit 38 calculates a control command value for moving the towing vehicle 10 to the stop position P3 based on the target track TT and the speed pattern. The generated control command value is input to the steering control unit 21 and the drive control unit 22, and the towing vehicle 10 is moved toward the stop position P3").

Regarding claim 10, AKINOBU discloses a distance sensor generating the point-cloud information, wherein the estimation result in which the location and attitude of the detection target object is estimated is acquired by a detection result of the distance sensor (AKINOBU paragraph 0016, quoted above, i.e., the 2D-LiDAR).
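For the claim 9-11 mapping, the cited passages chain the estimated pose into a movement target and a drive/steering command (trajectory generation unit 36 through control command generation unit 38). The sketch below is illustrative only: the fixed stop offset, the straight-line goal, and the proportional steering rule are assumptions, not AKINOBU's implementation:

```python
# Illustrative sketch only: derive a movement target location and a simple
# control command from an estimated target pose. The stop offset and the
# proportional steering rule are assumptions made for illustration.
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    heading: float  # radians

def movement_target(target_pose: Pose, stop_offset: float = 1.5) -> Pose:
    """Place the stop position a fixed offset in front of the detected object,
    keeping its heading (analogous to aligning with the loading/unloading side)."""
    return Pose(
        x=target_pose.x - stop_offset * math.cos(target_pose.heading),
        y=target_pose.y - stop_offset * math.sin(target_pose.heading),
        heading=target_pose.heading,
    )

def control_command(vehicle: Pose, goal: Pose, k_steer: float = 1.0, v: float = 0.5):
    """Very simple command: drive at constant speed, steer toward the goal."""
    bearing = math.atan2(goal.y - vehicle.y, goal.x - vehicle.x)
    steer = k_steer * math.atan2(math.sin(bearing - vehicle.heading),
                                 math.cos(bearing - vehicle.heading))
    return v, steer

estimated = Pose(5.0, 2.0, math.radians(30))      # pose from the point-cloud fit
goal = movement_target(estimated)
print(goal, control_command(Pose(0.0, 0.0, 0.0), goal))
```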
Regarding claim 11, AKINOBU discloses a point-cloud information extraction portion referring to a table in which an extraction target region of the detection target object is defined to extract the point-cloud information in the extraction target region (AKINOBU paragraph 0016, the 2D-LiDAR environment sensor passage quoted above, and paragraph 0017: "The arithmetic unit 30 is composed of a microprocessor including a CPU and the like. The arithmetic unit 30 includes a determination unit 31, a measurement data acquisition unit 32, an external model storage unit 33, a coordinate conversion unit 34, a position/orientation estimation unit 35, a trajectory generation unit 36, a speed pattern generation unit 37, and a control command generation unit 38." In the system of AKINOBU the LiDAR generates the point-cloud information, and the microprocessor/CPU can store the LiDAR/point-cloud information in the form of a table); and a control portion controlling the movement mechanism based on the information of the location and attitude of the detection target object, the estimation portion estimating the location and attitude of the detection target object using the extracted point-cloud information as the point-cloud information (AKINOBU paragraph 0017, quoted above, and paragraph 0037, the ICP matching passage quoted above; see also paragraphs 0038-0039. This corresponds to a control portion controlling the movement mechanism based on the information of the location and attitude of the detection target object, with the estimation portion estimating the location and attitude of the detection target object using the extracted point-cloud information as the point-cloud information).

Regarding claim 12, AKINOBU discloses that an extraction target region of the detection target object is designated as a relative location with respect to a self-moving object or a relative location with respect to a surrounding environment of the self-moving object (AKINOBU paragraphs 0016 and 0017, both quoted above).

Allowable Subject Matter

Claims 7 and 13-18 are objected to as being dependent on a rejected claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Communication

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISHRAT I SHERALI, whose telephone number is (571) 272-7398. The examiner can normally be reached Monday-Friday, 8:00 AM - 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

ISHRAT I. SHERALI
Examiner, Art Unit 2667
/ISHRAT I SHERALI/ Primary Examiner, Art Unit 2667

Prosecution Timeline

Jun 16, 2022 — Application Filed
Jun 16, 2022 — Response after Non-Final Action
Jan 24, 2026 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592150
METHOD FOR WARNING COLLISION OF VEHICLE, SYSTEM, VEHICLE, AND COMPUTER READABLE STORAGE MEDIUM
2y 5m to grant — Granted Mar 31, 2026
Patent 12586209
MECHANISM CAPABLE OF DETECTING MOTIONS OF DIFFERENT SURFACE TEXTURES WITHOUT NEEDING TO PERFORM OBJECT IDENTIFICATION OPERATION
2y 5m to grant — Granted Mar 24, 2026
Patent 12579820
LEARNING APPARATUS AND LEARNING METHOD
2y 5m to grant — Granted Mar 17, 2026
Patent 12548308
METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA
2y 5m to grant — Granted Feb 10, 2026
Patent 12542874
Methods and Systems for Person Detection in a Video Feed
2y 5m to grant — Granted Feb 03, 2026
Study what changed to get these applications past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 93%
With Interview: 99% (+5.8%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 761 resolved cases by this examiner. Grant probability derived from career allow rate.
