Prosecution Insights
Last updated: April 19, 2026
Application No. 18/850,969

SYSTEM AND METHOD FOR DETECTING DYNAMIC EVENTS

Non-Final Office Action: §101, §103

Filed: Sep 25, 2024
Examiner: JHA, ABDHESH K
Art Unit: 3668
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: The University of Hong Kong
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable); 99% with interview
Expected OA Rounds: 1-2
Time to Grant: 2y 5m

Examiner Intelligence

Career Allow Rate: 80% (328 granted / 408 resolved), +28.4% vs TC avg; above average
Interview Lift: +18.3% (allow rate for resolved cases with an interview vs. without); strong
Avg Prosecution: 2y 5m typical timeline; 24 applications currently pending
Total Applications: 432 across all art units
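
For clarity, here is a minimal sketch of how these headline figures could be recomputed from per-case outcome data. The record structure and field names are hypothetical; only the 328-granted / 408-resolved totals come from this report.

```python
# Hypothetical per-case records; only the 328/408 totals are from the report.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases):
    """Fraction of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases):
    """Allow rate with an examiner interview minus the allow rate without one."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# 328 grants out of 408 resolved cases -> 0.804, shown above as 80%
print(f"{328 / 408:.1%}")
```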

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 47.2% (+7.2% vs TC avg)
§102: 20.4% (-19.6% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 408 resolved cases.
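
The deltas above imply a single Tech Center baseline near 40%. The short sketch below reproduces that comparison; the 40% baseline is an assumption inferred from the deltas (e.g. 47.2% - 7.2% = 40.0%), not a published figure, and each rate is whatever per-statute metric the report tracks.

```python
# Examiner rates are taken from the table above; the TC baseline is inferred.
examiner_rate = {"§101": 0.100, "§103": 0.472, "§102": 0.204, "§112": 0.134}
tc_average_estimate = 0.40  # assumption implied by the reported deltas

for statute, rate in examiner_rate.items():
    delta = rate - tc_average_estimate
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```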

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION Claims 1-33 are considered in this office action. Claims 1-33 are pending examination. Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Drawings Figures 4 and 7 are objected. The drawings are objected to because it is not clear. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance. Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: an input module configured to capture and a detection module configured to receive in claims 1 and 33. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The examiner is interpreting these limitation as claimed in claims 30 and 31: the input module comprises at least one of a light detection and ranging (LiDAR) sensor, a laser scanner, an ultrasonic sensor, a radar, or any suitable sensor that captures the three-dimensional (3-D) structure of a moving object or a stationary object from the viewpoint of the sensor AND the detection module is one or more of a programmed computer or microcontroller, an application-specific integrated circuit (ASIC), a programable gate array or other analog or digital logic circuit. 
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claim 29 is not in one of the four statutory categories of invention. Claim 29 recites a computer-readable storage medium. The broadest reasonable interpretation of a claim drawn to a computer readable medium typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of “computer readable medium”. In this instance, the specification provides no special definition with respect to the “computer readable device” limiting the broadest reasonable interpretation to non-transitory media. Although claim 8 does recite that the instructions are “tangible”, the broadest reasonable interpretation of “tangible” encompasses light and sound in that both are perceivable by the senses. As a result, claim 29 encompasses within its scope signals per se and is thus not statutory. See In re. Nuijten, 500 F.3rd 1346, 1356-57. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1-32 are rejected under 35 U.S.C. 103 as being unpatentable over Rogan (US9110163) in view Divakaran et al. 
(US20140347475) and herein after will be referred as Rogan and Divakaran. Regarding Claim 1, Rogan teaches a moving object detection system (Fig.5), comprising: an input module configured to capture a point cloud comprising measurements of distances to points on one or more objects (Col.4 Line 11-17: “The first vehicle 102 may comprise a lidar emitter 202 that emits a lidar signal 204 ahead of the first vehicle 102. The lidar reflection 206 of the lidar signal 204 may be detected by a lidar detector 208, and captured as a sequence of lidar point clouds 210 representing, at respective time points 212, the lidar points 214 detected by the lidar detector 208 within the environment 100.”); and a detection module configured to receive the point cloud captured by the input module and configured to determine whether the objects are moving objects (Col.4 Line 51-Col.5 Line 8: “In this exemplary scenario 300, for respective lidar point clouds 210, the lidar points 214 are mapped 302 to a voxel 306 in a three-dimensional voxel space 304. Next, the voxels 306 of the three-dimensional voxel space 304 may be evaluated to detect one or more voxel clusters of voxels 306 (e.g., voxels 306 that are occupied by one or more lidar points 214 in the lidar point cloud 210, and that share an adjacency with other occupied voxels 306 of the three-dimensional voxel space 304, such as within a specified number of voxels 306 of another occupied voxel 306), resulting in the identification 308 of one or more objects 312 within an object space 310 corresponding to the three-dimensional voxel space 304. Next, for the respective lidar points 214 in the lidar point cloud 210, the lidar point 214 may be associated with a selected object 312. The movement of the lidar points 214 may then be classified according to the selected object 312 (e.g., the objects may be identified as moving or, stationary with the object 312 in the three-dimensional voxel space 304). According to the classified movements of the lidar points 214 associated with the object 312 (e.g., added for the object spaces 310 at respective time points 212), a projection 314 of the lidar points 214 and an evaluation of the movements of the lidar points 214 associated with respective objects 312, the movement of the respective objects 312 may be classified.) and controlling the movement of a vehicle on the basis of the object detection (Col.3 Line 35-38: “The results of this analysis, if performed in near-real time, may assist in the navigation of the vehicle 102 (such as matching speed with other nearby vehicles 102 and applying brakes and steering to avoid sudden velocity changes). Rogan may not expressly teach by determining whether currently measured points occlude any previously measured points. Divakaran teaches determining whether currently measured points occlude any previously measured points (Para [0069]: “Longer term occlusions caused by static occluders (trees, buildings, etc.) or dynamic occluders (vehicles, other people, etc.) can be dealt with by the system 100 using the reacquisition module 610 to maintain track IDs notwithstanding the presence of long-term occlusions. In one illustrative example, a group of trees is detected in the video stream 216. The scene-awareness module 210 generates an occlusion zone (OZ) caused by the group of trees. The generating of occlusion zones by the scene awareness module may be performed online (e.g., in real time) or offline, according to the requirements of a particular design of the system 100. 
If a tracked person A enters the OZ, the tracking module 222 associates the person's track A with the OZ. The track A still may be partially visible in the OZ and may be updated for a short period until it is completely occluded. When this occurs, the tracking manager 612 alerts the reacquisition module 610 and the reacquisition module 610 creates and maintains information relating to the occlusion event, e.g., that “track A has entered is now occluded by OZ.” When track A reappears on the boundary of the OZ as a new track, the tracking manager 612 triggers the reacquisition module 610 to recover the new track's identity from all possible tracks that have been occluded by the OZ. The reacquisition module 610 makes the decision to link the new track with the occluded track by checking the appearance models 616 and motion model (e.g., a viable kinematics model) 618 for each tracked object. It should be understood that the OZ described in this example may be created by a static occluder, or may be created dynamically, e.g., by a crowd of people or a vehicle. The reacquisition module 610 maintains track IDs across dynamic (e.g., inter-personal) and static occlusions, and also maintains the track IDs during handoff when there are no overlaps in the FOVs, as described above. To do this, the reacquisition module 610 leverages the information in the appearance models 616, which may be continuously or periodically updated (e.g., through an online or offline process) with descriptive information about the tracked persons or objects.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rogan to incorporate the teachings of Divakaran to include determining whether currently measured points occlude any previously measured points. Doing so would optimize tracking occluded objects. Similarly Claim 15 and 29 are rejected on the similar rational. Regarding Claim 2, Rogan in view Divakaran teaches the moving object detection system of claim 1. Rogan also teaches wherein whether the objects are moving objects is determined either sequentially or simultaneously, the system being configured with other processing modules for performance enhancements (Col.4 Line 51-Col.5 Line 8: “In this exemplary scenario 300, for respective lidar point clouds 210, the lidar points 214 are mapped 302 to a voxel 306 in a three-dimensional voxel space 304. Next, the voxels 306 of the three-dimensional voxel space 304 may be evaluated to detect one or more voxel clusters of voxels 306 (e.g., voxels 306 that are occupied by one or more lidar points 214 in the lidar point cloud 210, and that share an adjacency with other occupied voxels 306 of the three-dimensional voxel space 304, such as within a specified number of voxels 306 of another occupied voxel 306), resulting in the identification 308 of one or more objects 312 within an object space 310 corresponding to the three-dimensional voxel space 304. Next, for the respective lidar points 214 in the lidar point cloud 210, the lidar point 214 may be associated with a selected object 312. The movement of the lidar points 214 may then be classified according to the selected object 312 (e.g., the objects may be identified as moving or, stationary with the object 312 in the three-dimensional voxel space 304). 
According to the classified movements of the lidar points 214 associated with the object 312 (e.g., added for the object spaces 310 at respective time points 212), a projection 314 of the lidar points 214 and an evaluation of the movements of the lidar points 214 associated with respective objects 312, the movement of the respective objects 312 may be classified.) Similarly Claim 16 is rejected on the similar rational. Regarding Claim 3, Rogan in view Divakaran teaches the moving object detection system of claim 1. Divakaran teaches wherein the previously measured points of the moving objects are partially or all completely excluded in the determination of occlusion for currently measured points (Para [0069]). Similarly Claim 17 is rejected on the similar rational. Regarding Claim 4, Rogan in view Divakaran teaches moving object detection system of claim 1. Divakaran also teaches wherein the determination of occlusion is performed based on a depth image by comparing the depth of the currently measured points with previously measured ones that projecting to the same or adjacent pixels of the depth image to determine the occlusion, the occlusion results being corrected by additional tests for performance enhancements (Para [0055] and 0101]). Similarly Claim 18 is rejected on the similar rational. Regarding Claim 5, Rogan in view Divakaran teaches the moving object detection system of claim 4. Divakaran teaches wherein the points are projected to the depth image by a spherical projection, a perspective projection, or a projection that projects points lying on neighboring lines of sight to neighboring pixels (Para [0055] : “In operation, the depth map computation module 322 receives the video stream 216 as input and computes a stereo depth map from the video stream 216, where in this case the video stream 216 includes a pair of images from a vertical stereo camera. The module 322 performs depth-based change detection by executing a change detection algorithm, which also includes background depth modeling, similar to that which is used for monocular cameras, to identify significant foreground pixels. The foreground pixels identified by the change detection algorithm are passed to a geo-space based human segmentation submodule, which computes the detections in the geo-space (including the regions of interest and geo-position of each region of interest). As a result of this process, foreground detection is not affected by shadows and/or other illumination artifacts. The depth of the foreground pixels along with the camera calibration information is used to locate the foreground pixels in 3D. The reconstructed 3D points are then projected and subsequently accumulated on to the ground plane, where an efficient mode-seeking algorithm (e.g., the mean-shift algorithm) using human sized kernels locates local peaks corresponding to human forms. These positive detections of human forms on the ground plane, when mapped back into the image, provide detection regions of interest and segmentations of the pixels that correspond to the detected individuals. The detection stream generated by the geo-space based human segmentation is output to the occlusion reasoning engine 316 and incorporated into the occlusion reasoning as described above.”). Similarly Claim 19 is rejected on the similar rational. Regarding Claim 6, Rogan in view Divakaran teaches the moving object detection system of claim 5. 
wherein in a moving platform, the depth image is attached with to a pose read from an external motion sensing module, indicating under which pose the depth image is constructed and points are configured to be transformed to this pose before projection to the depth image. Similarly Claim 20 is rejected on the similar rational. Regarding Claim 7, Rogan in view Divakaran teaches the moving object detection system of claim 5. Divakaran teaches wherein for each pixel of the depth image, the detection module is configured to save all or a selected number of points projected therein, and/or all or a select number of the depths of points projected therein, and/or the statistical information comprising a minimum value, a maximum value, or a variance of depths of all or a selected number of points projected therein, and/or other information of the occluded points attached to points projected therein (Para [0055]). Similarly Claim 21 is rejected on the similar rational. Regarding Claim 8, Rogan in view Divakaran teaches the moving object detection system of claim 5. Divakaran teaches wherein multiple depth images are constructed at multiple prior each is constructed from points starting from the respective pose and accumulating for a certain period of time (Para [0049] : “The occlusion reasoning engine 316 applies the static and dynamic occlusion maps 212, 214 and the track stream 224 fed back from the tracking module 222 to the output of the part based human detector(s) 314 and the output of the stereo component 320, if available. With these inputs, the occlusion reasoning engine 316 explicitly reasons about the occlusion of the various body parts detected by the part-based detectors 314 by jointly considering all of the part detections produced by all of the detectors of persons and/or body parts. The occlusion reasoning engine 316 may utilize, for example, a joint image likelihood function that is defined for multiple, possibly inter-occluded humans. The occlusion reasoning engine 316 may then formulate the multiple human detection problem as, for example, a Maximum A Posteriori (MAP) problem, and then search the solution space to find the best interpretation of the image observations. The occlusion reasoning engine 316 performs this reasoning by, for example, estimating the “Z-buffer” of the responses obtained by estimating the head location according to the relative position of the other detected body parts."). Similarly Claim 22 is rejected on the similar rational. Regarding Claim 9, Rogan in view Divakaran teaches the moving object detection system of claim 8. Divakaran teaches wherein for each point of a pixel, the detection module is configured to save the points in a previous depth image that occludes the point or are occluded by the point (Para [0063]: “When a track enters the POZ of another track, an occlusion in the image is typically imminent. When a person is occluded, detections may not be available depending on the extent of the occlusion. In addition, the appearance and motion models 616, 618 may start to deteriorate. Therefore, to prevent the occluded track from abruptly terminating or going astray, the occlusion reasoning engine 316 ties the occluded track to the track of the occluder when no detections are found, as shown in FIG. 7. In FIG. 7, a person A located at a position 710 has an occlusion zone 712. Another person B located at a position 714 has been detected as moving along a track 716. 
The track 716 intersects with an edge of the occlusion zone 712 when person B reaches the location 714. The track 716 is linked with the occluding track of person A for the duration of the occlusion, that is, while person B is within the personal occlusion zone 712 of person A. This allows the tracking module 222 to maintain tracks for heavily occluded persons as long as the tracks can be assumed to be in the occlusion zone of another tracked person. In this way, the tracking module 222 can prevent the uncertainty of the occluded track from disrupting the tracking. Rather, the uncertainty of the occluded track is limited to the size of the POZ of the occluding track.”). Similarly Claim 23 is rejected on the similar rational. Regarding Claim 10, Rogan in view Divakaran teaches the moving object detection system of claim 8. Divakaran teaches wherein the occlusion of current points is determined against all or a selected number of depth images previously constructed (Para [0101] : “An example 28 includes the subject matter of any of examples 23-27, wherein the human detection module includes a stereo-based human detector to receive a second video stream from a second camera, compute a depth map using images from the video stream and the second video stream, and use the depth map to determine the geo-location of the temporarily and/or partially occluded person in three dimensions.”). Similarly Claim 24 is rejected on the similar rational. Regarding Claim 11, Rogan in view Divakaran teaches the moving object detection system of claim 10. Divakaran teaches depth image concept to determine occluded objects and hence would be obvious an ordinary person skilled in the art to further determine occluded point based on the depth measurements. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rogan and Divakaran to incorporate the teachings of obviousness to include herein a current point is determined to occlude previous points if its depth is smaller than all or any points contained in adjacent pixels of any depth image to which it projects. Doing so would optimize the occlusion determination of the object. Similarly Claims 12-14 and 25-28 are rejected on the similar rational. Regarding Claim 30, Rogan in view Divakaran teaches the moving object detection system of claim 1. Rogan teaches wherein the input module comprises at least one of a light detection and ranging (LiDAR) sensor, a laser scanner, an ultrasonic sensor, a radar, or any suitable sensor that captures the three-dimensional (3-D) structure of a moving object or a stationary object from the viewpoint of the sensor (Col.3 Line 58-63). Regarding Claim 31, Rogan in view Divakaran teaches the moving object detection system of claim 1. Rogan teaches wherein the detection module is one or more of a programmed computer or microcontroller, an application-specific integrated circuit (ASIC), a programable gate array or other analog or digital logic circuit (Fig.5). Allowable Subject Matter Claim 32 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claim 33 is allowed. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Vathauer et al. 
(US11442591) teaches a computer-implemented method comprising determining, by the processor, a set of objects of the plurality of objects having the shared coordinates (XS, YS) and at a location along the depth direction (ZS); and prioritizing, by the processor, an object from the set of objects based on at least two of metadata of the set of objects, screen areas of the set of objects, transparency of the set of objects, and opaqueness of at least one object of the set of objects currently displayed to improve the selection of at least one of mutually occluded objects and mutually partially occluded objects in the virtual environment. The method includes associating the prioritized object with the viewer input device for detecting interactions with the prioritized object displayed on the display device by the viewer input device. The prioritized object is updated on the screen of the display device based on the interactions. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDHESH K JHA whose telephone number is (571)272-6218. The examiner can normally be reached M-F:0800-1700. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James J Lee can be reached at 571-270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ABDHESH K JHA/Primary Examiner, Art Unit 3668
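
For readers mapping the rejection to the claim language, here is a minimal, illustrative sketch of the occlusion test that claims 1, 4, 5, and 11 describe as characterized above: previously measured points are projected into a depth image (here via a spherical projection), and a currently measured point is treated as occluding prior points when its range is smaller than a stored range in the same or an adjacent pixel. The image resolution, field of view, and helper names are hypothetical, and this is not the applicant's implementation or code from Rogan or Divakaran.

```python
# Illustrative sketch only: hypothetical resolution, field of view, and names.
import math
import numpy as np

H, W = 64, 512               # hypothetical angular resolution of the depth image
FOV_UP = math.radians(15.0)  # hypothetical vertical field of view (+/- 15 deg)

def spherical_pixel(x, y, z):
    """Map a 3-D point to a (row, col) pixel by elevation and azimuth."""
    azimuth = math.atan2(y, x)                    # -pi .. pi
    elevation = math.atan2(z, math.hypot(x, y))
    col = int((azimuth + math.pi) / (2.0 * math.pi) * (W - 1))
    row = int((elevation + FOV_UP) / (2.0 * FOV_UP) * (H - 1))
    return min(max(row, 0), H - 1), min(max(col, 0), W - 1)

def build_depth_image(points):
    """Store the nearest previously measured range seen in each pixel."""
    depth = np.full((H, W), np.inf)
    for x, y, z in points:
        r, c = spherical_pixel(x, y, z)
        depth[r, c] = min(depth[r, c], math.sqrt(x * x + y * y + z * z))
    return depth

def occludes_previous(point, depth, window=1):
    """True if the current point lies closer than a previously measured
    range stored in the same or an adjacent pixel of the depth image."""
    x, y, z = point
    r, c = spherical_pixel(x, y, z)
    rng = math.sqrt(x * x + y * y + z * z)
    patch = depth[max(r - window, 0):r + window + 1,
                  max(c - window, 0):c + window + 1]
    finite = patch[np.isfinite(patch)]
    return bool(finite.size) and bool(np.any(rng < finite))
```

A full pipeline would also exclude previously detected moving objects from the occlusion test (claim 3) and transform points to the pose under which each depth image was built on a moving platform (claim 6); this sketch omits both.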

Prosecution Timeline

Sep 25, 2024: Application Filed
Feb 19, 2026: Non-Final Rejection under §101 and §103 (current)

Precedent Cases

Applications granted by this examiner that involve similar technology

Patent 12602959: VEHICLE STORAGE MANAGEMENT SYSTEM, STORAGE MEDIUM, AND STORAGE MANAGEMENT METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592100: VEHICLE-BASED DATA OPTIMIZATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12572156: SYSTEMS AND METHODS FOR LANDING SITE SELECTION AND FLIGHT PATH PLANNING FOR AN AIRCRAFT USING SOARING WEATHER (granted Mar 10, 2026; 2y 5m to grant)
Patent 12573250: Used car AI performance inspection system based on acoustic data analysis, and processing method therefor (granted Mar 10, 2026; 2y 5m to grant)
Patent 12555419: METHOD FOR REAL-TIME ECU CRASH REPORTING AND RECOVERY (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%; 99% with interview (+18.3%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 408 resolved cases by this examiner. Grant probability is derived from the career allow rate.
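
A short sketch of the derivation the note above describes ("grant probability derived from career allow rate"), assuming a simple additive interview adjustment; the exact combination and rounding rule used by the report is not stated.

```python
# Assumed additive combination; the report's exact rule is not stated.
career_allow_rate = 328 / 408   # ~0.804, shown as 80%
interview_lift = 0.183          # +18.3% allow-rate lift when an interview is held

grant_probability = career_allow_rate                          # ~0.80
with_interview = min(career_allow_rate + interview_lift, 1.0)  # ~0.99
```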
