Prosecution Insights
Last updated: April 19, 2026
Application No. 18/385,881

OBJECT TRACKING METHOD AND DEVICE

Status: Non-Final Office Action (§103)
Filed: Oct 31, 2023
Examiner: ANSARI, TAHMINA N
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Institute For Information Industry
OA Round: 1 (Non-Final)

Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (743 granted / 868 resolved), +23.6% vs Tech Center average (above average)
Interview Lift: +17.9% (strong), comparing allow rates for resolved cases with vs. without an interview
Typical Timeline: 2y 8m average prosecution; 33 applications currently pending
Career History: 901 total applications across all art units

Statute-Specific Performance

Statute | Rate  | vs TC Avg
§101    | 12.2% | -27.8%
§103    | 40.4% | +0.4%
§102    | 22.6% | -17.4%
§112    | 10.5% | -29.5%

Tech Center average is an estimate; figures based on career data from 868 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

Claims 1-18 are pending in this application. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Taiwan on August 30, 2023. Applicant has filed a certified copy of the TW112132754 application as required by 37 CFR 1.55 on November 20, 2023, and the priority is duly noted.

Claim Objections

Claim 2 is objected to for being dependent upon itself, leading to indefiniteness. The preamble of the claim recites “2. The object tracking method according to claim 2…” and requires correction, as a claim cannot be dependent upon itself. For purposes of examination, this claim is being examined as being dependent upon independent Claim 1. Appropriate correction is required.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Najafi et al. (US PGPub 2023/0137904, filed on October 27, 2022, with provisional priority dated October 28, 2021, hereby referred to as “Najafi”), in view of Lambert et al. (US Patent 12,183,200 filed on November 22, 2022, hereby referred to as “Lambert”).

Consider Claims 1 and 10. Najafi teaches 1. An object tracking method, performed by a processor, comprising: / 10. An object tracking device, comprising: (Najafi: abstract, A weight support device includes a sensor grid that measures pressure data while a user is on the weight support device. The weight support device is connected to a computer that analyzes the pressure data and generates a virtual figure to represent the user. Based on the pressure data, the computer determines how the user moves and adjusts relative positions of segments in the virtual figure that represent various body parts corresponding to the movements of the user. The relative positions of the segments may be determined based on a kinematic model. The virtual figure is presented on a display (e.g., in a video) to illustrate how the user moved. [0033]-[0040], Figure 1,) 10. a camera element configured to obtain a plurality of consecutive images over a continuous period of time, wherein the plurality of consecutive images comprise a current image and a plurality of historical images; (Najafi: [0079] FIGS.
5C-5H illustrate a user 580 and a virtual figure 570 that is updated as the user 580 moves, in accordance with some embodiments. The virtual figure 570 may be updated in real-time in synchronization with the user 580 based on pressure data collected by the weight support device 110. Alternatively, or additionally, the user movement may be recorded by a camera and the virtual figure 570 may be replayed along with the video of the user movement. When pressure data is not available to determine key point locations of a body part, the key point locations are predicted for the user 580 to be in a neutral or comfortable position. [0033]-[0040], Figure 1) 10. a memory configured to store a plurality of historical objects, a plurality of historical moving traces and the plurality of consecutive images; and a processor connected to the camera element and the memory, and configured to perform: (Najafi: [0033], Fig. 1, Fig. 3, [0050] The training engine module 146 trains various machine learning models of the computing server 140 applied by the side prediction engine module 148, the key point location prediction engine module 150, the fall outcome engine module 154, the pressure injury prediction engine module 156, and the sleep quality prediction engine module 158. The training techniques for a machine learning model may be supervised, semi-supervised, or unsupervised. In supervised learning, the machine learning models may be iteratively trained with a set of training samples that are labeled. [0051] In some embodiments, a machine learning model used by the side prediction engine module 148 receives pressure data collected by the weight support device 110 as input and outputs a side label that identifies which side of the user was in contact with the weight support device 110 at the time the pressure data was collected. In one embodiment, the side label may be one of the following: prone, supine, left side, right side. Each training sample for training the machine learning model may include historical pressure data collected by the weight support device 110 while a historical user is lying on the weight support device 110 and a corresponding side label identifying which side of the historical user's body was in contact with the weight support device 110 at the time the historical pressure data was collected.) 1. extracting a plurality of historical moving traces corresponding to a plurality of historical objects from a plurality of historical images, and predicting a plurality of predicted locations and a plurality of predicted object boxes of the plurality of historical objects; / 10. extracting the plurality of historical moving traces corresponding to the plurality of historical objects from the plurality of historical images, and predicting a plurality of predicted object boxes of the plurality of historical objects; (Najafi: [0046] The figure generation engine module 144 generates a virtual figure representative of the user, and may include a machine learning model or alternative processing mechanism in some embodiments. A virtual figure is a graphical representation of the user. The virtual figure may be rendered (e.g., on the local computer 120, the user device 170) to illustrate the movements of the user determined by the sensor data collected by the weight support device 110. Fig. 
3, [0051] In some embodiments, a machine learning model used by the side prediction engine module 148 receives pressure data collected by the weight support device 110 as input and outputs a side label that identifies which side of the user was in contact with the weight support device 110 at the time the pressure data was collected. In one embodiment, the side label may be one of the following: prone, supine, left side, right side. Each training sample for training the machine learning model may include historical pressure data collected by the weight support device 110 while a historical user is lying on the weight support device 110 and a corresponding side label identifying which side of the historical user's body was in contact with the weight support device 110 at the time the historical pressure data was collected. In one embodiment, to determine which side label is associated with the training sample, an image of the historical user lying on the weight support device 110 is presented to an annotator that reviews the image and provides the side label. In another embodiment, the image of the historical user may be provided to an image recognition mode (which is another machine learning model) that is trained to determine which side of the user is in contact with the weight support device 110. In yet another embodiment, an image of the historical user may not be used, and the historical user or an observer may provide the side label in association with the pressure data. Details on applying the trained machine learning model is described below with respect to FIG. 3 . [0042]-[0066], Figure 1) 1. determining a state of each of the plurality of historical objects is one of a static state and a moving state according to a heat map, wherein the heat map is generated according to the plurality of historical images; / 10. determining a state of each of the plurality of historical objects is one of a static state and a moving state according to a heat map, wherein the heat map is generated according to the plurality of historical images; (Najafi: [0037] In some embodiments, a weight support device 110 (or a computer that processes the raw data of the weight support device 110) may transmit data in a secure network environment to a caretaker (e.g., a healthcare professional) via a management dashboard (e.g., at a nursing station, front management desk in a retirement home, etc.) of a user device 170 to highlight the state and status of the user being monitored. The user device 170 may also provide an alert system alerting the caretaker that a user is at risk of developing a pressure injury and/or at risk of falling off the weight support device 110, that an adjustment of a positioning of the patient is needed, when the adjustment is needed, etc. The user device 170 may be communicatively coupled to more than one weight support device 110 and provide the status of multiple users to the caretaker. The user device 170 may prioritize the users and provide their status accordingly based on each user's risk of developing a pressure injury and/or risk of falling, when each user needs their positioning adjusted, and/or how long an area or areas of each user has been experiencing high-pressure.[0052]…In one embodiment, sensor data from the weight support device 110 may be converted to various colors or greyscales to illustrate pressure distribution on the weight support device 110 for a historical user and presented to an annotator. 
For example, the pressure heatmap may be a greyscale heatmap in which higher pressure values are associated with darker colors. Because the area of the weight support device 110 without a user should detect significantly less pressure than the area on which the user is currently positioned on, the area without the user may be represented in white. The pressure heat map may show the shape of the user's body, and the annotator may interact with the pressure heat map to indicate where the different joints are located with respect to the pressure heat map (e.g., click on a first point on the heatmap and label the point as “left knee, click on a second point on the heatmap and label the point as “effector head”). In some embodiments, the heatmap may also be in a color scheme. For example, low-pressure areas will be shown using cooler colors and high-pressure areas will be shown using warmer colors. In some embodiments, the pressure heatmap may be presented to the annotator with an image of the user lying on the weight support device 110 to help the annotator determine which part of the heatmap corresponds to the joints. [0042]-[0066], Figure 1) 1. extracting a plurality of current bounding boxes corresponding to a plurality of current objects from a current image, wherein the plurality of historical images and the current image are consecutive images obtained over a continuous period of time; / 10. extracting a plurality of current bounding boxes corresponding to a plurality of current objects from the current image; (Najafi: [0052]…In one embodiment, sensor data from the weight support device 110 may be converted to various colors or greyscales to illustrate pressure distribution on the weight support device 110 for a historical user and presented to an annotator. For example, the pressure heatmap may be a greyscale heatmap in which higher pressure values are associated with darker colors. Because the area of the weight support device 110 without a user should detect significantly less pressure than the area on which the user is currently positioned on, the area without the user may be represented in white. The pressure heat map may show the shape of the user's body, and the annotator may interact with the pressure heat map to indicate where the different joints are located with respect to the pressure heat map (e.g., click on a first point on the heatmap and label the point as “left knee, click on a second point on the heatmap and label the point as “effector head”). In some embodiments, the heatmap may also be in a color scheme. For example, low-pressure areas will be shown using cooler colors and high-pressure areas will be shown using warmer colors. In some embodiments, the pressure heatmap may be presented to the annotator with an image of the user lying on the weight support device 110 to help the annotator determine which part of the heatmap corresponds to the joints. [0042]-[0066], Figure 1 [0042]-[0066], Figure 1) 1. comparing and calculating a similarity value between the plurality of predicted object boxes and the plurality of current bounding boxes respectively, and when the similarity value is higher than a threshold value, corresponding one of the plurality of current objects to one of the plurality of historical objects, and generating at least one labelled object box; / 10. 
comparing and calculating a similarity value between the plurality of predicted object boxes and the plurality of current bounding boxes respectively, and when the similarity value is higher than a threshold value, corresponding one of the plurality of current objects to one of the plurality of historical objects, and generating at least one labelled object box; (Examiner note: key point location prediction engine model is analogous in scope to predicted object boxes for a plurality of history objects, while the observed key points is analogous in scope to current objects and the joints are labeled objects; Najafi: [0071] Similarly, the key point location prediction engine module 150 may apply a trained machine learning model that receives pressure data and outputs key point locations of a user at different instances. Depending on the virtual model, there can be a predetermined number of joints (e.g., 14 joints), and the key point location prediction engine module 150 may output the key point locations as two-dimensional coordinates and a confidence probability between 0 and 1 for each key point location. The computer may then generate a skeleton of the user and determine the 2D spatial coordinates for the joints (e.g., the 14 joints) along with their probability. The example 14 joints may be hip, effector head, right shoulder, right forearm, right hand, left shoulder, left forearm, left hand, right thigh, right shin, right foot, left thigh, left shin, and left foot. [0072] In some embodiments, the machine learning model may be unable to determine key point locations for one or more of the joints with a confidence probability greater than or equal to a confidence threshold. For example, when a user is lying on their left side with their right arm resting on the torso, the right side of the body is not in contact with the weight support device 110, so the key point location of the user's right shoulder, the right elbow, and the right wrist cannot be determined based on the pressure data. Accordingly, the output probabilities of the machine learning model for key point locations of the right shoulder, the right elbow, and the right wrist may be less than the confidence threshold. When one or more key point locations have a confidence probability that is less than the confidence threshold, the key point location prediction engine module 150 may predict the one or more key point locations for the user to be in a neutral or comfortable position based on observations in poses of historical users. [0073] In one embodiment, the key point location prediction engine module 150 maintains a set of default key point locations for each type of joint. For each of one or more joints that do not satisfy the confidence threshold, the key point location prediction engine module 150 may use the default key point location for that joint. …In another embodiment, the default key point locations are determined by the computing server 140 by analyzing training data for the various machine learning models of the computing server 140 from most frequently occurring poses in the training data) 1. and updating the heat map and at least one of the plurality of historical moving traces using the at least one labelled object box, wherein the at least one labelled object box is in the static state or the moving state. / 10. 
and updating the heat map and at least one of the plurality of historical moving traces using the at least one labelled object box, wherein the at least one labelled object box is in the static state or the moving state. (Najafi: [0069] FIG. 3 is a conceptual diagram illustrating an example algorithmic pipeline for making side predictions and key point location predictions, in accordance with some embodiments. Each of the side prediction engine module 148 and the key point location prediction engine module 150 may receive a time series of pressure readings 310 (e.g., a time series of pressure data) and perform various analyses on the pressure readings 310. The analyses are related to predicting a time series of side labels 320 and predicting a time series of key point locations 340 at several instances in time during a measurement period, respectively. [0075] FIG. 4 is a conceptual diagram illustrating an example computer-implemented process for generating a virtual figure, in accordance with some embodiments. The time series of side labels 320 and the time series of key point locations 340 may be provided as input to the kinematic engine module 152. The kinematic engine module 152 determines movements of one or more of the head representation, the torso representation, and the limb representation based on changes in the side labels and key point locations between timestamps. The kinematic engine module 152 is configured to iteratively determine how a user moves from one pose (which can be described by the key point locations) at a first timestamp to another pose at a next timestamp. [0080] FIG. 5I illustrates an example graphical user interface used to track pressure exposure over time, in accordance with some embodiments. In the graphical user interface, a heat map 590 illustrating the magnitude of the pressure data measured at various locations on the user's body (e.g., 0.447 at right ankle, 0.264 at left elbow). In some embodiments, the exposure history of pressure may be represented using a virtual figure 595 to highlight locations of the user's body that have a high risk of developing pressure injuries. Although not illustrated in FIG. 5I, the virtual figure 595 may be a three-dimensional model, and a user may interact with the virtual FIG. 595 by rotating the virtual figure 595 , zooming in/out to view portions of the virtual FIG. 595 in more detail. In some embodiments, the heat map 590 of the pressure data may be projected onto the virtual figure 595 and change in real-time as the pressure data updates with user's movements or with previously recorded pressure data.) Even if Najafi does not specifically teach: “extracting the plurality of historical moving traces” or “historical objects is one of a static state and a moving state according to a heat map” or “extracting a plurality of current bounding boxes” Lambert teaches 1. An object tracking method, performed by a processor, comprising: / 10. An object tracking device, comprising: (Lambert: abstract, The present application discloses a method, system, and computer system for providing a unified map and performing an active measure for re-routing transport based on one or more hazards along a current route. 
The method includes obtaining context information for a managed vehicle, obtaining map data, determining, based at least in part on the context information and the map data, whether the managed vehicle is parked in an unsafe location, in response to determining that the managed vehicle is parked in the unsafe location, determining whether to perform an active measure with respect to the managed vehicle, and in response to determining to perform the active measure, performing the active measure. column 17 lines 14-67, column 18 lines 1-44, FIGS. 1-2) 10. a camera element configured to obtain a plurality of consecutive images over a continuous period of time, wherein the plurality of consecutive images comprise a current image and a plurality of historical images; a memory configured to store a plurality of historical objects, a plurality of historical moving traces and the plurality of consecutive images; and a processor connected to the camera element and the memory, and configured to perform: (Lambert, column 17 lines 14-67, column 18 lines 1-44, FIG. 2 is a block diagram of a fleet management service for managing managed vehicles according to various embodiments of the present application. According to various embodiments, system 200 implements at least part of process 300 of FIG. 3 , process 400 of FIG. 4 , process 500 of FIG. 5 , process 600 of FIG. 6 , process 700 of FIG. 7 , process 800 of FIG. 8 , process 900 of FIG. 9 , process 1000 of FIG. 10 , process 1100 of FIG. 11 , process 1200 of FIG. 12 , process 1300 of FIG. 13 , process 1400 of FIG. 14 , process 1500 of FIG. 15 , and/or process 1600 of FIG. 16 . In the example shown, system 200 implements one or more modules in connection with managing a fleet of managed vehicles, managing parking for the fleet of managed vehicles, detecting that a parked vehicle is a sitting duck, determining parking conditions, classifying parking conditions (e.g., criteria according to which a parking condition is identified, such as a flash flood warning in or around a flood zone, etc.), recommending or implementing an active measure for a managed vehicle, etc. System 200 comprises communication interface 205, one or more processors 210, storage 215, and/or memory 220. One or more processors 210 comprises, or implements, one or more of communication module 225, vehicle information acquisition module 227, map data acquisition module 229, traffic and weather data (e.g., live or current traffic and current weather data) acquisition module 231, context analysis module 233, parking assessment module 235, active measure module 237, and/or user interface module 239.) 1. extracting a plurality of historical moving traces corresponding to a plurality of historical objects from a plurality of historical images, and predicting a plurality of predicted locations and a plurality of predicted object boxes of the plurality of historical objects; / 10. extracting the plurality of historical moving traces corresponding to the plurality of historical objects from the plurality of historical images, and predicting a plurality of predicted object boxes of the plurality of historical objects; (Lambert: column 20 lines 54-67, column 21, column 22 lines 1-3 For example, system 200 uses parking assessment module 235 to assess whether one or more managed vehicles are parked at predetermined intervals, such as in connection with performing active monitoring of the managed vehicle(s). 
Parking assessment module 235 can monitor a plurality of vehicles (e.g., an entire fleet or a subset of the fleet) to detect when one or more of such vehicles are stopped/parked. Alternatively, parking assessment module 235 can monitor a vehicle individually, such as based on a selection from a fleet manager to monitor or query a status of the vehicle. In some embodiments, parking assessment module 235 determines whether a vehicle is parked based at least in part on whether the vehicle is moving (e.g., whether a vehicle has moved a threshold distance over threshold time, such as whether the vehicle moved more than 2 m over 30 seconds). As an example, a vehicle is deemed parked if the vehicle has remained stationary for a threshold period of time or based on a composite score computed based on whether the vehicle is stationary and/or one or more other heuristics that are indicative of whether the vehicle is parked. In some embodiments, parking assessment module 235 uses various other context information in connection with determining whether the vehicle is parked. In some embodiments, parking assessment module 235 uses the context information to determine whether the vehicle is parked based at least in part on using one or more heuristics to assess whether the vehicle is parked. The heuristics may be configurable to allow a fleet manager to adjust a sensitivity of parked vehicle determinations. As an example, the heuristics are preset by a fleet manager or a subject matter expert. As another example, the heuristics are empirically determined, such as by using a machine learning process to analyze historical information to derive the heuristics. Examples of heuristics include (i) the vehicle having moved a threshold distance over a threshold time and that the vehicle movement patterns prior to this predefined time period is consistent with a vehicle entering a parked state is indicative of the vehicle not being parked (or increases the likelihood that the vehicle is not parked), (ii) the vehicle being located in a permitted parking area is indicative of the vehicle being parked (or increases the likelihood that the vehicle is parked), (iii) the centroid of the vehicle being in the middle (or middle third) of a road is indicative of the vehicle not being parked (e.g., increases the likelihood that the vehicle is not parked), (iv) traffic data for the road along which vehicle is on, or in proximity to, indicates that traffic is very congested or moving slow is indicative of the vehicle not being parked (e.g., or increases the likelihood that the vehicle is not parked), and/or (v) the vehicle being stationary (e.g. has not moved over a threshold period of time and/or that the vehicle movement patterns prior to this period of time are consistent with a vehicle entering a parked state) and not within the middle third of the road or is located at least a threshold distance from the road is indicative of the vehicle being parked. In some embodiments, using the vehicle movement patterns reduces false positives for a parked location being incorrectly assigned a location by the device GPS on a highway, where in reality the vehicle is actually in a nearby parking lot. In some embodiments, detection of an erratic GPS movements caused by GPS jitter (e.g., at start up, due to low signal, due to dead reckoning, etc.) is used to decrement a sitting duck confidence score. Parking assessment module 235 uses the one or more heuristics to determine a prediction of whether the vehicle is parked. 
In some embodiments, parking assessment module 235 uses a scoring function that is configured based at least in part on the one or more heuristics. The scoring function uses weighted values for the various predefined heuristics to determine a composite score corresponding to a prediction of whether, or likelihood that, the vehicle is parked. In response to obtaining the composite score, parking assessment module 235 uses the composite score to determine whether the vehicle is parked. For example, the composite score is compared to predefined parked scoring threshold, and if the composite score is greater than or equal to the predefined parked scoring threshold, parking assessment module 235 deems the vehicle to be parked. The predefined parked scoring threshold may be configurable to permit the fleet manager or other administrator to adjust the sensitivity of detecting parked vehicles (e.g., to adjust a false positive rate, etc.).) 1. determining a state of each of the plurality of historical objects is one of a static state and a moving state according to a unified map, wherein the unified map is generated according to the plurality of historical images; / 10. determining a state of each of the plurality of historical objects is one of a static state and a moving state according to a unified map, wherein the unified map is generated according to the plurality of historical images; (Lambert: column 21 lines 11-67, column 22 lines 1-3 In some embodiments, parking assessment module 235 determines whether a vehicle is parked based at least in part on whether the vehicle is moving (e.g., whether a vehicle has moved a threshold distance over threshold time, such as whether the vehicle moved more than 2 m over 30 seconds). As an example, a vehicle is deemed parked if the vehicle has remained stationary for a threshold period of time or based on a composite score computed based on whether the vehicle is stationary and/or one or more other heuristics that are indicative of whether the vehicle is parked. In some embodiments, parking assessment module 235 uses various other context information in connection with determining whether the vehicle is parked. In some embodiments, parking assessment module 235 uses the context information to determine whether the vehicle is parked based at least in part on using one or more heuristics to assess whether the vehicle is parked. The heuristics may be configurable to allow a fleet manager to adjust a sensitivity of parked vehicle determinations. As an example, the heuristics are preset by a fleet manager or a subject matter expert. As another example, the heuristics are empirically determined, such as by using a machine learning process to analyze historical information to derive the heuristics. 
Examples of heuristics include (i) the vehicle having moved a threshold distance over a threshold time and that the vehicle movement patterns prior to this predefined time period is consistent with a vehicle entering a parked state is indicative of the vehicle not being parked (or increases the likelihood that the vehicle is not parked), (ii) the vehicle being located in a permitted parking area is indicative of the vehicle being parked (or increases the likelihood that the vehicle is parked), (iii) the centroid of the vehicle being in the middle (or middle third) of a road is indicative of the vehicle not being parked (e.g., increases the likelihood that the vehicle is not parked), (iv) traffic data for the road along which vehicle is on, or in proximity to, indicates that traffic is very congested or moving slow is indicative of the vehicle not being parked (e.g., or increases the likelihood that the vehicle is not parked), and/or (v) the vehicle being stationary (e.g. has not moved over a threshold period of time and/or that the vehicle movement patterns prior to this period of time are consistent with a vehicle entering a parked state) and not within the middle third of the road or is located at least a threshold distance from the road is indicative of the vehicle being parked. In some embodiments, using the vehicle movement patterns reduces false positives for a parked location being incorrectly assigned a location by the device GPS on a highway, where in reality the vehicle is actually in a nearby parking lot. In some embodiments, detection of an erratic GPS movements caused by GPS jitter (e.g., at start up, due to low signal, due to dead reckoning, etc.) is used to decrement a sitting duck confidence score. Parking assessment module 235 uses the one or more heuristics to determine a prediction of whether the vehicle is parked. In some embodiments, parking assessment module 235 uses a scoring function that is configured based at least in part on the one or more heuristics. The scoring function uses weighted values for the various predefined heuristics to determine a composite score corresponding to a prediction of whether, or likelihood that, the vehicle is parked. In response to obtaining the composite score, parking assessment module 235 uses the composite score to determine whether the vehicle is parked. For example, the composite score is compared to predefined parked scoring threshold, and if the composite score is greater than or equal to the predefined parked scoring threshold, parking assessment module 235 deems the vehicle to be parked. The predefined parked scoring threshold may be configurable to permit the fleet manager or other administrator to adjust the sensitivity of detecting parked vehicles (e.g., to adjust a false positive rate, etc.).) 1. extracting a plurality of current bounding boxes corresponding to a plurality of current objects from a current image, wherein the plurality of historical images and the current image are consecutive images obtained over a continuous period of time; / 10. extracting a plurality of current bounding boxes corresponding to a plurality of current objects from the current image; (Lambert: column 22 lines 26-67, column 23 lines 1-3 In some embodiments, system 200 (e.g., parking assessment module 235) determines (e.g., computes) a bounding box for the vehicle based at least in part on location data for the vehicle. 
The bounding box for the vehicle or location of a vehicle relative to a bounding box can be used to detect whether the vehicle is moving or stationary. For example, the bounding box is determined based on the centroid of the location data for the vehicle. The system computes the bounding box based at least in part on the location data for the vehicle and (i) vehicle information (e.g., a type of trailer, a type of truck, etc.) mapped to the vehicle, such as in a mapping of vehicle identifiers to vehicle information, or (ii) a predefined standard size/shape of vehicles. The centroid of the vehicle location data can be set to correspond to the centroid of the bounding box. In the case of a fleet of transport trucks, the size/shape of the transport trucks in the fleet may be the same, and thus the predefined standard/size/shape of the transport trucks can be configured (e.g., by a user such as the fleet manager). In some embodiments, the bounding box is sufficiently large to include the vehicle therein and takes into account the jitter of location data over a threshold period of time. For example, location data obtained using a global positioning data may experience jitter/imprecision even though the underlying object has not moved. If the bounding box is larger than the range over which jitter associated with the location data may be observed, system 200 determines that the vehicle has moved if the vehicle location is outside the bounding box (or a threshold percentage of the vehicle has moved outside the bounding box). As an example, the size of the bounding box may be determined empirically based on historical location data and deviations in location data for a stationary object. As another example, the size of the bounding box may be dynamic and based on the system computing a current amount of jitter being observed in the vehicle location (or vehicle locations for a set of vehicles) and the size/shape of the vehicle. In some embodiments, parking assessment module 235 may determine that the vehicle has moved (e.g., is not stationary) if the centroid for the current location data is outside the boundaries of the bounding box. In some embodiments, parking assessment module 235 may determine that the vehicle has moved (e.g., is not stationary) if a threshold percentage of a second bounding box generated based on current location data is outside the boundaries of a first bounding box generated on previous location data.) 1. comparing and calculating a similarity value between the plurality of predicted object boxes and the plurality of current bounding boxes respectively, and when the similarity value is higher than a threshold value, corresponding one of the plurality of current objects to one of the plurality of historical objects, and generating at least one labelled object box; / 10. comparing and calculating a similarity value between the plurality of predicted object boxes and the plurality of current bounding boxes respectively, and when the similarity value is higher than a threshold value, corresponding one of the plurality of current objects to one of the plurality of historical objects, and generating at least one labelled object box; (Examiner Note: when the parking assessment module determines the status of a vehicle it generates a labelled object box; Lambert: column 20 lines 30-67, column 21 lines 1-11 In some embodiments, system 200 comprises parking assessment module 235. 
System 200 uses parking assessment module 235 in connection with determining whether one or more managed vehicles are parked, and in response to determining that a vehicle is parked, determining whether the parked vehicle is predicted to be a sitting duck (e.g., a pending sitting duck). As an example, parking assessment module 235 uses the context (e.g., context information) obtained by context analysis module 233 to determine whether the vehicle is parked and/or whether the parked vehicle is predicted to be a sitting duck. In some embodiments, in response to determining that the vehicle is predicted to be a sitting duck, parking assessment module 235 confirms that the vehicle is indeed parked. For example, parking assessment module 235 confirms that vehicle is parked by waiting a predefined period of time and determining whether the vehicle is still in a parked state (e.g., that the vehicle has not moved more than a threshold distance over the predefined period of time). As an example, the predefined period of time after which the vehicle is confirmed to be parked is 5 additional minutes (e.g., or 10 minutes since the vehicle was last detected to be moved from which the deemed parked state arose), however, the predefined period of time may be configurable. For example, system 200 uses parking assessment module 235 to assess whether one or more managed vehicles are parked at predetermined intervals, such as in connection with performing active monitoring of the managed vehicle(s). Parking assessment module 235 can monitor a plurality of vehicles (e.g., an entire fleet or a subset of the fleet) to detect when one or more of such vehicles are stopped/parked. Alternatively, parking assessment module 235 can monitor a vehicle individually, such as based on a selection from a fleet manager to monitor or query a status of the vehicle. In some embodiments, parking assessment module 235 determines whether a vehicle is parked based at least in part on whether the vehicle is moving (e.g., whether a vehicle has moved a threshold distance over threshold time, such as whether the vehicle moved more than 2 m over 30 seconds). As an example, a vehicle is deemed parked if the vehicle has remained stationary for a threshold period of time or based on a composite score computed based on whether the vehicle is stationary and/or one or more other heuristics that are indicative of whether the vehicle is parked. In some embodiments, parking assessment module 235 uses various other context information in connection with determining whether the vehicle is parked.) 1. and updating the unified map and at least one of the plurality of historical moving traces using the at least one labelled object box, wherein the at least one labelled object box is in the static state or the moving state. / 10. and updating the unified map and at least one of the plurality of historical moving traces using the at least one labelled object box, wherein the at least one labelled object box is in the static state or the moving state. (Lambert: column 12 lines 40-67, column 13 lines 1-40; In response to receiving a request for a unified map (e.g., a newly generated unified map, an update to a unified map, etc.) 
or to perform a monitoring/assessment of whether a vehicle is safely parked, fleet management service 110 obtains the applicable source data from data store 120, a managed vehicle (e.g., first managed vehicle system 140, second managed vehicle system 150, and/or third managed vehicle system 160), and/or a third party service(s)/system(s) and generates/update the unified map. As an example, the request for the unified map may include, or be associated with, a particular geographic area. As another example, the geographic area is determined based on the one or more managed vehicles for which the unified map is to be generated/updated or that are to be managed or monitored to ensure safe parking. Fleet management service 110 uses the geographic area to obtain the applicable/relevant source data. For example, fleet management service 110 obtains weather data for the geographic area, traffic data for roads within the geographic area, or roads corresponding to a predefined route for a managed vehicle(s), map and road data (e.g., road classifications, road dimensions, number of lanes, posted speed limit, etc.), etc. Fleet management service 110 analyzes the source data to determine locations of a set of managed vehicles, determine whether the managed vehicle(s) are stopped, determine whether the managed vehicle(s) are parked, and determine whether a parked managed vehicle is a sitting duck. Fleet management service 110 may generate a unified map in connection with monitoring a fleet or determining whether a parked managed vehicle is a sitting duck. In response to receiving the source data, fleet management service 110 generates the unified map, including generating one or more layers for the unified map. For example, fleet management service 110 annotates a standard geographic/road map with information pertaining to one or more of identified driving conditions, parking conditions, or other conditions that may impact a parked vehicle (e.g., a flood warning, a flood zone), indicators for predefined permitted parking areas, predefined restricted parking areas, or exclusion zones, hazardous parking conditions (e.g., indicators that vehicles that are unsafely parked are sitting ducks), etc. The annotating of a standard geographic/road map includes generating indicators for the driving conditions or various other conditions or information and configuring one or more layers to include such indicators. The one or more layers for the unified map may be toggled on/off and when toggled on (e.g., to be displayed), the one or more layers are provided as an overlay to the standard geographic/road map. In some embodiments, the standard geographic/road map is predefined (e.g., stored in data store 120) or a service provider for the geographic standard geographic/road map is predefined. Fleet management service 110 uses data layer 112 to obtain the source data to be used in connection with generating/updating a unified map or implementing an active measure. In response to fleet management service 110 determining to generate/update the unified map (e.g., in response to receiving a request from a fleet manager via fleet control layer 114), fleet management service instructs/causes data layer 112 to obtain the applicable source data. Data layer 112 can obtain the applicable source data by querying data store 120, a third-party service/data source, and/or a managed vehicle (e.g., first managed vehicle system 140, second managed vehicle system 150, and/or third managed vehicle system 160). 
Fleet management service 110 also uses data layer 112 to generate the unified map, such as based on parameters provided by fleet control layer 114 (e.g., parameters that are predefined for a fleet or user or that are received from a fleet manager such as via administrator system 130).) It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Najafi for the system and method for modeling and visualization of image data using a kinematic model with the mapping algorithm of Lambert, as they are both directed towards leveraging data points in image visualization. The determination of obviousness is predicated upon the following findings: One skilled in the art would have been motivated to modify Najafi in order to improve the overall data point visualization and mapping algorithm to leverage additional historical and contextual information and bounding boxes for target data. Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface and programming techniques, without changing a “fundamental” operating principle of Najafi, while the teaching of Lambert continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result of leveraging historical and contextual information in the overall mapping process. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question. Consider Claims 2 and 11. The combination of Najafi and Lambert teaches: 2. The object tracking method according to claim 2, wherein updating the heat map comprises: generating a current object distribution map according to the at least one labelled object box; and utilizing the current object distribution map and a plurality of historical object distribution maps to perform average calculation to update the heat map; wherein each of the plurality of historical object distribution maps corresponds to a corresponding historical image of the plurality of historical images, and each of the plurality of historical object distribution maps comprises at least one historical object box extracted from the corresponding historical image. / 11. The object tracking device according to claim 10, wherein the processor is configured to: generate a current object distribution map according to the at least one labelled object box; and utilize the current object distribution map and a plurality of historical object distribution maps to perform average calculation to update the heat map; wherein each of the plurality of historical object distribution maps corresponds to a corresponding historical image of the plurality of historical images, and each of the plurality of historical object distribution maps comprises at least one historical object box extracted from the corresponding historical image. (Najafi: [0069] FIG. 3 is a conceptual diagram illustrating an example algorithmic pipeline for making side predictions and key point location predictions, in accordance with some embodiments. Each of the side prediction engine module 148 and the key point location prediction engine module 150 may receive a time series of pressure readings 310 (e.g., a time series of pressure data) and perform various analyses on the pressure readings 310. 
The analyses are related to predicting a time series of side labels 320 and predicting a time series of key point locations 340 at several instances in time during a measurement period, respectively. [0075] FIG. 4 is a conceptual diagram illustrating an example computer-implemented process for generating a virtual figure, in accordance with some embodiments. The time series of side labels 320 and the time series of key point locations 340 may be provided as input to the kinematic engine module 152. The kinematic engine module 152 determines movements of one or more of the head representation, the torso representation, and the limb representation based on changes in the side labels and key point locations between timestamps. The kinematic engine module 152 is configured to iteratively determine how a user moves from one pose (which can be described by the key point locations) at a first timestamp to another pose at a next timestamp. [0080] FIG. 5I illustrates an example graphical user interface used to track pressure exposure over time, in accordance with some embodiments. In the graphical user interface, a heat map 590 illustrating the magnitude of the pressure data measured at various locations on the user's body (e.g., 0.447 at right ankle, 0.264 at left elbow). In some embodiments, the exposure history of pressure may be represented using a virtual figure 595 to highlight locations of the user's body that have a high risk of developing pressure injuries. Although not illustrated in FIG. 5I, the virtual figure 595 may be a three-dimensional model, and a user may interact with the virtual FIG. 595 by rotating the virtual figure 595 , zooming in/out to view portions of the virtual FIG. 595 in more detail. In some embodiments, the heat map 590 of the pressure data may be projected onto the virtual figure 595 and change in real-time as the pressure data updates with user's movements or with previously recorded pressure data. Lambert: column 12 lines 40-67, column 13 lines 1-40; In response to receiving a request for a unified map (e.g., a newly generated unified map, an update to a unified map, etc.) or to perform a monitoring/assessment of whether a vehicle is safely parked, fleet management service 110 obtains the applicable source data from data store 120, a managed vehicle (e.g., first managed vehicle system 140, second managed vehicle system 150, and/or third managed vehicle system 160), and/or a third party service(s)/system(s) and generates/update the unified map. As an example, the request for the unified map may include, or be associated with, a particular geographic area. As another example, the geographic area is determined based on the one or more managed vehicles for whic
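For readers less familiar with the claimed technology: independent claims 1 and 10 recite associating predicted object boxes of historical objects with current bounding boxes by comparing a similarity value against a threshold, and claim 2 recites updating the heat map by averaging a current object distribution map with historical distribution maps. The sketch below illustrates those two steps only; the intersection-over-union (IoU) similarity metric, the greedy matching, the box rasterization, and all names are illustrative assumptions, not the applicant's actual implementation or the examiner's characterization.

```python
# Illustrative sketch of the box-association (claims 1/10) and heat-map
# averaging (claim 2) limitations. IoU is assumed as the "similarity value";
# the claims do not name a specific metric.
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(predicted_boxes, current_boxes, threshold=0.5):
    """Match each historical object's predicted box to at most one current
    detection whose similarity is higher than the threshold, producing
    labelled object boxes (greedy matching for simplicity)."""
    labelled, used = [], set()
    for hist_id, p_box in predicted_boxes.items():   # {hist_id: box}
        best_j, best_sim = None, threshold
        for j, c_box in enumerate(current_boxes):
            if j in used:
                continue
            sim = iou(p_box, c_box)
            if sim > best_sim:
                best_j, best_sim = j, sim
        if best_j is not None:
            used.add(best_j)
            labelled.append((hist_id, current_boxes[best_j]))
    return labelled

def update_heat_map(historical_maps, labelled_boxes, shape):
    """Claim 2 style update: rasterize the labelled boxes into a current
    object distribution map, then average it with the historical maps."""
    current = np.zeros(shape, dtype=np.float32)
    for _, (x1, y1, x2, y2) in labelled_boxes:
        current[int(y1):int(y2), int(x1):int(x2)] += 1.0
    averaged = np.mean(historical_maps + [current], axis=0)
    return averaged, current
```

In claim terms, each tuple returned by `associate` corresponds to a labelled object box, and the averaged array plays the role of the updated heat map; the static/moving determination recited in claim 1 is not shown.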

Prosecution Timeline

Oct 31, 2023
Application Filed
Oct 18, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586249
PROCESSING APPARATUS, PROCESSING METHOD, AND STORAGE MEDIUM FOR CALIBRATING AN IMAGE CAPTURE APPARATUS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586354
TRAINING METHOD, APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR A MACHINE LEARNING MODEL
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12573083
COMPUTER-READABLE RECORDING MEDIUM STORING OBJECT DETECTION PROGRAM, DEVICE, AND MACHINE LEARNING MODEL GENERATION METHOD OF TRAINING OBJECT DETECTION MODEL TO DETECT CATEGORY AND POSITION OF OBJECT
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12548297
IMAGE PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT BASED ON FEATURE AND DISTRIBUTION CORRELATION
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12524504
METHOD AND DATA PROCESSING SYSTEM FOR PROVIDING EXPLANATORY RADIOMICS-RELATED INFORMATION
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed during prosecution to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 99% (+17.9%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 868 resolved cases by this examiner. Grant probability derived from career allow rate.
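As a rough illustration of where these figures come from, the sketch below recomputes the career allow rate from the counts shown above and derives an implied without-interview rate. The rounding convention and the reading of the interview lift as a with/without gap are assumptions; the page does not state its formulas.

```python
# Minimal sanity check on the headline figures, using only numbers shown on
# this page. Formulas are assumptions, not the tool's documented methodology.
granted, resolved = 743, 868

career_allow_rate = granted / resolved           # ~0.856, displayed as 86%
print(f"Career allow rate: {career_allow_rate:.1%}")

# One plausible reading: 99% is the allow rate among resolved cases that had
# an examiner interview, and the +17.9% "lift" is the gap between cases with
# and without an interview.
with_interview = 0.99
interview_lift = 0.179
without_interview = with_interview - interview_lift   # ~81.1% (assumption)
print(f"Implied allow rate without interview: {without_interview:.1%}")
```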
