Prosecution Insights
Last updated: April 19, 2026
Application No. 18/216,189

AUTOMATED POSITIONING OF INTERNET OF THINGS (IOT) SENSORS IN A WORKSPACE FOR EFFECTIVE PERFORMANCE MONITORING

Status: Non-Final OA (§103)
Filed: Jun 29, 2023
Examiner: GOLDBERG, IVAN R
Art Unit: 3619
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)

Grant Probability: 35% (At Risk)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 4y 8m
Grant Probability With Interview: 72%

Examiner Intelligence

Career Allow Rate: 35% (grants only 35% of cases; 128 granted / 365 resolved; -16.9% vs TC avg)
Interview Lift: +36.9% (grant rate of resolved cases with vs. without an interview)
Avg Prosecution: 4y 8m (typical timeline); 57 applications currently pending
Career History: 422 total applications across all art units
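These card metrics are simple ratios over the examiner's resolved cases, and the figures above can be cross-checked in a few lines. A minimal sketch (Python; the ~52% Tech Center average is backed out of the stated -16.9% delta, and the lift definition used here, with-interview grant rate minus the career rate, is an assumption, not the report's published methodology):

```python
# Illustrative sketch of how the Examiner Intelligence figures above can be
# reproduced. The TC average and the "lift" definition are assumptions
# inferred from the published deltas, not documented methodology.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gap between cases resolved with and without an interview."""
    return rate_with - rate_without

career = allow_rate(128, 365)        # -> ~35.1%
tc_avg = 52.0                        # assumed; implied by the -16.9% delta
delta_vs_tc = career - tc_avg        # -> -16.9 percentage points
lift = interview_lift(72.0, career)  # -> +36.9 percentage points

print(f"Career allow rate: {career:.1f}% ({delta_vs_tc:+.1f} vs TC avg)")
print(f"Interview lift:    {lift:+.1f} percentage points")
```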

Statute-Specific Performance

§101: 27.7% (-12.3% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102:  3.4% (-36.6% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)

Deltas are measured against an estimated Tech Center average; based on career data from 365 resolved cases.
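The statute-specific comparison follows the same arithmetic. A short sketch, assuming each percentage is the examiner's allow rate among resolved cases that received a rejection under that statute (the report does not state this definition), with each Tech Center average backed out of the published delta:

```python
# Sketch of the statute-specific table above. The interpretation of the
# rates is an assumption; only the printed numbers come from the report.
rates  = {"101": 27.7, "103": 40.4, "102": 3.4, "112": 20.7}      # examiner, %
deltas = {"101": -12.3, "103": 0.4, "102": -36.6, "112": -19.3}   # vs TC avg

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]  # implied Tech Center average
    print(f"§{statute}: {rate:4.1f}% (TC avg ~{tc_avg:.1f}%, {deltas[statute]:+.1f} pts)")
```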

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

The following is a Non-Final Office action. In response to Examiner’s Non-Final Rejection of 4/8/25, Applicant, on 7/1/25, amended claims. Claims 1-20 are pending in this application and have been rejected below.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/3/2025 (claims) has been entered by way of the RCE of 11/21/2025.

Response to Amendment

Applicant’s amendments are acknowledged. The 35 USC 101 rejections are withdrawn: the claims, when considered as a whole, are not directed to an abstract idea, recite meaningful limitations (MPEP 2106.05(e)), and are considered eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4 and 6-20 are rejected under 35 U.S.C. 103 as being unpatentable over Man (US 2019/0236370), in view of Iyengar (US 2021/0383532) and Liu (US 2018/0063422).

Concerning claim 1, Man discloses: A computer-implemented method (Man – see par 68 - Computer 110 may be configured to execute a plurality of processes designed to monitor activities, machines, and persons. Computer 110 may also be configured to generate output which may include activity records, activity metrics, and activity-based alerts for an industrial site. Output generated by computer 110 may be in the form of warnings, alerts or reports), comprising: analyzing, by a processor set, historic data of plural activities (Man – see par 54 - Data input devices may be configured to collect information about persons, objects and activities taking place in an industrial site.
For example, video cameras 102A, 102B, may be configured or programmed to record video segments depicting persons, trucks, and cranes present in an industrial site, store the recorded video segments, and transmit the recorded video segments to computer 110. Digital cameras 104A, 104B may be configured or programmed to capture digital images) using a pattern recognition machine learning algorithm to determine patterns and associations in the historic data associated with an activity including a plurality of historical tasks included in the activity (Man –see par 55 - digital sensors 106A, 106B may be configured or programmed to detect events indicating entering or leaving the industrial site. The sensors may also associate the events with corresponding timestamps, store the associations between the detected events and the timestamps, and transmit the associations to computer 110. see par 71 - Computer 110 may further include a machine learning processor 110C configured to execute a machine learning program, algorithm, or process. The machine learning process may be executed using data provided by any of the data input devices 102A-B, 104A-B, and 106A-B. The machine learning process may be executed to enhance and improve the content of the received data. For example, machine learning processor 110C may be configured to process the data provided by the video cameras, digital cameras and sensors, and generate output in form of activity records, activity metrics, and activity-based alerts. see par 129 - In an embodiment, a decision support system is configured to detect persons and equipment by employing a machine learning system. The machine learning system may be trained using a sequence of classifiers to label frames of video streams according to the presence or absence of an object. see par 177 - a machine learning model that has been trained on historical data may be used. For example, an equipment-detection algorithm and a regression-based computer vision algorithm may be applied to the collected data. see also Iyengar – see par 60 - as illustrated in FIG. 20, the system may permit a user to create tags and apply a corresponding tag to specific videos or other metric time. The tags may be used to assist in machine learning to identify metrics, issues, and causes in future iterations. The system may be configured to automatically tag the videos by learning from tags associated by users to videos. see par 88 - The system and method may determine a root cause of an inefficiency detected from the analyzed received data. The root cause may be identified by a tag associated with a video section of the one or more data sources. The event may also be identified by tagging associated video clips from the one or more data sources corresponding to the event. The system may also learn from prior tags and determine that a missing resource receive a specific tag and then suggest a tag for video segments having similar states, conditions, and/or attributes.) and a historical shape and a historical size of historical physical boundaries corresponding to a historical object in which the historical tasks are performed (Man – see par 141 - cameras are positioned to capture images within a polygon that is spaced apart by a specified distance 1030 outside of industrial site 1010. The cameras may be positioned to capture images of persons or pieces of equipment passing through a virtual boundary that surrounds industrial site 1010. The virtual boundary may be either outside or inside industrial site 1010. 
An outside boundary may be plotted, for example, one meter outside of industrial site 1010. In this arrangement, a collective field of view of the cameras defines a virtual fence 1000 or a virtual boundary around industrial site 1010, and cameras capture images of persons and equipment crossing virtual fence 1000 to either enter industrial site 1010 or leave industrial site 1010. see also Iyengar –see par 44 - The computed Process Metrics and other meta information may be captured in a data base for the purpose of displaying and analyzing trends, compute statistics etc. The compute statics block may be used to measure and compute desired statistical trends or other metrics. For example, data from the database may be extracted to measure statistical trends and other metrics like probability density functions of time to completion of a specific activity, percentage of time a resource (e.g. worker) spends at a working location, heat maps of movement resources, etc. see par 82 - Selection of the multiple video segments may be based on the analyzed data and/or in an identity of an event or combinations thereof. For example, if the analyzed data indicates a resource is not identified in a camera image, another camera image that has analyzed data indicating the resource is in the other camera image may be simultaneously displayed to a user to indicate that a resource is out of an expected location and to display where the resource actually is and how the resource is actually being utilized. As another example, an event may be determined such as a transition state, e.g. reloading of a machine, which may implicate multiple camera views to fully review and observe the actions corresponding to the event. Therefore, the user interface may include more than one video segment from one, two, or more cameras based on an identity of the event, meta information, user inputs, processed outputs from one or more signal sources, the states, predictions, aggregated data, areas of interest, analyzed data, object detection, metrics, or combinations thereof see also Liu – see par 20 - the distributed camera devices 14 can be physically deployed (e.g., physically mounted or positioned) at a fixed or movable locations for monitoring at respective detection zones 26 of a detection region 28; for example, the distributed camera device 14a can be physically deployed for monitoring the detection zone 26a, the distributed camera device 14b can be physically deployed for monitoring the detection zone 26b, etc., and the distributed camera device 14n can be physically deployed for monitoring the detection zone 26n. Depending on deployment, the distributed camera devices 14 can be physically deployed for providing overlapping and/or non-overlapping detection zones 26); determining, by the processor set, a monitoring boundary including a shape and a size corresponding to an object for the activity based on the analyzing of the historic data (Man – see par 41 - In an embodiment, the cameras are positioned to capture images within a polygon that is spaced apart by a specified distance 1030 outside of industrial site 1010. The cameras may be positioned to capture images of persons or pieces of equipment passing through a virtual boundary that surrounds industrial site 1010. The virtual boundary may be either outside or inside industrial site 1010. An outside boundary may be plotted, for example, one meter outside of industrial site 1010. 
In this arrangement, a collective field of view of the cameras defines a virtual fence 1000 or a virtual boundary around industrial site 1010, and cameras capture images of persons; see par 145 - One or more cameras may be installed at corners of industrial site 1010/1110, or along the sides of industrial site 1010/1110. For example, two cameras may be installed at a particular location along a virtual boundary surrounding industrial site 1010/1110: one camera may be installed on a pole one meter above the ground, and another camera may be installed on the same pole two meters above the ground. For example, one camera may be configured to monitor trucks entering and leaving the site, while another camera may be configured to monitor the workers. See par 146 - the cameras are positioned along industrial fence 1100 that is inside industrial site 1110 and separate from industrial site 1110 by a specified distance 1130 from industrial site 1110. In this arrangement, the collective field of view of the cameras defines virtual fence 1100 or a virtual boundary inside industrial site 1110 and cameras capture images of persons and pieces of equipment entering or leaving industrial site 1110 as they cross industrial fence 1100). It is unclear if Man is using historical data to set the boundaries, such as one meter off (See par 41) or the “distance” from the site for positioning of cameras (See par 146). Iyengar discloses: determining, by the processor set, a monitoring boundary including a shape and a size corresponding to an object for the activity “based on the analyzing of the historic data” (Iyengar discloses entire limitation– see par 18 - In an exemplary embodiment, the system according to embodiments described herein include a first camera positioned in a high level location. High level location is understood to include a large scale view of an area or part of the process under observation. The low level location may permit closer perspective with greater detail of a subarea or object of observation. The low level location may be observed with a camera and/or with one or more other sensors; see par 27 - As seen in FIG. 1A, exemplary embodiments of the system described herein include an automated, continuous monitoring and data capture solution comprising one or more cameras 112. The cameras may define a field of view that captures one or more branches of a process path. For example, a camera may observe one or more machines, personnel, stations, supplies, etc. The system may also include one or more focused cameras on a narrower field of view, such as a process step. see par 74 - the one or more data sources includes at least one data stream of sequential images and the analyzing the received data comprises defining a state based on an image of the sequential images. The state based determination may include determining a location of an object within a region of the image. The analysis of the data may also include using the state to predict an area of interest in a second image in the sequence of images, and the second image occurs later in time than the image). Man and Iyengar disclose: determining, by the processor set, a type of information for monitoring the activity based on the analyzing (Man – see par 94 - Functions for outputs may also include generating a set of performance metrics. Performance metrics may be represented using key performance indicators (“KPIs”). A KPI may be used to measure, for example, efficiency of an industrial process in an industrial site over time. 
A KPI may be specified in terms of an amount of industrial material moved to and/or from an industrial site. see par 97 - Output functions may also include specifications for processing the received data to determine accuracy measures for an industrial site. The specification may provide for detecting objects depicted in, for example, images provided for an industrial site. This may include detecting depictions of persons, materials, and machines in the images provided by video cameras and digital cameras to a decision support system. see also Iyengar - see par 17 - For example, in the case of Internet of Things (IoT) sensors, we may need one type of sensor to monitor pressure changes and another type of sensor to monitor temperature changes. see par 53 - As illustrated in FIGS. 18A-18B, exemplary embodiments may automatically identify epochs of critical events or any event in the process (such as those identified with a metric above or below a threshold, or when processing time exceeds a threshold). An actual camera feed may show the perspective view of a process area in a specific band or detection method (such as audio, visible light, temperature, etc.). As illustrated, the camera feed is provided on the left of the image, and a list of critical events are identified sequentially on the right of the image. see par 75 the system may use the states, predictions, aggregated data, areas of interest, analyzed data, object detection, among other analytic tools to keep track a metric corresponding to the process. Liu – see par 28 - The requirements derivation module 34 can determine the video recognition requirements 60 using, for example, inputs from any one of a context sensing module 56 that detects and processes sensor data from physical sensors in the distributed camera system 10 (e.g., interpreting temperature sensor data as fire alarm data, interpreting lighting changes in image data as person entering a room, motion-sensed data as beginning a manufacturing process, etc.), and/or a user input interface 58 that can receive inputs from the client device 20 for a video recognition application requested by the administrator.); generating, by the processor set, a recommendation of a sensor to capture the type of information (Man - see par 161 - a decision support system is configured to recognize one or more types of events that the system needs to detect or track. The types of the events may be defined by the type of sensors used to provide the event-and-sensor specific measurements). generating, by the processor set, a recommendation of a location of the sensor in the monitoring boundary (Man – See par 141 - In an embodiment, the cameras are positioned to capture images within a polygon that is spaced apart by a specified distance 1030 outside of industrial site 1010. The cameras may be positioned to capture images of persons or pieces of equipment passing through a virtual boundary that surrounds industrial site 1010. see par 145 - One or more cameras may be installed at corners of industrial site 1010/1110, or along the sides of industrial site 1010/1110. For example, two cameras may be installed at a particular location along a virtual boundary surrounding industrial site 1010/1110: one camera may be installed on a pole one meter above the ground, and another camera may be installed on the same pole two meters above the ground; see par 159 - decision support system may be configured to track the industrial equipment). 
Man discloses having cameras and additional cameras “installed” at particular locations (See par 141, 145, 159) and a decision support system to recognize events the system needs to track, and types of sensors to provide measurements (See par 159, 161). Iyengar discloses having cameras with a field of view of a process path with machines, personnel, stations, supplies, etc., and one or more additional sensors (see par 112). Liu discloses: deploying, by the processor set, the sensor to the location in the monitoring boundary using a robotic system (Liu – see par 20 - The distributed camera devices 14 can be physically deployed (e.g., physically mounted or positioned) at a fixed or movable locations for monitoring at respective detection zones 26 of a detection region 28; for example, the distributed camera device 14a can be physically deployed for monitoring the detection zone 26a, the distributed camera device 14b can be physically deployed for monitoring the detection zone 26b, etc., and the distributed camera device 14n can be physically deployed for monitoring the detection zone 26n. The distributed camera devices 14 can be physically mounted at fixed locations in a fixed detection region 28, for example within a manufacturing facility. The distributed camera devices 14 also can be portable, for example carried by one or more users or mounted on movable vehicles (e.g., unmanned aerial vehicles, i.e., “drones”) that execute coordinated movement (e.g., “swarming”) to provide monitoring of a remote (or movable) detection region 28 that is reachable by the movable vehicles).

Man, Iyengar, and Liu are analogous art as they are directed to positioning sensors (see Man Abstract, Iyengar Abstract, par 51-57; Liu Abstract).

1) Man discloses setting the boundaries, such as one meter off (See par 41) or the “distance” from the site for positioning of cameras (See par 146), and also having a virtual boundary, outside boundary positioning for a field of view of cameras (See par 141). Iyengar improves upon Man by disclosing cameras focused on a narrower field of view, such as a process step (See par 27), and predicting an area of interest in a second image based on a sequence of images (See par 74). One of ordinary skill in the art would be motivated to further include analyzing where an area of interest is in images for process steps to efficiently improve upon the boundary camera positioning in Man.

2) Man discloses having cameras and additional cameras “installed” at particular locations (See par 141, 145, 159) and a decision support system to recognize events the system needs to track, and types of sensors to provide measurements (See par 159, 161). Iyengar discloses having cameras with a field of view of a process path with machines, personnel, stations, supplies, etc., and one or more additional sensors (see par 112). Liu improves upon Man and Iyengar by disclosing drones that provide coordinated monitoring at detection zones 26 of a detection region 28 (see par 20). One of ordinary skill in the art would be motivated to further include using a drone for deploying sensors to efficiently improve upon the sensors installed in Man and the cameras installed with a field of view of a process path in Iyengar.

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the monitoring of activities in industrial sites in Man to further include cameras focused on a narrower field of view, such as a process step, and predicting an area of interest in a second image based on a sequence of images, as disclosed in Iyengar (See par 27, 74), and to further include a drone for coordinated monitoring at detection zones of a detection region for a manufacturing process/facility, as disclosed in Liu (See par 20, 28), since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success.

Concerning independent claim 11, Man, Iyengar, and Liu disclose: A computer program product comprising one or more computer readable storage media having program instructions collectively stored on the one or more computer readable storage media, the program instructions executable to (Man - see par 68 - Computer 110 may be configured to execute a plurality of processes designed to monitor activities, machines, and persons; See par 193 - Computer system 1100 includes one or more units of memory 1106, such as a main memory, which is coupled to I/O subsystem 1102 for electronically digitally storing data and instructions to be executed by processor 1104. Memory 1106 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1104): The remaining limitations are similar to claim 1 above. Claim 11 is rejected for the same reasons. It would be obvious to combine Man, Iyengar, and Liu for the same reasons as claim 1.

Concerning independent claim 16, Man, Iyengar, and Liu disclose: A system comprising: a processor set (Man – see par 52 - computer system comprises components that are implemented at least partially in hardware, such as one or more hardware processors executing program instructions stored in one or more memories as described herein.), one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions executable to: (Man – see par 52 - computer system comprises components that are implemented at least partially in hardware, such as one or more hardware processors executing program instructions stored in one or more memories as described herein; See par 193 - Computer system 1100 includes one or more units of memory 1106, such as a main memory, which is coupled to I/O subsystem 1102 for electronically digitally storing data and instructions to be executed by processor 1104. Memory 1106 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1104): The remaining limitations are similar to claim 1 above. Claim 16 is rejected for the same reasons. It would be obvious to combine Man, Iyengar, and Liu for the same reasons as claims 1 and 11.
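As claim 1 is characterized in this action, the claimed method is a pipeline: analyze historic activity data with a pattern-recognition model, derive a monitoring boundary (shape and size), determine the type of information to monitor, recommend a sensor and a location for it, and hand the placement to a robotic system. A toy sketch of that pipeline, for orientation only; every name below is hypothetical, and nothing in it is taken from Man, Iyengar, or Liu:

```python
# Illustrative sketch only: a toy rendering of the claim 1 pipeline as the
# Office Action characterizes it. All identifiers are hypothetical.
from dataclasses import dataclass

@dataclass
class SensorPlan:
    sensor_type: str                              # e.g. "camera", "temperature"
    location: tuple[float, float]
    boundary: tuple[float, float, float, float]   # (xmin, ymin, xmax, ymax)

def monitoring_boundary(task_xy: list[tuple[float, float]],
                        margin: float = 1.0) -> tuple[float, float, float, float]:
    """Bounding box around historic task locations, inflated by a margin
    (cf. Man's virtual fence plotted e.g. one meter outside the site)."""
    xs, ys = zip(*task_xy)
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

def recommend_sensor(info_type: str) -> str:
    """Map the information type to a sensor type (cf. Iyengar par 17: one
    sensor type for pressure changes, another for temperature changes)."""
    return {"motion": "camera", "pressure": "pressure", "heat": "temperature"}.get(info_type, "camera")

def plan_deployment(task_xy: list[tuple[float, float]], info_type: str) -> SensorPlan:
    box = monitoring_boundary(task_xy)
    # Naive placement at a boundary corner; a real system would optimize
    # coverage of the activity area before dispatching a drone (cf. Liu par 20).
    return SensorPlan(recommend_sensor(info_type), (box[0], box[1]), box)

print(plan_deployment([(2.0, 3.0), (5.5, 4.0), (4.0, 7.5)], "motion"))
```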
Concerning claim 2, Man, Iyengar, and Liu disclose: The computer-implemented method of claim 1, further comprising determining a time to use the sensor to collect the type of information at the location in the monitoring boundary (Man – see par 55 - digital sensors 106A, 106B may be configured or programmed to detect events indicating entering or leaving the industrial site. The sensors may also associate the events with corresponding timestamps, store the associations between the detected events and the timestamps; see par 176 - to estimate the amount of manpower spent, a detection algorithm may be used to obtain sample counts of workers detected at different times of the day and by different cameras.) To any extent Man does not disclose, Iyengar discloses: The computer-implemented method of claim 1, further comprising determining a time to use the sensor to collect the type of information at the location in the monitoring boundary (Iyengar - see par 41 - FIG. 10 provides an exemplary sequence based neural net model to compute process metrics according to embodiments described herein. The neural net model to compute process metrics may be used in place of the block diagram of FIG. 2 or in combination therewith. Time sequenced signals from various sources may be fed as input to an Encoder Block (optionally) along with meta information like location of the sensor, process being monitored etc. to the model. The encoder processes the features across a certain time range and generates decoder state independent generic features; see par 76 - . For example, an area of an image not of interested may be reduced in data resolution, while areas of interest may be retained and/or increased in data resolution. For periods when a state is expected to remain static, the time fidelity of the data may be reduced, in that fewer data points/images are observed or analyzed over a given period of time. In other words, the sample rate may be reduced.). Man and Iyengar disclose: wherein the pattern recognition machine learning algorithm to determine patterns and associations in the historic data associated with the activity further includes historical key performance indicators (KPIs) associated with the historical tasks included in the activity and types of data to monitor the historical KPIs (Man – see par 47 - a decision support and data processing method for monitoring activities on industrial sites is configured to employ a machine learning approach to process the data received from a distributed network of sensors. Machine learning system may be programmed to process the collected data, and generate outputs that includes activity records, activity metrics, see par 94 - Functions for outputs may also include generating a set of performance metrics. Performance metrics may be represented using key performance indicators (“KPIs”). A KPI may be used to measure, for example, efficiency of an industrial process in an industrial site over time; See also Iyengar – see par 44 - FIG. 3 illustrates the exemplary analytics according to embodiments described herein to generate the features and benefits described herein. The computed Process Metrics and other meta information may be captured in a data base for the purpose of displaying and analyzing trends, compute statistics etc. The compute statics block may be used to measure and compute desired statistical trends or other metrics. 
For example, data from the database may be extracted to measure statistical trends and other metrics like probability density functions of time to completion of a specific activity, percentage of time a resource (e.g. worker) spends at a working location, heat maps of movement resources, etc; see par 75 - system may use the states, predictions, aggregated data, areas of interest, analyzed data, object detection, among other analytic tools to keep track a metric corresponding to the process ). It would be obvious to combine Man and Iyengar for the same reasons as claim 1. In addition, Man discloses associating events with times (See par 55) and measuring performance metrics (See par 94). Iyengar improves upon Man by disclosing computing process metrics for time ranges (See par 41) where sample rate of images may be reduced if state of process with area of image is not of interest (See par 75-76). Concerning claim 3, Man discloses associating events with timestamps to detect different times of the day from different cameras (See par 55, 176). Iyengar and Liu disclose The computer-implemented method of claim 2, wherein the determining the time to use the sensor comprises: determining a time to start data collection by the sensor in advance of the activity (Iyengar – see par 74 - Other state based determinations may include, for example, a condition of a resources, such as a machine, part, component, inventory, etc. The condition may include whether a resource is in use, in transition, broken, out of use, etc. The analysis of the data may also include using the state to predict an area of interest in a second image in the sequence of images, and the second image occurs later in time than the image. The prediction may be for example that a resource (such as a part or personnel) should be in a desired location after the completion of an action determined by the state. Liu – See par 51 - For example, the processor circuit 76 executing the requirements derivation module 34 can determine the video recognition requirements 60 in response to receiving a user request, via the input interface 58, from an administrator using the client device 20 that specifies a particular video recognition application for immediate deployment or at a future specified time or event (e.g., start real-time intruder detection at end of evening production run); the processor circuit 76 executing the requirements derivation module 34 also can detect the current requirements in response to the context sensing module 56 detecting an identified event from a sensor device, for example from a detected machine sensor indicating a start of a production run, etc.); “determining a time to stop data collection by the sensor after completion of the activity” (Iyengar – see par 76 - For example, an area of an image not of interested may be reduced in data resolution, while areas of interest may be retained and/or increased in data resolution. For periods when a state is expected to remain static, the time fidelity of the data may be reduced, in that fewer data points/images are observed or analyzed over a given period of time. In other words, the sample rate may be reduced. 
see also Liu – See par 51 - For example, the processor circuit 76 executing the requirements derivation module 34 can determine the video recognition requirements 60 in response to receiving a user request, via the input interface 58, from an administrator using the client device 20 that specifies a particular video recognition application for immediate deployment or at a future specified time or event (e.g., start real-time intruder detection at end of evening production run); the processor circuit 76 executing the requirements derivation module 34 also can detect the current requirements in response to the context sensing module 56 detecting an identified event from a sensor device, for example from a detected machine sensor indicating a start of a production run, etc.). It would have been obvious to combine Man, Iyengar, and Liu for the same reasons as claims 1 and 2 above. In addition, Man discloses associating events with times (See par 55) and measuring performance metrics (See par 94). Liu improves upon Man and Iyengar by disclosing start and end events of a production run for video recognition. One of ordinary skill in the art would be motivated to further include start and end times for analysis to efficiently improve upon the events associated with time in Man.

Concerning claims 12 and 17, Man, Iyengar, and Liu disclose: The computer program product of claim 11, wherein the program instructions are executable to determine a time to use the sensor to collect the type of information at the location in the monitoring boundary, the time to use the sensor comprising a time to start data collection by the sensor in advance of the activity (Man – see par 141 - cameras are positioned to capture images within a polygon that is spaced apart by a specified distance 1030 outside of industrial site 1010. The cameras may be positioned to capture images of persons or pieces of equipment passing through a virtual boundary that surrounds industrial site 1010. The virtual boundary may be either outside or inside industrial site 1010. An outside boundary may be plotted, for example, one meter outside of industrial site 1010; for “time to use the sensor,” same as claims 2 and 3 above – Man – see par 55 - digital sensors 106A, 106B may be configured or programmed to detect events indicating entering or leaving the industrial site. The sensors may also associate the events with corresponding timestamps; Iyengar – see par 74, 76; Liu – See par 51) and a time to stop data collection by the sensor after completion of the activity (same as claim 3 above – Iyengar par 76; Liu par 51). It would have been obvious to combine Man, Iyengar, and Liu for the same reasons as claims 2 and 3 above.

Concerning claims 4, 13, and 18, Man, Iyengar, and Liu disclose: The computer-implemented method of claim 1, further comprising deploying the sensor to the location in the monitoring boundary (Man – see par 145 - One or more cameras may be installed at corners of industrial site 1010/1110, or along the sides of industrial site 1010/1110. For example, two cameras may be installed at a particular location along a virtual boundary surrounding industrial site 1010/1110: one camera may be installed on a pole one meter above the ground, and another camera may be installed on the same pole two meters above the ground. For example, one camera may be configured to monitor trucks entering and leaving the site, while another camera may be configured to monitor the workers.
see par 181 - decision support system is configured to record data from a plurality of sensors strategically installed throughout an industrial site.), wherein the pattern recognition machine learning algorithm to determine patterns and associations in the historic data associated (Man – see par 71 - machine learning processor 110C may be configured to process the data provided by the video cameras, digital cameras and sensors, and generate output in form of activity records, activity metrics, and activity-based alerts.) with the activity further includes historical locations of sensors in the historical physical boundaries to collect data to monitor historical key performance indicators (KPIs) (Man – see par 80 - an approach for monitoring activities on an industrial site includes one or more decision support systems that are programmed or configured to model behavioral characteristics of objects identified in the site. The decision support systems may be implemented as part of machine learning processor 110C; see par 89 - The system may, for example, monitor the received data for specific conditions, such as employees wearing safety gear in one or more areas. The system may, for example, monitor other conditions and states for compliance and providing indications, notices, reports, etc. corresponding to the analyzed data. Other conditions may also be used to define a specific process protocol. For example, a camera for observing temperature may be used to observe a temperature of personnel and/or equipment. The system may then observe a temperature relative to the object detected and a temperature threshold. For example, for observing personnel, the system may identify a temperature profile as belonging to a person and then measure the temperature against a threshold. see also Iyengar – See par 39 - The State Based Time-Dependent Processing Block may be programmed to measure and track any combination of the following: conformity of features to specific values/value ranges (e.g. location of an object or a group of objects within a certain region of the image; see par 74 - the one or more data sources includes at least one data stream of sequential images and the analyzing the received data comprises defining a state based on an image of the sequential images. The state based determination may include determining a location of an object within a region of the image. The analysis of the data may also include using the state to predict an area of interest in a second image in the sequence of images, and the second image occurs later in time than the image; see also Liu - see par 20 - distributed camera devices 14 can be physically deployed (e.g., physically mounted or positioned) at a fixed or movable locations for monitoring at respective detection zones 26 of a detection region 28; mounted on movable vehicles (e.g., unmanned aerial vehicles, i.e., “drones”) that execute coordinated movement (e.g., “swarming”) to provide monitoring of a remote (or movable) detection region 28 that is reachable by the movable vehicles) and times to collect the data to monitor the historical KPIs (Iyengar – see par 53 - tags may be used to search for specific events and/or may be used to train the system to automatically identify other events. FIG. 18B illustrates an exemplary embodiment in which the epochs of critical events are illustrated on a timeline. As shown, a timeline is provided at a top portion of the screen. 
The occurrence of an event (identified as “Episode” in the illustration) are provided as icons on the timeline. A user may then click on any event (or any portion of the timeline) and initiate one or more videos associated with the selected time. As illustrated, two cameras are selected that correspond to images that contributed to a given “episode”. The system may automatically select one or more camera feeds that may identify or assist the viewer in identifying or understanding the cause of one or more episode. The user may also select one or more cameras to display and/or add or remove one or more cameras from the display for the selected time. see also Liu –see par 39 - The model training module 50 can be configured for executing local recognition “learning” techniques associated with identifying a recognition model, and the feature learning module 52 can be configured for executing local recognition “learning” techniques associated with identifying a recognition feature, for local optimization of recognition techniques by the corresponding distributed camera device 14; see par 51 - a particular video recognition application for immediate deployment or at a future specified time or event (e.g., start real-time intruder detection at end of evening production run); the processor circuit 76 executing the requirements derivation module 34 also can detect the current requirements in response to the context sensing module 56 detecting an identified event from a sensor device, for example from a detected machine sensor indicating a start of a production run, etc). It would have been obvious to combine Man, Iyengar, and Liu for the same reasons as claim 2 and 3 above. Concerning claim 6, Man, Iyengar, and Liu disclose: The computer-implemented method of claim 4, further comprising monitoring a key performance indicator of the activity (Man - see par 94 - Functions for outputs may also include generating a set of performance metrics. Performance metrics may be represented using key performance indicators (“KPIs”). A KPI may be used to measure, for example, efficiency of an industrial process in an industrial site over time; see also Iyengar – see par 36 - the system may be configured to generate a dashboard for display on a visual display. The dashboard may present information to the user, retrieve or display information from the data sources, identify the results of the analytics including, but not limited to, asset effectiveness, issue identification and prioritization, workflow optimization, monitoring, estimation, verification, compliance, presentation, identification, and simulation of what-if scenarios.) by: collecting data using the sensor (Man – see par 42 - a decision support system receives input data collected from an industrial site by specialized data collection devices. Examples of the data collection devices include video cameras, digital sensors, and other types of computer-based data collectors. The devices may be deployed at different locations of an industrial site and may be configured to collect data and transmit the collected data to a computer. see also Iyengar – see par 41 - FIG. 10 provides an exemplary sequence based neural net model to compute process metrics according to embodiments described herein. The neural net model to compute process metrics may be used in place of the block diagram of FIG. 2 or in combination therewith. 
Time sequenced signals from various sources may be fed as input to an Encoder Block (optionally) along with meta information like location of the sensor, process being monitored etc. to the model; see par 44 - FIG. 3 illustrates the exemplary analytics according to embodiments described herein to generate the features and benefits described herein. The computed Process Metrics and other meta information may be captured in a data base for the purpose of displaying and analyzing trends, compute statistics etc. The compute statics block may be used to measure and compute desired statistical trends or other metrics.); presenting the collected data to a user via a dashboard at a client device (Man – see par 92, 118-119 - A decision support system may also be configured to generate outputs such as alarms, warnings, messages, and the like. The outputs may include specifications for displaying the alarms on computer dashboards of a management team in a timely fashion. outputs may also include the specification for defining a size and a resolution for displaying the alarms on, for example, relatively small displays of portable devices such as smartphones. see also Iyengar see par 36, 52 - the system may be configured to generate a dashboard for display on a visual display. see par 92 - The management system may include tiered operational performance dashboard, and a system of cameras, detectors, sensors, and combinations thereof to provide 24 hour, 7 day a week process oversight with abnormal condition notification). It would be obvious to combine Man, Iyengar, and Liu for the same reasons as claims 1 and 2.

Concerning claims 14 and 19, Man, Iyengar, and Liu disclose: The computer program product of claim 11, wherein the program instructions are executable to: determine a key performance indicator of the activity (same as claim 6 above – Man par 94; Iyengar see par 36; see par 41 – compute process metrics for process being monitored, location of sensor); and monitor the key performance indicator using data collected by the sensor (Man – see also Iyengar – see par 41 - FIG. 10 provides an exemplary sequence based neural net model to compute process metrics according to embodiments described herein. The neural net model to compute process metrics may be used in place of the block diagram of FIG. 2 or in combination therewith. Time sequenced signals from various sources may be fed as input to an Encoder Block (optionally) along with meta information like location of the sensor, process being monitored etc. to the model; see par 42 - a decision support system receives input data collected from an industrial site by specialized data collection devices. Examples of the data collection devices include video cameras, digital sensors, and other types of computer-based data collectors. The devices may be deployed at different locations of an industrial site and may be configured to collect data and transmit the collected data to a computer; see par 44 - FIG. 3 illustrates the exemplary analytics according to embodiments described herein to generate the features and benefits described herein. The computed Process Metrics and other meta information may be captured in a data base for the purpose of displaying and analyzing trends, compute statistics etc. The compute statics block may be used to measure and compute desired statistical trends or other metrics). It would be obvious to combine Man, Iyengar, and Liu for the same reasons as claim 6.
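Claims 6, 14, and 19, as mapped above, reduce to a loop: collect data with the deployed sensor, compute a KPI from it, and surface the result on a dashboard at a client device. A minimal sketch of that loop (hypothetical names throughout; the JSON payload shape is an assumption, not anything disclosed in the cited references):

```python
# Toy illustration of the KPI-monitoring limitations of claims 6, 14, and 19
# as mapped above. All identifiers are hypothetical.
import json

def kpi_throughput(event_timestamps: list[float]) -> float:
    """Events per hour, a stand-in for a KPI such as Man's 'amount of
    industrial material moved' (par 94). Timestamps are in seconds."""
    if len(event_timestamps) < 2:
        return 0.0
    span_h = (max(event_timestamps) - min(event_timestamps)) / 3600.0
    return (len(event_timestamps) - 1) / span_h if span_h else 0.0

def dashboard_payload(sensor_id: str, samples: list[float]) -> str:
    """JSON blob a client-side dashboard could render."""
    return json.dumps({
        "sensor": sensor_id,
        "kpi_events_per_hour": round(kpi_throughput(samples), 2),
        "n_samples": len(samples),
    })

print(dashboard_payload("cam-01", [0.0, 900.0, 1800.0, 3600.0]))
```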
Concerning claim 7, Man, Iyengar, and Liu disclose: The computer-implemented method of claim 6, further comprising determining the key performance indicator of the activity based on the analyzing (Man - see par 94 - Functions for outputs may also include generating a set of performance metrics. Performance metrics may be represented using key performance indicators (“KPIs”). A KPI may be used to measure, for example, efficiency of an industrial process in an industrial site over time; see par 95 - As the video streams are provided to computer 110, a decision support system may analyze the KPIs to diagnose the sources and reasons for inefficiencies. For example, the decision support system may determine bottlenecks that are caused by failing to complete certain prerequisite industrial tasks. see also Iyengar – see par 36 - the system may be configured to generate a dashboard for display on a visual display. The dashboard may present information to the user, retrieve or display information from the data sources, identify the results of the analytics including, but not limited to, asset effectiveness, issue identification and prioritization, workflow optimization, monitoring, estimation, verification, compliance, presentation, identification, and simulation of what-if scenarios; see par 44 - . The computed Process Metrics and other meta information may be captured in a data base for the purpose of displaying and analyzing trends, compute statistics etc. The compute statics block may be used to measure and compute desired statistical trends or other metrics. For example, data from the database may be extracted to measure statistical trends and other metrics like probability density functions of time to completion of a specific activity, percentage of time a resource (e.g. worker) spends at a working location, heat maps of movement resources, etc. Based on the cost contribution from each block, the Automated Workflow Optimization Block may rearrange the priorities of resources so as to minimize the total delay cost contribution to the workflow. see par 45 - provide metrics for a user. For example, time to completion for a workstation and/or an entire process or line may be provided. As an example, the delay contributions of a block may be provided. As an example, resource utilization may be provided, such as an in use time or down time of a given machine, person, component part, etc. Exemplary embodiments may provide optimized sequences and/or process steps. Exemplary embodiments may permit a use to redistribute resources and/or add and/or remove resources and run simulations based on history or real time data.). It would be obvious to combine Man, Iyengar, and Liu for the same reasons as claim 1 and 2. Concerning claims 15 and 20, Man, Iyengar, and Liu disclose: The computer program product of claim 14, wherein the program instructions are executable to: create a dashboard that displays data about the monitoring the key performance indicator (Man – see par 92, 118-119 - A decision support system may also be configured to generate outputs such as alarms, warnings, messages, and the like. The outputs may include specifications for displaying the alarms on computer dashboards ; see also Iyengar see par 36, 52 - the system may be configured to generate a dashboard for display on a visual display. 
see par 92 - The management system may include tiered operational performance dashboard, and a system of cameras, detectors, sensors, and combinations thereof to provide 24 hour, 7 day a week process oversight with abnormal condition notification.); and provide the dashboard to a client device (Man – see par 92, 118-119 - A decision support system may also be configured to generate outputs such as alarms, warnings, messages, and the like. The outputs may include specifications for displaying the alarms on computer dashboards of a management team in a timely fashion. outputs may also include the specification for defining a size and a resolution for displaying the alarms on, for example, relatively small displays of portable devices such as smartphones. see also Iyengar see par 36, 52 - the system may be configured to generate a dashboard for display on a visual display. see par 92 - The management system may include tiered operational performance dashboard, and a system of cameras, detectors, sensors, and combinations thereof to provide 24 hour, 7 day a week process oversight with abnormal condition notification; see also Iyengar – see par 36 - the system may be configured to generate a dashboard for display on a visual display). It would be obvious to combine Man, Iyengar, and Liu for the same reasons as claim 6. Concerning claim 8, Man, Iyengar, and Liu disclose The computer-implemented method of claim 1, further comprising: determining the activity comprises a first activity associated with a workflow; determining the workflow comprises a second activity (Man – see par 54 - Data input devices may be configured to collect information about persons, objects and activities taking place in an industrial site. For example, video cameras 102A, 102B, may be configured or programmed to record video segments depicting persons, trucks, and cranes present in an industrial site, store the recorded video segments, and transmit the recorded video segments to computer 110; see also Iyengar – see par 45 - time to completion for a workstation and/or an entire process or line may be provided. As an example, the delay contributions of a block may be provided. As an example, resource utilization may be provided, such as an in use time or down time of a given machine, person, component part, etc. Exemplary embodiments may provide optimized sequences and/or process steps. see par 50 - the system may also provide a feature to optimize the sequence of activities (e.g. manufacturing jobs), in addition to the priorities of resources, based on bottleneck contributions. The sequencing and prioritization may be changed adaptively based on inputs and updates from various data sources. The department specific view or group specific view of optimized workflow provides a visualization of various resources involved in the department. see par 65 - system and methods may include one or more cameras, analyzing the received data from the one or more cameras, analyzing the received data to identify inefficiency events within a process, and visualizing the identified inefficiency events. The system and method may include associating one or more performance metrics to the process); determining a second monitoring boundary for the second activity (Man – see par 141 - cameras are positioned to capture images within a polygon that is spaced apart by a specified distance 1030 outside of industrial site 1010. 
The cameras may be positioned to capture images of persons or pieces of equipment passing through a virtual boundary that surrounds industrial site 1010. The virtual boundary may be either outside or inside industrial site 1010. An outside boundary may be plotted, for example, one meter outside of industrial site 1010. In this arrangement, a collective field of view of the cameras defines a virtual fence 1000 or a virtual boundary around industrial site 1010, and cameras capture images of persons and equipment crossing virtual fence 1000 to either enter industrial site 1010 or leave industrial site 1010; see also Iyengar – see par 71- the incoming data may be used to analyze or predict attributes of the data. Within a single processing frame, one portion of the single processing frame may be used to predict information about another portion of the single processing frame. In an exemplary embodiment, the system and method includes determining an area of interest from a first single processing frame to predict an area of interest in a second single processing frame. Within sequential single processing frames, one portion of a first processing frame may be used to predict information about a second single processing frame; see par 74 - analysis of the data may also include using the state to predict an area of interest in a second image in the sequence of images, and the second image occurs later in time than the image.) determining a second type of information for monitoring the second activity (Man – see par 161 - , a decision support system is configured to recognize one or more types of events that the system needs to detect or track. The types of the events may be defined by the type of sensors used to provide the event-and-sensor specific measurements; see also Iyengar – see par 17 - in the case of Internet of Things (IoT) sensors, we may need one type of sensor to monitor pressure changes and another type of sensor to monitor temperature changes; see par 53 - . FIG. 18A uses the image of FIG. 1 to illustrate a camera feed of an area for sake of illustration. An actual camera feed may show the perspective view of a process area in a specific band or detection method (such as audio, visible light, temperature, etc); generating a recommendation of a second sensor to capture the second type of information (Man - see par 161 - a decision support system is configured to recognize one or more types of events that the system needs to detect or track. The types of the events may be defined by the type of sensors used to provide the event-and-sensor specific measurements see also Iyengar - see par 75 the system may use the states, predictions, aggregated data, areas of interest, analyzed data, object detection, among other analytic tools to keep track a metric corresponding to the process.); and generating a recommendation of a second location of the second sensor in the second monitoring boundary (Man – See par 141 - In an embodiment, the cameras are positioned to capture images within a polygon that is spaced apart by a specified distance 1030 outside of industrial site 1010. The cameras may be positioned to capture images of persons or pieces of equipment passing through a virtual boundary that surrounds industrial site 1010. see par 145 - One or more cameras may be installed at corners of industrial site 1010/1110, or along the sides of industrial site 1010/1110. 
For example, two cameras may be installed at a particular location along a virtual boundary surrounding industrial site 1010/1110: one camera may be installed on a pole one meter above the ground, and another camera may be installed on the same pole two meters above the ground; see par 159 - decision support system may be configured to track the industrial equipment; see also Liu – see par 28 - requirements derivation module 34 can determine the video recognition requirements 60 using, for example, inputs from any one of a context sensing module 56 that detects and processes sensor data from physical sensors in the distributed camera system 10 (e.g., interpreting temperature sensor data as fire alarm data, interpreting lighting changes in image data as person entering a room, motion-sensed data as beginning a manufacturing process, etc.). It would be obvious to combine Man, Iyengar, and Liu for the same reasons as claim 1. Concerning claim 9, Man, Iyengar, and Liu disclose: The computer-implemented method of claim 1, wherein the activity comprises a first activity in a workflow that comprises plural activities (Man [same as claim 8 ]– see par 54 - Data input devices may be configured to collect information about persons, objects and activities taking place in an industrial site. For example, video cameras 102A, 102B, may be configured or programmed to record video segments depicting persons, trucks, and cranes present in an industrial site, store the recorded video segments, and transmit the recorded video segments to computer 110; see also Iyengar [same as claim 8] – see par 45 - time to completion for a workstation and/or an entire process or line may be provided. … optimized sequences and/or process steps. see par 50 - the system may also provide a feature to optimize the sequence of activities (e.g. manufacturing jobs), … The department specific view or group specific view of optimized workflow provides a visualization of various resources involved in the department. see par 65 - performance metrics to the process), and further comprising: determining at least one respective key performance indicator for each of the plural activities based on the analyzing ([same as claim 7 - Man - see par 94 - … A KPI may be used to measure, for example, efficiency of an industrial process in an industrial site over time; see par 95 - As the video streams are provided to computer 110, a decision support system may analyze the KPIs to. see also Iyengar – see par 36; see par 44 - . The computed Process Metrics and other meta information may be captured in a data base for the purpose of displaying and analyzing trends, compute statistics etc.. see par 45 - provide metrics for a user.); determining respective monitoring boundaries for each of the plural activities based on the analyzing (Man – see par 141 - cameras are positioned to capture images within a polygon that is spaced apart by a specified distance 1030 outside of industrial site 1010. The cameras may be positioned to capture images of persons or pieces of equipment passing through a virtual boundary that surrounds industrial site 1010. The virtual boundary may be either outside or inside industrial site 1010. An outside boundary may be plotted, for example, one meter outside of industrial site 1010. 
In this arrangement, a collective field of view of the cameras defines a virtual fence 1000 or a virtual boundary around industrial site 1010, and cameras capture images of persons and equipment crossing virtual fence 1000 to either enter industrial site 1010 or leave industrial site 1010; see also Iyengar – see par 71 – the incoming data may be used to analyze or predict attributes of the data. Within a single processing frame, one portion of the single processing frame may be used to predict information about another portion of the single processing frame. In an exemplary embodiment, the system and method includes determining an area of interest from a first single processing frame to predict an area of interest in a second single processing frame. Within sequential single processing frames, one portion of a first processing frame may be used to predict information about a second single processing frame; see par 74 – analysis of the data may also include using the state to predict an area of interest in a second image in the sequence of images, and the second image occurs later in time than the image);

determining respective sensors for collecting respective types of information for monitoring the at least one respective key performance indicator for each of the plural activities based on the analyzing (Man – see par 161 – a decision support system is configured to recognize one or more types of events that the system needs to detect or track. The types of the events may be defined by the type of sensors used to provide the event-and-sensor specific measurements; see also Iyengar – see par 75 – the system may use the states, predictions, aggregated data, areas of interest, analyzed data, object detection, among other analytic tools to keep track of a metric corresponding to the process); and

deploying the respective sensors to determined locations in the respective monitoring boundaries (Iyengar – see par 20 – Additional sensors and/or monitoring devices may also be used in conjunction with cameras. For example, in critical areas or in any area of interest or as desired based on the process, equipment in use, or any other reason, additional sensors may be included and monitored and analyzed according to embodiments described herein. IoT sensors or IoT-like sensors; see also Liu – see par 20 – The distributed camera devices 14 can be physically deployed (e.g., physically mounted or positioned) at fixed or movable locations for monitoring at respective detection zones 26 of a detection region 28; for example, the distributed camera device 14a can be physically deployed for monitoring the detection zone 26a, the distributed camera device 14b can be physically deployed for monitoring the detection zone 26b, etc., and the distributed camera device 14n can be physically deployed for monitoring the detection zone 26n. The distributed camera devices 14 can be physically mounted at fixed locations in a fixed detection region 28, for example within a manufacturing facility. The distributed camera devices 14 also can be portable, for example carried by one or more users or mounted on movable vehicles (e.g., unmanned aerial vehicles, i.e., “drones”) that execute coordinated movement (e.g., “swarming”) to provide monitoring of a remote (or movable) detection region 28 that is reachable by the movable vehicles).

It would be obvious to combine Man, Iyengar, and Liu for the same reasons as claim 1.
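To make the claim 9 mapping concrete, the pipeline the examiner assembles from Man, Iyengar, and Liu (derive a KPI per activity from historic data, fit a monitoring boundary, pick a sensor type able to measure the KPI, and choose a deployment location) can be sketched in Python. This is an editorial sketch of the claim language only, not the applicant's algorithm or anything disclosed in the references; every name below (Activity, KPI_TO_SENSOR, fit_boundary, plan_deployment) is hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Activity:                      # hypothetical representation of one workflow activity
        name: str
        observations: list               # historic (x, y) positions of the activity
        kpi: str = ""                    # key performance indicator to monitor
        boundary: tuple = ()             # virtual fence: (x_min, y_min, x_max, y_max)
        sensor_type: str = ""
        location: tuple = ()             # recommended deployment point

    KPI_TO_SENSOR = {                    # illustrative mapping only
        "throughput": "camera",
        "temperature_stability": "temperature",
        "line_pressure": "pressure",
    }

    def fit_boundary(points, margin=1.0):
        """Axis-aligned virtual fence around the observed activity, padded by a
        margin (cf. Man's boundary plotted one meter outside the site)."""
        xs, ys = zip(*points)
        return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

    def plan_deployment(activities, kpis):
        """Claim 9 steps, one activity at a time: KPI, boundary, sensor, location."""
        for act, kpi in zip(activities, kpis):
            act.kpi = kpi
            act.boundary = fit_boundary(act.observations)
            act.sensor_type = KPI_TO_SENSOR.get(kpi, "camera")
            x0, y0, x1, y1 = act.boundary
            act.location = ((x0 + x1) / 2, y1)   # e.g., midpoint of the top edge
        return activities

    plan = plan_deployment(
        [Activity("welding", [(0, 0), (2, 1)]), Activity("packing", [(5, 5), (6, 7)])],
        ["temperature_stability", "throughput"],
    )
    for act in plan:
        print(act.name, act.kpi, act.sensor_type, act.boundary, act.location)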
Concerning claim 10, Man, Iyengar, and Liu disclose:

The computer-implemented method of claim 9, further comprising determining a first determined location in a first one of the respective monitoring boundaries overlaps a second determined location in a second one of the respective monitoring boundaries at a common location (Man – see par 89 – FIG. 3 depicts a combined display 302, 304 of several video streams provided to a computer from several video cameras. Combined display 302, 304 may include a plurality of display regions, two of which are depicted in FIG. 3; see par 141 – cameras are positioned to capture images within a polygon that is spaced apart by a specified distance 1030 outside of industrial site 1010. The cameras may be positioned to capture images of persons or pieces of equipment passing through a virtual boundary that surrounds industrial site 1010. The virtual boundary may be either outside or inside industrial site 1010. An outside boundary may be plotted, for example, one meter outside of industrial site 1010), wherein the deploying the respective sensors to determined locations in the respective monitoring boundaries comprises deploying a single sensor to the common location (Iyengar – see par 16 – in the case of cameras or acoustic sensors, precision is inversely proportional to the field of view covered. Hence, most systems are forced to trade-off between the two and pick a compromise. Exemplary embodiments of the connected sensor system provide the best of both worlds by having a set of sensors address precision requirements while other sensors address field of view/scope (e.g. space, frequency coverage etc.) requirements. Accordingly, exemplary embodiments described herein may comprise multiple cameras. The multiple cameras may be connected through processing algorithms such that an output from one camera may inform an input to another camera, and/or may provide control signals to another camera; see par 27 – The cameras may define a field of view that captures one or more branches of a process path). Liu discloses that, in some embodiments, distributed camera devices are “deployed for providing non-overlapping detection zones 26” (see par 20).

It would be obvious to combine Man, Iyengar, and Liu for the same reasons as claim 1. In addition, Man discloses deploying multiple cameras along boundaries surrounding an industrial site (see par 89, 141). Iyengar improves upon Man by acknowledging a trade-off between precision and field of view (see par 16). Liu improves upon Man and Iyengar by disclosing that camera devices can be deployed so each individual camera has a non-overlapping detection zone (see FIG. 1, par 20). One of ordinary skill in the art would be motivated to further include cameras with non-overlapping zones to efficiently improve upon the cameras with boundaries in Man and the cameras installed with a field of view of a process path in Iyengar.
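Claim 10's added limitation (deploying a single sensor to a common location where two monitoring boundaries overlap) reduces, under the same hypothetical axis-aligned representation as the sketch above, to a rectangle-intersection test. Again an illustrative sketch, not anything taught by Man, Iyengar, or Liu:

    def boundaries_overlap(b1, b2):
        """True when two axis-aligned boundaries (x0, y0, x1, y1) share area."""
        return b1[0] < b2[2] and b2[0] < b1[2] and b1[1] < b2[3] and b2[1] < b1[3]

    def common_location(b1, b2):
        """Single deployment point at the center of the overlap region."""
        x0, y0 = max(b1[0], b2[0]), max(b1[1], b2[1])
        x1, y1 = min(b1[2], b2[2]), min(b1[3], b2[3])
        return ((x0 + x1) / 2, (y0 + y1) / 2)

    a, b = (0, 0, 4, 4), (3, 3, 8, 8)
    if boundaries_overlap(a, b):
        print("deploy one sensor at", common_location(a, b))   # (3.5, 3.5)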
Response to Arguments

Applicant's arguments filed 11/3/25 have been fully considered but they are not persuasive and/or are moot in view of the new rejections. With regards to the 103 rejections, Applicant's arguments are moot in view of the new rejections necessitated by the amendments.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVAN R GOLDBERG, whose telephone number is (571) 270-7949. The examiner can normally be reached 8:30 AM to 4:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anita Coupe, can be reached at 571-270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IVAN R GOLDBERG/
Primary Examiner, Art Unit 3619

Prosecution Timeline

Jun 29, 2023
Application Filed
Apr 03, 2025
Non-Final Rejection — §103
Jun 16, 2025
Examiner Interview Summary
Jun 16, 2025
Applicant Interview (Telephonic)
Jul 01, 2025
Response Filed
Aug 28, 2025
Final Rejection — §103
Oct 10, 2025
Examiner Interview Summary
Oct 10, 2025
Applicant Interview (Telephonic)
Nov 03, 2025
Response after Final Action
Nov 21, 2025
Request for Continued Examination
Dec 05, 2025
Response after Non-Final Action
Mar 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596970
SYSTEM AND METHOD FOR INTERMODAL FACILITY MANAGEMENT
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591826
SYSTEM FOR CREATING AND MANAGING ENTERPRISE USER WORKFLOWS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586020
DETERMINING IMPACTS OF WORK ITEMS ON REPOSITORIES
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579493
SYSTEMS AND METHODS FOR CLIENT INTAKE AND MANAGEMENT USING HIERARCHICAL CONFLICT ANALYSIS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12555055
CENTRALIZED ORCHESTRATION OF WORKFLOW COMPONENT EXECUTIONS ACROSS SOFTWARE SERVICES
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
35%
Grant Probability
72%
With Interview (+36.9%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 365 resolved cases by this examiner. Grant probability derived from career allow rate.
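
The with-interview figure composes from the two numbers above: the 35% career allow rate plus the +36.9 percentage-point interview lift. A quick check, assuming the tool simply adds percentage points (its actual model is not disclosed):

    career_allow_rate = 128 / 365      # ≈ 35%, from the examiner's resolved cases
    interview_lift = 0.369             # +36.9 percentage points, per the report

    with_interview = min(career_allow_rate + interview_lift, 1.0)
    print(f"baseline {career_allow_rate:.0%}, with interview {with_interview:.0%}")
    # baseline 35%, with interview 72%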
